Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
10,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The ELS matching procedure
This practical is based on the concepts introduced for optimising electrical contacts in photovoltaic cells. The procedure was published in [J. Mater. Chem. C (2016)](http://pubs.rsc.org/en/content/articlehtml/2016/tc/c5tc04091d).
Step1: Now let's do a proper scan
IP = 5.7 eV
EA = 4.0 eV
Window = 0.25 eV
Insulating threshold = 4.0 eV
Step2: 2. Lattice matching
Background
For stable interfaces there should be an integer relation between the lattice constants of the two surfaces in contact, which allows for perfect matching with minimal strain. Generally a strain value of ~3% is considered acceptable; above this the interface will be incoherent.
This section uses the ASE package to construct the low index surfaces of the materials identified in the electronic step, as well as those of the target material. The code LatticeMatch.py is then used to identify optimal matches.
First we need .cif files of the materials obtained from the electronic matching. These are obtained from the Materials Project website. Most of the .cif files are there already, but we should add Cu$_2$O and GaN, just for practice.
Lattice matching routine
The lattice matching routine involves obtaining reduced cells for each surface and looking for multiples of each side which match. The procedure is described in more detail in our paper.
<img src="Images/lattice_match.gif">
The actual clever stuff of the algorithm comes from a paper from Zur and McGill in J. Appl. Physics (1984).
<img src="Images/ZurMcGill.jpg">
The script
The work is done by a python script called LatticeMatch.py. As input it reads .cif files. It takes a number of flags
Step3: 3. Site matching
So far the interface matching considered only the magnitude of the lattice vectors. It would be nice to be able to include some measure of how well the dangling bonds can passivate one another. We do this by calculating the site overlap. Basically, we determine the undercoordinated surface atoms on each side and project their positions into a 2D plane.
<img src="Images/site_overlap.gif">
We then lay the planes over each other and slide them around until there is the maximum coincidence. We calculate the overlap factor from
$$ ASO = \frac{2S_C}{S_A + S_B}$$
where $S_C$ is the number of overlapping sites in the interface, and $S_A$ and $S_B$ are the number of sites in each surface.
<img src="Images/ASO.gif">
The script
This section can be run in a stand-alone script called csl.py. It relies on a library of the 2D projections of lattice sites from different surfaces, which is called surface_points.py. Currently this contains a number of common materials types, but sometimes must be expanded as new materials are identified from the electronic and lattice steps.
csl.py takes the following input parameters
Step4: All together
The lattice and site examples above give a feel for what is going on. For a proper screening procedure it would be nice to be able to run them together. That's exactly what happens with the LatticeSite.py script. It uses a new class Pair to store and pass information about the interface pairings. This includes the material names, Miller indices of matching surfaces, strains, multiplicities etc.
The LatticeSite.py script takes the same variables as LatticeMatch.py. It just takes a little longer to run, so a bit of patience is required.
This script outputs the standard pair information as well as the site matching factor, which is calculated as
$$ \frac{100\times ASO}{1 + |\epsilon|}$$
where the $ASO$ was defined above, and $\epsilon$ is the average of the $u$ and $v$ strains. The number is a measure of the mechanical stability of an interface. A perfect interface of a material with itself would have a factor of 100.
Where lattices match but no information on the structure of the surface exists it is flagged up. You can always add new surfaces as required. | Python Code:
%%bash
cd Electronic/
python scan_energies.py -h
Explanation: The ELS matching procedure
This practical is based on the concepts introduced for optimising electrical contacts in photovoltaic cells. The procedure was published in [J. Mater. Chem. C (2016)](http://pubs.rsc.org/en/content/articlehtml/2016/tc/c5tc04091d).
<img src="Images/toc.gif">
In this practical we screen electrical contact materials for CH$_3$NH$_3$PbI$_3$. There are three main steps:
* Electronic matching of band energies
* Lattice matching of surface vectors
* Site matching of under-coordinated surface atoms
1. Electronic matching
Background
Effective charge extraction requires a low barrier to electron or hole transport across the interface. This barrier is exponential in the discontinuity of the band energies across the interface. To a first approximation the offset or discontinuity can be estimated by comparing the ionisation potentials (IPs) or electron affinities (EAs) of the two materials; this is known as Anderson's rule.
<img src="Images/anderson.gif">
Here we have collected a database of 173 measured or estimated semiconductor IPs and EAs (CollatedData.txt). We use it as the first step in our screening. The screening is performed by the script scan_energies.py. We enforce several criteria:
The IP and EA of the target material are supplied using the flags -i and -e
The IP/EA must be within a certain range of the target material; by default this is set to 0.5 eV, but it can be controlled by the flag -w. The window is the full width, so the maximum offset is 0.5*window
A selective contact should be a semiconductor, so we apply a criterion based on its band gap. If the gap is too large we consider that it would be an insulator. By default this is set to 4.0 eV and is controlled by the flag -g
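As a rough illustration of how these criteria combine (this is only a sketch, not the actual scan_energies.py logic; the candidate band gap is approximated here as IP - EA):
ip_target, ea_target = 5.7, 4.0   # target values used later in this practical
window, gap_max = 0.5, 4.0

def candidate_ok(ip, ea):
    gap = ip - ea                                        # rough band gap of the candidate
    hole_match = abs(ip - ip_target) <= 0.5 * window     # the window is the full width
    electron_match = abs(ea - ea_target) <= 0.5 * window
    return (hole_match or electron_match) and gap <= gap_max

print(candidate_ok(5.8, 2.0))   # IP within 0.25 eV of target, gap 3.8 eV -> True
print(candidate_ok(7.5, 1.0))   # neither IP nor EA matches, gap 6.5 eV -> False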
End of explanation
%%bash
cd Electronic/
python scan_energies.py -i 5.7 -e 4.0 -w 0.5 -g 4.0
Explanation: Now let's do a proper scan
IP = 5.7 eV
EA = 4.0 eV
Window = 0.25 eV
Insulating threshold = 4.0 eV
End of explanation
%%bash
cd Lattice/
for file in *.cif; do python LatticeMatch.py -a MAPI/CH3NH3PbI3.cif -b $file -s 0.03; done
Explanation: 2. Lattice matching
Background
For stable interfaces there should be an integer relation between the lattice constants of the two surfaces in contact, which allows for perfect matching with minimal strain. Generally a strain value of ~3% is considered acceptable; above this the interface will be incoherent.
This section uses the ASE package to construct the low index surfaces of the materials identified in the electronic step, as well as those of the target material. The code LatticeMatch.py is then used to identify optimal matches.
First we need .cif files of the materials obtained from the electronic matching. These are obtained from the Materials Project website. Most of the .cif files are there already, but we should add Cu$_2$O and GaN, just for practice.
Lattice matching routine
The lattice matching routine involves obtaining reduced cells for each surface and looking for multiples of each side which match. The procedure is described in more detail in our paper.
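The essence of the search can be sketched in a few lines (illustrative only; the real algorithm is the Zur and McGill reduction described next, and the strain definition here is the simplest possible one):
def find_multiples(a, b, max_n=5, strain_tol=0.03):
    # look for small integer multiples m*a and n*b that coincide within the strain tolerance
    matches = []
    for m in range(1, max_n + 1):
        for n in range(1, max_n + 1):
            strain = abs(m * a - n * b) / (n * b)
            if strain <= strain_tol:
                matches.append((m, n, strain))
    return matches

print(find_multiples(6.31, 3.19))   # e.g. rough MAPI pseudo-cubic vs. GaN a-axis values: (1, 2) matches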
<img src="Images/lattice_match.gif">
The actual clever stuff of the algorithm comes from a paper from Zur and McGill in J. Appl. Physics (1984).
<img src="Images/ZurMcGill.jpg">
The script
The work is done by a python script called LatticeMatch.py. As input it reads .cif files. It takes a number of flags:
* -a the file containing the crystallographic information of the first material
* -b the file containing the crystallographic information of the second material
* -s the strain threshold above which to cutoff, defaults to 0.05
* -l the maximum number of times to expand either surface to find matching conditions, defaults to 5
We will run the script in a bash loop to iterate over all interfaces of our contact materials with the (100) and (110) surfaces of pseudo-cubic CH$_3$NH$_3$PbI$_3$. Note that I have made all lattice parameters of CH$_3$NH$_3$PbI$_3$ exactly equal; this is to facilitate the removal of duplicate surfaces by the script.
End of explanation
%%bash
cd Site/
python csl.py -a CH3NH3PbI3 -b GaN -x 110 -y 010 -u 1,3 -v 2,5
Explanation: 3. Site matching
So far the interface matching considered only the magnitude of the lattice vectors. It would be nice to be able to include some measure of how well the dangling bonds can passivate one another. We do this by calculating the site overlap. Basically, we determine the undercoordinated surface atoms on each side and project their positions into a 2D plane.
<img src="Images/site_overlap.gif">
We then lay the planes over each other and slide them around until there is the maximum coincidence. We calculate the overlap factor from
$$ ASO = \frac{2S_C}{S_A + S_B}$$
where $S_C$ is the number of overlapping sites in the interface, and $S_A$ and $S_B$ are the number of sites in each surface.
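A toy version of the overlap count makes the formula concrete (an illustrative sketch only; csl.py also optimises the relative shift of the two planes, which is skipped here):
import numpy as np

def aso(sites_a, sites_b, tol=0.5):
    # count sites of surface A that sit within tol of a site of surface B, then apply the formula
    sites_a, sites_b = np.asarray(sites_a), np.asarray(sites_b)
    s_c = sum(1 for p in sites_a if np.min(np.linalg.norm(sites_b - p, axis=1)) < tol)
    return 2.0 * s_c / (len(sites_a) + len(sites_b))

print(aso([[0, 0], [1, 0], [0, 1]], [[0, 0.1], [1, 0], [2, 2], [3, 3]]))   # 2 coincident sites -> 4/7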
<img src="Images/ASO.gif">
The script
This section can be run in a stand-alone script called csl.py. It relies on a library of the 2D projections of lattice sites from different surfaces, which is called surface_points.py. Currently this contains a number of common materials types, but sometimes must be expanded as new materials are identified from the electronic and lattice steps.
csl.py takes the following input parameters:
* -a The first material to consider
* -b The second material to consider
* -x The first material's Miller index to consider, format : 001
* -y The second material's Miller index to consider, format : 001
* -u The first material's multiplicity, format : 2,2
* -v The second material's multiplicity, format : 2,2
We can run it for one example from the previous step; let's say GaN (010)x(2,5) with CH$_3$NH$_3$PbI$_3$ (110)x(1,3).
End of explanation
%%bash
cd Site/
for file in *cif; do python LatticeSite.py -a MAPI/CH3NH3PbI3.cif -b $file -s 0.03; done
Explanation: All together
The lattice and site examples above give a feel for what is going on. For a proper screening procedure it would be nice to be able to run them together. That's exactly what happens with the LatticeSite.py script. It uses a new class Pair to store and pass information about the interface pairings. This includes the material names, Miller indices of matching surfaces, strains, multiplicities etc.
The LatticeSite.py script takes the same variables as LatticeMatch.py. It just takes a little longer to run, so a bit of patience is required.
This script outputs the standard pair information as well as the site matching factor, which is calculated as
$$ \frac{100\times ASO}{1 + |\epsilon|}$$
where the $ASO$ was defined above, and $\epsilon$ is the average of the $u$ and $v$ strains. The number is a measure of the mechanical stability of an interface. A perfect interface of a material with itself would have a factor of 100.
Where lattices match but no information on the structure of the surface exists it is flagged up. You can always add new surfaces as required.
End of explanation |
10,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sklearn
sklearn.datasets
documentation
Step1: Generating samples
Ways to generate data
Step2: datasets.make_classification
Step3: "Игрушечные" наборы данных
Наборы данных
Step4: Визуализация выбокри
Step5: Бонус | Python Code:
from sklearn import datasets
%pylab inline
Explanation: Sklearn
sklearn.datasets
documentation: http://scikit-learn.org/stable/datasets/
End of explanation
circles = datasets.make_circles()
print "features: {}".format(circles[0][:10])
print "target: {}".format(circles[1][:10])
from matplotlib.colors import ListedColormap
colors = ListedColormap(['red', 'yellow'])
pyplot.figure(figsize(8, 8))
pyplot.scatter(map(lambda x: x[0], circles[0]), map(lambda x: x[1], circles[0]), c = circles[1], cmap = colors)
def plot_2d_dataset(data, colors):
pyplot.figure(figsize(8, 8))
pyplot.scatter(map(lambda x: x[0], data[0]), map(lambda x: x[1], data[0]), c = data[1], cmap = colors)
noisy_circles = datasets.make_circles(noise = 0.15)
plot_2d_dataset(noisy_circles, colors)
Explanation: Generating samples
Ways to generate data:
* make_classification
* make_regression
* make_circles
* make_checkerboard
* etc
datasets.make_circles
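The other generators listed above follow the same calling pattern; for instance, a minimal datasets.make_regression example (an illustrative addition, not part of the original notebook):
X_reg, y_reg = datasets.make_regression(n_samples=100, n_features=1, noise=10.0)
pyplot.scatter(X_reg[:, 0], y_reg)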
End of explanation
simple_classification_problem = datasets.make_classification(n_features = 2, n_informative = 1,
n_redundant = 1, n_clusters_per_class = 1,
random_state = 1 )
plot_2d_dataset(simple_classification_problem, colors)
classification_problem = datasets.make_classification(n_features = 2, n_informative = 2, n_classes = 4,
n_redundant = 0, n_clusters_per_class = 1, random_state = 1)
colors = ListedColormap(['red', 'blue', 'green', 'yellow'])
plot_2d_dataset(classification_problem, colors)
Explanation: datasets.make_classification
End of explanation
iris = datasets.load_iris()
iris
iris.keys()
print iris.DESCR
print "feature names: {}".format(iris.feature_names)
print "target names: {names}".format(names = iris.target_names)
iris.data[:10]
iris.target
Explanation: "Игрушечные" наборы данных
Наборы данных:
* load_iris
* load_boston
* load_diabetes
* load_digits
* load_linnerud
* etc
datasets.load_iris
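The other loaders work the same way; a quick look at load_digits as an example (illustrative, not part of the original notebook):
digits = datasets.load_digits()
print(digits.data.shape)    # (1797, 64)
print(digits.target.shape)  # (1797,)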
End of explanation
from pandas import DataFrame
iris_frame = DataFrame(iris.data)
iris_frame.columns = iris.feature_names
iris_frame['target'] = iris.target
iris_frame.head()
iris_frame.target = iris_frame.target.apply(lambda x : iris.target_names[x])
iris_frame.head()
iris_frame[iris_frame.target == 'setosa'].hist('sepal length (cm)')
pyplot.figure(figsize(20, 24))
plot_number = 0
for feature_name in iris['feature_names']:
for target_name in iris['target_names']:
plot_number += 1
pyplot.subplot(4, 3, plot_number)
pyplot.hist(iris_frame[iris_frame.target == target_name][feature_name])
pyplot.title(target_name)
pyplot.xlabel('cm')
pyplot.ylabel(feature_name[:-4])
Explanation: Visualizing the dataset
End of explanation
import seaborn as sns
sns.pairplot(iris_frame, hue = 'target')
?sns.set()
sns.set(font_scale = 1.3)
data = sns.load_dataset("iris")
sns.pairplot(data, hue = "species")
Explanation: Bonus: the seaborn library
End of explanation |
10,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-block alert-info" style="margin-top
Step1: <a id="ref0"></a>
<h2 align=center>What is Convolution?</h2>
Convolution is a linear operation similar to a linear equation, dot product, or matrix multiplication. Convolution has several advantages for analyzing images. As discussed in the video, convolution preserves the relationship between elements, and it requires fewer parameters than other methods.
You can see the relationship between the different methods that you learned
Step2: Because the parameters in <code>nn.Conv2d</code> are randomly initialized and learned through training, give them some values.
Step3: Create a dummy tensor to represent an image. The shape of the image is (1,1,5,5) where
Step4: Call the object <code>conv</code> on the tensor <code>image</code> as an input to perform the convolution and assign the result to the tensor <code>z</code>.
Step5: The following animation illustrates the process: the kernel performs element-level multiplication on every element of the image in the corresponding region, and the values are then added together. The kernel is then shifted and the process is repeated.
<img src = "https
Step6: Create an image of size 4
Step7: <img src = "https
Step8: <a id="ref2"></a>
<h2 align=center>Stride parameter</h2>
The parameter stride sets how many elements the kernel moves at each shift. As a result, the output size also changes and is given by the following formula
Step9: For an image with a size of 4, calculate the output size
Step10: <a id='ref3'></a>
<h2 align=center>Zero Padding </h2>
As you apply successive convolutions, the image will shrink. You can apply zero padding to keep the image at a reasonable size, which also holds information at the borders.
In addition, you might not get integer values for the size of the output. Consider the following image
Step11: Try performing convolutions with the <code>kernel_size=2</code> and a <code>stride=3</code>. Use these values
Step12: You can add rows and columns of zeros around the image. This is called padding. In the constructor <code>Conv2d</code>, you specify the number of rows or columns of zeros that you want to add with the parameter padding.
For a square image, you merely pad an extra column of zeros to the first column and the last column. Repeat the process for the rows. As a result, for a square image, the width and height is the original size plus 2 x the number of padding elements specified. You can then determine the size of the output after subsequent operations accordingly as shown in the following equation where you determine the size of an image after padding and then applying a convolutions kernel of size K.
$$M'=M+2 \times padding$$
$$M_{new}=M'-K+1$$
Consider the following example
Step13: The process is summarized in the following animation
Step14: Question | Python Code:
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage, misc
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a>
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1 align=center><font size = 5>What's Convolution </h1 >
# Table of Contents
In this lab, you will study convolution and review how the different operations change the relationship between input and output.
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">What is Convolution </a></li>
<li><a href="#ref1">Determining the Size of Output</a></li>
<li><a href="#ref2">Stride</a></li>
<li><a href="#ref3">Zero Padding </a></li>
<li><a href="#ref4">Practice Questions </a></li>
<br>
<p></p>
Estimated Time Needed: <strong>25 min</strong>
</div>
<hr>
Import the following libraries:
End of explanation
conv = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=3)
conv
Explanation: <a id="ref0"></a>
<h2 align=center>What is Convolution?</h2>
Convolution is a linear operation similar to a linear equation, dot product, or matrix multiplication. Convolution has several advantages for analyzing images. As discussed in the video, convolution preserves the relationship between elements, and it requires fewer parameters than other methods.
You can see the relationship between the different methods that you learned:
$$linear \ equation :y=wx+b$$
$$linear\ equation\ with\ multiple \ variables \ where \ \mathbf{x} \ is \ a \ vector \ \mathbf{y}=\mathbf{wx}+b$$
$$ \ matrix\ multiplication \ where \ \mathbf{X} \ in \ a \ matrix \ \mathbf{y}=\mathbf{wX}+\mathbf{b} $$
$$\ convolution \ where \ \mathbf{X} \ and \ \mathbf{Y} \ is \ a \ tensor \ \mathbf{Y}=\mathbf{w}*\mathbf{X}+\mathbf{b}$$
In convolution, the parameter <b>w</b> is called a kernel. You can perform convolution on images where you let the variable image denote the variable X and w denote the parameter.
<img src = "https://ibm.box.com/shared/static/e0xc2oqtolg4p6nfsumcbpix1q5yq2kr.png" width = 500, align = "center">
Create a two-dimensional convolution object by using the constructor Conv2d, the parameter <code>in_channels</code> and <code>out_channels</code> will be used for this section, and the parameter kernel_size will be three.
End of explanation
conv.state_dict()['weight'][0][0]=torch.tensor([[1.0,0,-1.0],[2.0,0,-2.0],[1.0,0.0,-1.0]])
conv.state_dict()['bias'][0]=0.0
conv.state_dict()
Explanation: Because the parameters in <code>nn.Conv2d</code> are randomly initialized and learned through training, give them some values.
End of explanation
image=torch.zeros(1,1,5,5)
image[0,0,:,2]=1
image
Explanation: Create a dummy tensor to represent an image. The shape of the image is (1,1,5,5) where:
(number of inputs, number of channels, number of rows, number of columns )
Set the third column to 1:
End of explanation
z=conv(image)
z
Explanation: Call the object <code>conv</code> on the tensor <code>image</code> as an input to perform the convolution and assign the result to the tensor <code>z</code>.
End of explanation
K=2
conv1 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=K)
conv1.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv1.state_dict()['bias'][0]=0.0
conv1.state_dict()
conv1
Explanation: The following animation illustrates the process: the kernel performs element-level multiplication on every element of the image in the corresponding region, and the values are then added together. The kernel is then shifted and the process is repeated.
<img src = "https://ibm.box.com/shared/static/rko7couafcrtq2449g5bgppvz580vvcs.gif" width = 500, align = "center">
<a id="ref1"></a>
<h2 align=center>Determining the Size of the Output</h2>
The size of the output is an important parameter. In this lab, you will assume square images. For rectangular images, the same formula can be used for each dimension independently.
Let M be the size of the input and K be the size of the kernel. The size of the output is given by the following formula:
$$M_{new}=M-K+1$$
Create a kernel of size 2:
End of explanation
M=4
image1=torch.ones(1,1,M,M)
Explanation: Create an image of size 4:
End of explanation
z1=conv1(image1)
print("z1:",z1)
print("shape:",z1.shape[2:4])
Explanation: <img src = "https://ibm.box.com/shared/static/d6abh5uodf0t5n0dj2imy15nnevpgd2g.png" width = 500, align = "center">
The following equation provides the output:
$$M_{new}=M-K+1$$
$$M_{new}=4-2+1$$
$$M_{new}=3$$
The following animation illustrates the process: The first iteration of the kernel overlay of the images produces one output. As the kernel is of size K, there are M-K elements for the kernel to move in the horizontal direction. The same logic applies to the vertical direction.
<img src = "https://ibm.box.com/shared/static/v1wv0n8jp0xy49fxq9ctonmoii7rtodv.gif" width = 500, align = "center">
Perform the convolution and verify the size is correct:
End of explanation
conv3 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=2,stride=2)
conv3.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv3.state_dict()['bias'][0]=0.0
conv3.state_dict()
Explanation: <a id="ref2"></a>
<h2 align=center>Stride parameter</h2>
The parameter stride sets how many elements the kernel moves at each shift. As a result, the output size also changes and is given by the following formula:
$$M_{new}=\dfrac{M-K}{stride}+1$$
Create a convolution object with a stride of 2:
End of explanation
z3=conv3(image1)
print("z3:",z3)
print("shape:",z3.shape[2:4])
Explanation: For an image with a size of 4, calculate the output size:
$$M_{new}=\dfrac{M-K}{stride}+1$$
$$M_{new}=\dfrac{4-2}{2}+1$$
$$M_{new}=2$$
The following animation illustrates the process: The first iteration of the kernel overlay of the images produces one output. Because the kernel is of size K, there are M-K=2 elements. The stride is 2 because it will move 2 elements at a time. As a result, you divide M-K by the stride value 2:
<img src = "https://ibm.box.com/shared/static/wq8wbqhm4824y1oxpdbol55q645gykg9.gif" width = 500, align = "center">
Perform the convolution and verify the size is correct:
End of explanation
image1
Explanation: <a id='ref3'></a>
<h2 align=center>Zero Padding </h2>
As you apply successive convolutions, the image will shrink. You can apply zero padding to keep the image at a reasonable size, which also holds information at the borders.
In addition, you might not get integer values for the size of the output. Consider the following image:
End of explanation
conv4 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=2,stride=3)
conv4.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv4.state_dict()['bias'][0]=0.0
conv4.state_dict()
z4=conv4(image1)
print("z4:",z4)
print("z4:",z4.shape[2:4])
Explanation: Try performing convolutions with the <code>kernel_size=2</code> and a <code>stride=3</code>. Use these values:
$$M_{new}=\dfrac{M-K}{stride}+1$$
$$M_{new}=\dfrac{4-2}{3}+1$$
$$M_{new}=1.666$$
End of explanation
conv5 = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=2,stride=3,padding=1)
conv5.state_dict()['weight'][0][0]=torch.tensor([[1.0,1.0],[1.0,1.0]])
conv5.state_dict()['bias'][0]=0.0
conv5.state_dict()
z5=conv5(image1)
print("z5:",z5)
print("z5:",z4.shape[2:4])
Explanation: You can add rows and columns of zeros around the image. This is called padding. In the constructor <code>Conv2d</code>, you specify the number of rows or columns of zeros that you want to add with the parameter padding.
For a square image, you merely pad an extra column of zeros to the first column and the last column. Repeat the process for the rows. As a result, for a square image, the width and height is the original size plus 2 x the number of padding elements specified. You can then determine the output size as before: first account for the padding, then apply the convolution formula (including the stride) with a kernel of size K.
$$M'=M+2 \times padding$$
$$M_{new}=\Big\lfloor \dfrac{M'-K}{stride} \Big\rfloor +1$$
Consider the following example:
End of explanation
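A small helper that combines the size formulas above is a convenient check (an illustrative addition, not part of the original lab); note the floor when (M' - K) is not a multiple of the stride:
def conv_output_size(M, K, stride=1, padding=0):
    return (M + 2 * padding - K) // stride + 1

print(conv_output_size(4, 2))                       # 3, as in the first example
print(conv_output_size(4, 2, stride=2))             # 2, as in the stride example
print(conv_output_size(4, 2, stride=3, padding=1))  # 2, matching the shape of z5 above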
Image=torch.randn((1,1,4,4))
Image
Explanation: The process is summarized in the following animation:
<img src = "https://ibm.box.com/shared/static/bn8zszz4lygq7xu9sj0e6eguzpv5jfil.gif" width = 500, align = "center">
<a id='ref4'></a>
<h2 align=center>Practice Question </h2>
A kernel of zeros with a kernel size=3 is applied to the following image:
End of explanation
conv = nn.Conv2d(in_channels=1, out_channels=1,kernel_size=3)
conv.state_dict()['weight'][0][0]=torch.tensor([[0,0,0],[0,0,0],[0,0.0,0]])
conv.state_dict()['bias'][0]=0.0
Explanation: Question: Without using the function, determine what the output values are at each element:
Double-click here for the solution.
<!-- Your answer is below:
As each element of the kernel is zero, and for every output, the image is multiplied by the kernel, the result is always zero
-->
Question: Use the following convolution object to perform convolution on the tensor <code>Image</code>:
End of explanation |
10,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning to speak like Alice
A generative character based language model is created by training an RNN on the text of Alice in Wonderland.
Setup Imports
Step1: Read input
Step2: Build vocabulary lookup tables
Step3: Create training data
We want to create fixed size strings of characters as the input sequence and the following character as the label. So for example, if the input is "the sky was falling", then the following sequence of training chars and label chars would be created
Step4: We now vectorize the input and label chars. Each row of input is represented by seqlen characters, each character is represented as a 1-hot encoding of size vocab_size. Thus the shape of X is (len(input_chars), seqlen, vocab_size).
Each row of the label is a single character, represented by a 1-hot encoding of size vocab_size. The corresponding prediction row (output of the network) would be a dense vector of size vocab_size. Hence the shape of y is (len(input_chars), vocab_size).
Step5: Build the model
Step6: Train Model and Evaluate
We train the model in batches and evaluate the output generated at each step. There is no test set here, so evaluation is manual.
In each iteration, we fit the model for a single epoch, then randomly choose a row from the input_chars, then use it to generate text from the model for the next 100 chars. | Python Code:
from __future__ import division, print_function
from keras.layers.recurrent import SimpleRNN
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils.visualize_util import plot
import numpy as np
%matplotlib inline
Explanation: Learning to speak like Alice
A generative character based language model is created by training an RNN on the text of Alice in Wonderland.
Setup Imports
End of explanation
fin = open("../data/alice_in_wonderland.txt", "rb")
lines = []
for line in fin:
line = line.strip().lower().decode("ascii", "ignore")
if len(line) == 0:
continue
lines.append(line)
fin.close()
text = "".join(lines)
Explanation: Read input
End of explanation
chars = set([c for c in text])
vocab_size = len(chars)
char2index = dict((c, i) for i, c in enumerate(chars))
index2char = dict((i, c) for i, c in enumerate(chars))
Explanation: Build vocabulary lookup tables
End of explanation
seqlen = 10
step = 1
input_chars = []
label_chars = []
for i in range(0, len(text) - seqlen, step):
input_chars.append(text[i:i+seqlen])
label_chars.append(text[i+seqlen])
Explanation: Create training data
We want to create fixed size strings of characters as the input sequence and the following character as the label. So for example, if the input is "the sky was falling", then the following sequence of training chars and label chars would be created:
"the sky wa" => "s"
"he sky was" => " "
"e sky was " => "f"
" sky was f" => "a"
"sky was fa" => "l"
and so on.
End of explanation
X = np.zeros((len(input_chars), seqlen, vocab_size), dtype=np.bool)
y = np.zeros((len(input_chars), vocab_size), dtype=np.bool)
for i, input_char in enumerate(input_chars):
for j, ch in enumerate(input_char):
X[i, j, char2index[ch]] = 1
y[i, char2index[label_chars[i]]] = 1
Explanation: We now vectorize the input and label chars. Each row of input is represented by seqlen characters, each character is represented as a 1-hot encoding of size vocab_size. Thus the shape of X is (len(input_chars), seqlen, vocab_size).
Each row of the label is a single character, represented by a 1-hot encoding of size vocab_size. The corresponding prediction row (output of the network) would be a dense vector of size vocab_size. Hence the shape of y is (len(input_chars), vocab_size).
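A quick sanity check of those shapes (illustrative, not part of the original notebook):
print(X.shape)   # (len(input_chars), seqlen, vocab_size)
print(y.shape)   # (len(input_chars), vocab_size)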
End of explanation
model = Sequential()
model.add(SimpleRNN(512, return_sequences=False, input_shape=(seqlen, vocab_size)))
model.add(Dense(vocab_size))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
Explanation: Build the model
End of explanation
batch_size = 128
for iteration in range(51):
print("=" * 50)
print("Iteration #: %d" % (iteration))
model.fit(X, y, batch_size=batch_size, nb_epoch=1, verbose=0)
# test model
test_idx = np.random.randint(len(input_chars))
test_chars = input_chars[test_idx]
print("Seed: %s" % (test_chars))
print(test_chars, end="")
for i in range(100):
Xtest = np.zeros((1, seqlen, vocab_size))
for i, ch in enumerate(test_chars):
Xtest[0, i, char2index[ch]] = 1
pred = model.predict(Xtest, verbose=0)[0]
ypred = index2char[np.argmax(pred)]
print(ypred, end="")
# move the input one step forward
test_chars = test_chars[1:] + ypred
print()
Explanation: Train Model and Evaluate
We train the model in batches and evaluate the output generated at each step. There is no test set here, so evaluation is manual.
In each iteration, we fit the model for a single epoch, then randomly choose a row from the input_chars, then use it to generate text from the model for the next 100 chars.
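One optional tweak (not in the original notebook): instead of always taking the argmax, the next character can be sampled from the predicted distribution with a temperature, which usually makes the generated text less repetitive. A minimal sketch:
def sample_char(pred, temperature=0.5):
    # e.g. use ypred = sample_char(pred) in place of index2char[np.argmax(pred)]
    pred = np.log(np.asarray(pred, dtype="float64") + 1e-8) / temperature
    pred = np.exp(pred)
    pred = pred / np.sum(pred)
    return index2char[np.argmax(np.random.multinomial(1, pred))]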
End of explanation |
10,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extract Landmarks Data in Melbourne from Wikipedia Interactively
<a id=toc>
Extract landmarks data
Step1: URL for the landmarks in the Melbourne city centre.
Step3: Extract POI coordinates from its Wikipedia page
NOTE that there could be more than one coordinate pair in a page, e.g. Yarra River.
Step5: Extract POI data, e.g. category, name, coordinates, from a HTML string retrieved from Wikipedia.
Step6: Extract POI data from landmarks in Melbourne recorded in this Wikipedia page.
Step7: Interactively check if the portion of HTML contains a category and a list of POIs of that category.
Step8: Latitude/Longitude statistics.
Step9: Scatter plot.
Step10: The outlier is the Harbour Town Docklands in category Shopping, with a coordinates actually in Queensland, the Harbour Town shopping centre in Docklands Victoria was sold in 2014, which could likely result changes of its wiki page.
Filtering out the outliers
Step11: Latitude/Longitude statistics.
Step12: Scatter plot.
Step13: Filtering POIs with the same wikipage and coordinates but associated with several names and categories
Step14: This is a place located at Melbourne CBD, let's choose the second item with category 'Shopping'.
Step15: For a Post Office, Let's choose the second item with category 'Institutions'.
Step17: Check distance between POIs
Step18: POI pairs that are less than 50 metres.
Step19: According to the above wikipage,
- "The Australian Centre for the Moving Image (ACMI) is a ... It is located in Federation Square, in Melbourne".
- "The Ian Potter Centre
Step20: Save POI data to file
Step21: Visualise POIs on map
This is a shared Google map. | Python Code:
%matplotlib inline
import requests, re, os
from bs4 import BeautifulSoup
from bs4.element import Tag
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import lxml
from fastkml import kml, styles
from shapely.geometry import Point
Explanation: Extract Landmarks Data in Melbourne from Wikipedia Interactively
<a id=toc>
Extract landmarks data:
- category,
- name,
- (latitude, longitude)
from Wikipedia page landmarks in Melbourne in an interactive way.
End of explanation
#url = 'https://en.wikipedia.org/wiki/Template:Melbourne_landmarks'
url = 'https://en.wikipedia.org/wiki/Template:Melbourne_landmarks?action=render' # cleaner HTML
data_dir = '../data'
fpoi = os.path.join(data_dir, 'poi-Melb-0.csv')
response = requests.get(url, timeout=10)
html = response.text
soup = BeautifulSoup(html, 'html.parser')
#print(soup.prettify())
Explanation: URL for the landmarks in the Melbourne city centre.
End of explanation
def extract_coord(url):
Assume a URL of a location with a Wikipedia page
url1 = url + '?action=render' # cleaner HTML
response = requests.get(url1, timeout=10)
html = response.text
soup = BeautifulSoup(html, 'html.parser')
coords = list(soup.find_all('span', {'class':'geo-dec'}))
if coords is None or len(coords) == 0:
print('No Geo-coordinates found')
return
idx = 0
if len(coords) > 1:
if len(coords) == 2 and coords[0].string == coords[1].string:
idx = 0
else:
print('WARN: more than one geo-coordinates detected!')
print('please check the actual page', url)
for i, c in enumerate(coords):
print('%d: %s' % (i, c.string))
ii = input('Input the index of the correct coordinates... ')
idx = int(ii)
assert(0 <= idx < len(coords))
coord = coords[idx]
children = list(coord.children)
assert(len(children) > 0)
coordstr = children[0]
#print(coordstr)
ss = re.sub(r'\s+', ',', coordstr).split(',') # replace blank spaces with ','
assert(len(ss) == 2)
latstr = ss[0].split('°') # e.g. 37.82167°S
lonstr = ss[1].split('°') # e.g. 144.96778°E
assert(len(latstr) == 2 and len(lonstr) == 2)
lat = float(latstr[0]) if latstr[1] == 'N' else -1 * float(latstr[0])
lon = float(lonstr[0]) if lonstr[1] == 'E' else -1 * float(lonstr[0])
print(lat, lon)
return (lat, lon, url)
extract_coord('https://en.wikipedia.org/wiki/Yarra_River')
Explanation: Extract POI coordinates from its Wikipedia page
NOTE that there could be more than one coordinate pairs exists in a page, e.g. Yarra River.
End of explanation
def extract_poi(html):
Assume POI category is a string in <th>
POI name and hyperlink is in <li> contained in an unordered list <ul>
soup = BeautifulSoup(html, 'html.parser')
th = soup.find('th')
if th is None:
print('NO POI category found')
return
assert(len(th.contents) > 0)
cat = th.contents[0]
print('CAT:', cat)
ul = soup.find('ul')
if ul is None:
print('NO POI found')
return
poi_data = [] # (name, cat, lat, lon, url)
for li in ul.children:
#print(type(li), li)
if isinstance(li, Tag):
addr = ''.join(['https:', li.a['href']])
children = list(li.a.children)
assert(len(children) > 0)
name = children[0]
print(addr, name)
ret = extract_coord(addr)
if ret is not None:
poi_data.append((name, cat, ret[0], ret[1], ret[2]))
return poi_data
Explanation: Extract POI data, e.g. category, name, coordinates, from a HTML string retrieved from Wikipedia.
End of explanation
#columns = ['Name', 'Category', 'Latitude', 'Longitude']
columns = ['poiName', 'poiTheme', 'poiLat', 'poiLon', 'poiURL']
poi_df = pd.DataFrame(columns=columns)
table = soup.find('table', {'class':'navbox-inner'}) # this class info was found by looking at the raw HTML text
Explanation: Extract POI data from landmarks in Melbourne recorded in this Wikipedia page.
End of explanation
cnt = 0
hline = '-'*90
for c in table.children:
print(hline)
print('NODE %d BEGIN' % cnt)
print(c)
print('NODE %d END' % cnt)
print(hline)
k = input('Press [Y] or [y] to extract POI, press any other key to ignore ')
if k == 'Y' or k == 'y':
print('Extracting POI...')
poi_data = extract_poi(str(c))
for t in poi_data: poi_df.loc[poi_df.shape[0]] = [t[i] for i in range(len(t))]
else:
print('IGNORED.')
print('\n\n')
cnt += 1
Explanation: Interactively check if the portion of HTML contains a category and a list of POIs of that category.
End of explanation
poi_df.head()
print('#POIs:', poi_df.shape[0])
print('Latitude Range:', poi_df['poiLat'].max() - poi_df['poiLat'].min())
poi_df['poiLat'].describe()
print('Longitude Range:', poi_df['poiLon'].max() - poi_df['poiLon'].min())
poi_df['poiLon'].describe()
Explanation: Latitude/Longitude statistics.
End of explanation
plt.figure(figsize=[10, 10])
plt.scatter(poi_df['poiLat'], poi_df['poiLon'])
Explanation: Scatter plot.
End of explanation
lat_range = [-39, -36]
lon_range = [143, 147]
poi_df = poi_df[poi_df['poiLat'] > min(lat_range)]
poi_df = poi_df[poi_df['poiLat'] < max(lat_range)]
poi_df = poi_df[poi_df['poiLon'] > min(lon_range)]
poi_df = poi_df[poi_df['poiLon'] < max(lon_range)]
Explanation: The outlier is the Harbour Town Docklands in category Shopping, with coordinates actually in Queensland; the Harbour Town shopping centre in Docklands, Victoria was sold in 2014, which likely resulted in changes to its wiki page.
Filtering out the outliers
End of explanation
print('#POIs:', poi_df.shape[0])
print('Latitude Range:', poi_df['poiLat'].max() - poi_df['poiLat'].min())
poi_df['poiLat'].describe()
print('Longitude Range:', poi_df['poiLon'].max() - poi_df['poiLon'].min())
poi_df['poiLon'].describe()
Explanation: Latitude/Longitude statistics.
End of explanation
plt.figure(figsize=[10, 10])
plt.scatter(poi_df['poiLat'], poi_df['poiLon'])
Explanation: Scatter plot.
End of explanation
print('#POIs:', poi_df.shape[0])
print('#URLs:', poi_df['poiURL'].unique().shape[0])
duplicated = poi_df['poiURL'].duplicated()
duplicated[duplicated == True]
print(poi_df.loc[15, 'poiURL'])
poi_df[poi_df['poiURL'] == poi_df.loc[15, 'poiURL']]
Explanation: Filtering POIs with the same wikipage and coordinates but associated with several names and categories
End of explanation
poi_df.drop(4, axis=0, inplace=True)
poi_df.head()
print(poi_df.loc[37, 'poiURL'])
poi_df[poi_df['poiURL'] == poi_df.loc[37, 'poiURL']]
Explanation: This is a place located at Melbourne CBD, let's choose the second item with category 'Shopping'.
End of explanation
poi_df.drop(19, axis=0, inplace=True)
poi_df.head(20)
Explanation: For a Post Office, Let's choose the second item with category 'Institutions'.
End of explanation
def calc_dist_vec(longitudes1, latitudes1, longitudes2, latitudes2):
Calculate the distance (unit: km) between two places on earth, vectorised
# convert degrees to radians
lng1 = np.radians(longitudes1)
lat1 = np.radians(latitudes1)
lng2 = np.radians(longitudes2)
lat2 = np.radians(latitudes2)
radius = 6371.0088 # mean earth radius, en.wikipedia.org/wiki/Earth_radius#Mean_radius
# The haversine formula, en.wikipedia.org/wiki/Great-circle_distance
dlng = np.fabs(lng1 - lng2)
dlat = np.fabs(lat1 - lat2)
dist = 2 * radius * np.arcsin( np.sqrt(
(np.sin(0.5*dlat))**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(0.5*dlng))**2 ))
return dist
poi_dist_df = pd.DataFrame(data=np.zeros((poi_df.shape[0], poi_df.shape[0]), dtype=np.float), \
index=poi_df.index, columns=poi_df.index)
for ix in poi_df.index:
dists = calc_dist_vec(poi_df.loc[ix, 'poiLon'], poi_df.loc[ix, 'poiLat'], poi_df['poiLon'], poi_df['poiLat'])
poi_dist_df.loc[ix] = dists
Explanation: Check distance between POIs
End of explanation
check_ix = []
for i in range(poi_df.index.shape[0]):
for j in range(i+1, poi_df.index.shape[0]):
if poi_dist_df.iloc[i, j] < 0.05: # less 50m
check_ix = check_ix + [poi_df.index[i], poi_df.index[j]]
print(poi_df.index[i], poi_df.index[j])
poi_df.loc[check_ix]
print(poi_df.loc[33, 'poiURL'])
print(poi_df.loc[35, 'poiURL'])
print(poi_df.loc[76, 'poiURL'])
Explanation: POI pairs that are less than 50 metres.
End of explanation
poi_df.drop(33, axis=0, inplace=True)
poi_df.drop(35, axis=0, inplace=True)
poi_df.head(35)
Explanation: According to the above wikipage,
- "The Australian Centre for the Moving Image (ACMI) is a ... It is located in Federation Square, in Melbourne".
- "The Ian Potter Centre: NGV Australia houses the Australian part of the art collection of the National Gallery of Victoria (NGV). It is located at Federation Square in Melbourne ..."
So let's just keep the Federation Square.
End of explanation
#poi_ = poi_df[['poiTheme', 'poiLon', 'poiLat']].copy()
poi_ = poi_df.copy()
poi_.reset_index(inplace=True)
poi_.drop('index', axis=1, inplace=True)
poi_.index.name = 'poiID'
poi_
poi_.to_csv(fpoi, index=True)
#poi_df.to_csv(fpoi, index=False)
Explanation: Save POI data to file
End of explanation
def generate_kml(fname, poi_df):
k = kml.KML()
ns = '{http://www.opengis.net/kml/2.2}'
styid = 'style1'
# colors in KML: aabbggrr, aa=00 is fully transparent
sty = styles.Style(id=styid, styles=[styles.LineStyle(color='9f0000ff', width=2)]) # transparent red
doc = kml.Document(ns, '1', 'POIs', 'POIs visualization', styles=[sty])
k.append(doc)
# Placemark for POIs
for ix in poi_df.index:
name = poi_df.loc[ix, 'poiName']
cat = poi_df.loc[ix, 'poiTheme']
lat = poi_df.loc[ix, 'poiLat']
lon = poi_df.loc[ix, 'poiLon']
desc = ''.join(['POI Name: ', name, '<br/>Category: ', cat, '<br/>Coordinates: (%f, %f)' % (lat, lon)])
pm = kml.Placemark(ns, str(ix), name, desc, styleUrl='#' + styid)
pm.geometry = Point(lon, lat)
doc.append(pm)
# save to file
kmlstr = k.to_string(prettyprint=True)
with open(fname, 'w') as f:
f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
f.write(kmlstr)
generate_kml('./poi.kml', poi_df)
Explanation: Visualise POIs on map
This is a shared Google map.
End of explanation |
10,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization 1
Step1: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
Step2: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
# This assignment wasn't graded for some reason, having a 0.0/0.0 score. Resubmitting the assignment for grading on this one
# as well as on the Theory and Practice problems.
plt.xlabel('Random x-data')
plt.ylabel('Random y-data')
plt.title('Random Numbers')
plt.scatter(np.random.randn(50),np.random.randn(50), color='b', alpha=.75);
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
bins=20
plt.hist(np.random.randn(1000),bins, color='r')
plt.ylabel('Occurrence of number')
plt.xlabel('Random Numbers')
plt.title('Random Number Histogram');
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation |
10,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word embeddings
Import various modules that we need for this notebook (now using Keras 1.0.0)
Step1: Load the IMDB dataset and pad each review to a fixed length of 100.
I. Example using word embedding
We read in the IMDB dataset, using the next 500 most commonly used terms.
Step2: Let's look at one sample from X_train and the first 10 elements of y_train. The codes give indicies for the word in the vocabulary (unfortunately, we do not have access to the vocabulary for this set).
Step3: We now construct a model, the layer of which is a vector embedding. We then have a dense layer and then the activation layer. Notice that the output of the Embedding needs to be Flattened.
Step4: The accuracy is not terribly, and certainly better than random guessing, but the model is clearly overfitting. To test your understanding, would you have been able to guess the sizes of the weights in these layers? Where does the 3200 comes from the first Dense layer?
Step5: II. Word embedding with 1D Convolutions
We can use 1-dimensional convolutions to learn local associations between words, rather than having to rely on global associations.
Step6: The performance is significantly improved, and could be much better if we further tweaked the parameters and constructed a deeper model.
III. Reuters classification
Let's use the same approach to do document classification on the Reuters corpus.
Step7: The results are less impressive than they may at first seem, as the majority of the articles are in one of three categories.
IV. word2vec
Let's load in the pre-learned word2vec embeddings.
Step8: Now, let's repeate with country clubs.
Step9: And, just because I think this is fun, let's run this on a smaller set of counties and their capitals.
Step10: Look how the line between country and capital has roughly the same slope and length for all of the pairs.
It is by no means fast (the algorithm is horribly implemented in gensim) but we can also do the reverse, and find the closest words in the embedding space to a given term | Python Code:
%pylab inline
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.datasets import imdb, reuters
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils
from keras.layers.convolutional import Convolution1D, MaxPooling1D, ZeroPadding1D, AveragePooling1D
from keras.callbacks import EarlyStopping
from keras.layers.normalization import BatchNormalization
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from gensim.models import word2vec
Explanation: Word embeddings
Import various modules that we need for this notebook (now using Keras 1.0.0)
End of explanation
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=500, maxlen=100, test_split=0.2)
X_train = sequence.pad_sequences(X_train, maxlen=100)
X_test = sequence.pad_sequences(X_test, maxlen=100)
Explanation: Load the IMDB dataset, keeping the 500 most common words, and pad each review to a fixed length of 100.
I. Example using word embedding
We read in the IMDB dataset, using the next 500 most commonly used terms.
End of explanation
print(X_train[0])
print(y_train[:10])
Explanation: Let's look at one sample from X_train and the first 10 elements of y_train. The codes give indicies for the word in the vocabulary (unfortunately, we do not have access to the vocabulary for this set).
End of explanation
model = Sequential()
model.add(Embedding(500, 32, input_length=100))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1,
validation_data=(X_test, y_test))
Explanation: We now construct a model, the first layer of which is a vector embedding. We then have a dense layer and then the activation layer. Notice that the output of the Embedding needs to be Flattened.
End of explanation
print(model.layers[0].get_weights()[0].shape) # Embedding
print(model.layers[3].get_weights()[0].shape) # Dense(256)
print(model.layers[6].get_weights()[0].shape) # Dense(1)
Explanation: The accuracy is not terrible, and certainly better than random guessing, but the model is clearly overfitting. To test your understanding, would you have been able to guess the sizes of the weights in these layers? Where does the 3200 come from in the first Dense layer?
End of explanation
model = Sequential()
# embedding
model.add(Embedding(500, 32, input_length=100))
model.add(Dropout(0.25))
# convolution layers
model.add(Convolution1D(nb_filter=32,
filter_length=4,
border_mode='valid',
activation='relu'))
model.add(MaxPooling1D(pool_length=2))
# dense layers
model.add(Flatten())
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
# output layer
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=15, verbose=1,
validation_data=(X_test, y_test))
Explanation: II. Word embedding with 1D Convolutions
We can use 1-dimensional convolutions to learn local associations between words, rather than having to rely on global associations.
End of explanation
(X_train, y_train), (X_test, y_test) = reuters.load_data(nb_words=500, maxlen=100, test_split=0.2)
X_train = sequence.pad_sequences(X_train, maxlen=100)
X_test = sequence.pad_sequences(X_test, maxlen=100)
Y_train = np_utils.to_categorical(y_train, 46)
Y_test = np_utils.to_categorical(y_test, 46)
model = Sequential()
# embedding
model.add(Embedding(500, 32, input_length=100))
model.add(Dropout(0.25))
# convolution layers
model.add(Convolution1D(nb_filter=32,
filter_length=4,
border_mode='valid',
activation='relu'))
model.add(MaxPooling1D(pool_length=2))
# dense layers
model.add(Flatten())
model.add(Dense(256))
model.add(Dropout(0.25))
model.add(Activation('relu'))
# output layer
model.add(Dense(46))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=32, nb_epoch=15, verbose=1,
validation_data=(X_test, Y_test))
Explanation: The performance is significantly improved, and could be much better if we further tweaked the parameters and constructed a deeper model.
III. Reuters classification
Let's use the same approach to do document classification on the Reuters corpus.
End of explanation
loc = "/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin"
model = word2vec.Word2Vec.load_word2vec_format(loc, binary=True)
jobs = ["professor", "teacher", "actor", "clergy", "musician", "philosopher",
"writer", "singer", "dancers", "model", "anesthesiologist", "audiologist",
"chiropractor", "optometrist", "pharmacist", "psychologist", "physician",
"architect", "firefighter", "judges", "lawyer", "biologist", "botanist",
"ecologist", "geneticist", "zoologist", "chemist", "programmer", "designer"]
print(model[jobs[0]].shape)
print(model[jobs[0]][:25])
embedding = np.array([model[x] for x in jobs])
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(embedding)
embedding_pca = np.transpose(pca.transform(embedding))
embedding_pca.shape
plt.figure(figsize=(16, 10))
plt.scatter(embedding_pca[0], embedding_pca[1], alpha=0)
for index,(x,y) in enumerate(np.transpose(embedding_pca)):
plt.text(x,y,jobs[index])
Explanation: The results are less impressive than they may at first seem, as the majority of the articles are in one of three categories.
IV. word2vec
Let's load in the pre-learned word2vec embeddings.
End of explanation
country = ["United_States", "Afghanistan", "Albania", "Algeria", "Andorra", "Angola", "Argentina",
"Armenia", "Australia", "Austria", "Azerbaijan", "Bahrain", "Bangladesh",
"Barbados", "Belarus", "Belgium", "Belize", "Benin", "Bhutan",
"Bolivia", "Botswana", "Brazil", "Brunei", "Bulgaria", "Burundi",
"Cambodia", "Cameroon", "Canada", "Chad", "Chile", "Colombia",
"Comoros", "Croatia", "Cuba", "Cyprus", "Denmark", "Djibouti",
"Dominica", "Ecuador", "Egypt", "Eritrea", "Estonia", "Ethiopia",
"Fiji", "Finland", "France", "Gabon", "Georgia", "Germany", "Ghana",
"Greece", "Grenada", "Guatemala", "Guinea",
"Guyana", "Haiti", "Honduras", "Hungary", "Iceland", "India",
"Indonesia", "Iran", "Iraq", "Ireland", "Israel", "Italy", "Jamaica",
"Japan", "Jordan", "Kazakhstan", "Kenya", "Kiribati", "Kuwait",
"Kyrgyzstan", "Laos", "Latvia", "Lebanon", "Lesotho", "Liberia",
"Libya", "Liechtenstein", "Lithuania", "Luxembourg", "Macedonia",
"Madagascar", "Malawi", "Malaysia", "Maldives", "Mali", "Malta",
"Mauritania", "Mauritius", "Mexico", "Micronesia", "Moldova",
"Monaco", "Mongolia", "Montenegro", "Morocco", "Mozambique",
"Namibia", "Nauru", "Nepal", "Netherlands", "Nicaragua", "Niger",
"Nigeria", "Norway", "Oman", "Pakistan", "Palau", "Panama", "Paraguay",
"Peru", "Philippines", "Poland", "Portugal", "Qatar", "Romania",
"Russia", "Rwanda", "Samoa", "Senegal", "Serbia", "Seychelles",
"Singapore", "Slovakia", "Slovenia", "Somalia", "Spain", "Sudan",
"Suriname", "Swaziland", "Sweden", "Switzerland", "Syria", "Tajikistan",
"Tanzania", "Thailand", "Togo", "Tonga", "Tunisia", "Turkey",
"Turkmenistan", "Tuvalu", "Uganda", "Ukraine", "Uruguay", "Uzbekistan",
"Vanuatu", "Venezuela", "Vietnam", "Yemen", "Zambia", "Zimbabwe",
"Abkhazia", "Somaliland", "Mayotte", "Niue",
"Tokelau", "Guernsey", "Jersey", "Anguilla", "Bermuda", "Gibraltar",
"Montserrat", "Guam", "Macau", "Greenland", "Guadeloupe", "Martinique",
"Reunion", "Aland", "Aruba", "Svalbard", "Ascension"]
embedding = np.array([model[x] for x in country])
pca = PCA(n_components=2)
pca.fit(embedding)
embedding_pca = np.transpose(pca.transform(embedding))
embedding_pca.shape
plt.figure(figsize=(16, 10))
plt.scatter(embedding_pca[0], embedding_pca[1], alpha=0)
for index,(x,y) in enumerate(np.transpose(embedding_pca)):
plt.text(x,y,country[index])
Explanation: Now, let's repeat the exercise with countries.
End of explanation
city_pairs = ["Afghanistan", "Belarus", "Belgium", "Brazil", "Costa_Rica",
"Canada", "Netherlands", "United_Kingdom", "United_States", "Iran", "Kabul",
"Minsk", "Brussels", "Brasilia", "San_Jose", "Ottawa", "Amsterdam",
"London", "Washington", "Tehran"]
embedding = np.array([model[x] for x in city_pairs])
pca = PCA(n_components=2)
pca.fit(embedding)
embedding_pca = np.transpose(pca.transform(embedding))
embedding_pca.shape
plt.figure(figsize=(16, 10))
plt.scatter(embedding_pca[0], embedding_pca[1], alpha=0)
for index,(x,y) in enumerate(np.transpose(embedding_pca)):
plt.text(x,y,city_pairs[index])
Explanation: And, just because I think this is fun, let's run this on a smaller set of countries and their capitals.
End of explanation
these = model.most_similar('Afghanistan', topn=25)
for th in these:
print("%02.04f - %s" % th[::-1])
Explanation: Look how the line between country and capital has roughly the same slope and length for all of the pairs.
It is by no means fast (the algorithm is horribly implemented in gensim) but we can also do the reverse, and find the closest words in the embedding space to a given term:
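As an illustrative follow-up (not in the original notebook), those parallel country-capital offsets are exactly what make the classic analogy arithmetic work, e.g. France : Paris :: Germany : ?
model.most_similar(positive=['Paris', 'Germany'], negative=['France'], topn=5)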
End of explanation |
10,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load and Split Kaggle Data
Step1: Build baseline text classification model in Sklearn
Step2: This is about as good as the best Kagglers report they did.
Step3: Score Random Wikipedia User Talk Comments
Let's take a random sample of user talk comments, apply the insult model trained on Kaggle, and see what we find.
Step4: The distribution over insult probabilities in the two datasets is radically different. Insults in the Wikipedia dataset are much rarer
Step5: Check High Scoring Comments
Step6: Score Blocked Users' User Talk Comments
Step7: Check High Scoring Comments
Step8: Scratch | Python Code:
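# Assumed imports for the cells below; the original excerpt does not show its setup cell,
# so the module names here are a best guess rather than the author's actual code.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import joblib
import sklearn.feature_extraction.text
import sklearn.linear_model
import sklearn.model_selection
from sklearn.pipeline import Pipeline
import tensorflow as tf  # the scratch cell at the end assumes the TensorFlow 1.x API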
data_filename = '../data/train.csv'
data_df = pd.read_csv(data_filename)
corpus = data_df['Comment']
labels = data_df['Insult']
train_corpus, test_corpus, train_labels, test_labels = \
sklearn.model_selection.train_test_split(corpus, labels, test_size=0.33)
Explanation: Load and Split Kaggle Data
End of explanation
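# The helper functions cv(), get_scores() and auc() used below are not defined in this
# excerpt. A minimal sketch of what they might look like (assumed, not the original code):
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

def cv(X, y, n_folds, pipeline, param_grid, scoring, verbose, n_jobs=1):
    # grid search over the pipeline hyper-parameters with k-fold cross-validation
    search = GridSearchCV(pipeline, param_grid, cv=n_folds, scoring=scoring,
                          verbose=int(verbose), n_jobs=n_jobs)
    search.fit(X, y)
    return search.best_estimator_

def get_scores(model, corpus):
    # probability of the positive (insult) class for each document
    return model.predict_proba(corpus)[:, 1]

def auc(y_true, scores):
    return roc_auc_score(y_true, scores)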
pipeline = Pipeline([
('vect', sklearn.feature_extraction.text.CountVectorizer()),
('tfidf', sklearn.feature_extraction.text.TfidfTransformer(sublinear_tf=True,norm='l2')),
('clf', sklearn.linear_model.LogisticRegression()),
])
param_grid = {
#'vect__max_df': (0.5, 0.75, 1.0),
#'vect__max_features': (None, 5000, 10000, 50000),
'vect__ngram_range': ((1, 1), (2, 2), (1,4)), # unigrams or bigrams
#'vect_lowercase': (True, False),
'vect__analyzer' : ('char',), #('word', 'char')
#'tfidf__use_idf': (True, False),
#'tfidf__norm': ('l1', 'l2'),
#'clf__penalty': ('l2', 'elasticnet'),
#'clf__n_iter': (10, 50, 80),
'clf__C': [0.1, 1, 5, 50, 100, 1000, 5000],
}
model = cv (train_corpus, train_labels.values, 5, pipeline, param_grid, 'roc_auc', False, n_jobs=8)
# Hold out set Perf
auc(test_labels.values,get_scores(model, test_corpus))
Explanation: Build baseline text classification model in Sklearn
End of explanation
joblib.dump(model, '../models/kaggle_ngram.pkl')
Explanation: This is about as good as the results the best Kagglers report.
End of explanation
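# Quick check (assumed, not in the original notebook): the persisted pipeline can be
# reloaded later and applied directly to raw text.
reloaded = joblib.load('../models/kaggle_ngram.pkl')
reloaded.predict_proba(["you are a complete idiot", "thanks for the helpful edit"])[:, 1]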
d_wiki = pd.read_csv('../../wikipedia/data/100k_user_talk_comments.tsv', sep = '\t').dropna()[:10000]
d_wiki['prob'] = model.predict_proba(d_wiki['diff'])[:,1]
d_wiki.sort_values('prob', ascending=False, inplace=True)
_ = plt.hist(d_wiki['prob'].values)
plt.xlabel('Insult Prob')
plt.title('Wikipedia Score Distribution')
_ = plt.hist(model.predict_proba(train_corpus)[:, 1])
plt.xlabel('Insult Prob')
plt.title('Kaggle Score Distribution')
Explanation: Score Random Wikipedia User Talk Comments
Let's take a random sample of user talk comments, apply the insult model trained on Kaggle, and see what we find.
End of explanation
"%0.2f%% of random wiki comments are predicted to be insults" % ((d_wiki['prob'] > 0.5).mean() * 100)
Explanation: The distribution over insult probabilities in the two datasets is radically different. Insults in the Wikipedia dataset are much rarer
End of explanation
for i in range(5):
print(d_wiki.iloc[i]['prob'], d_wiki.iloc[i]['diff'], '\n')
for i in range(50, 55):
print(d_wiki.iloc[i]['prob'], d_wiki.iloc[i]['diff'], '\n')
for i in range(100, 105):
print(d_wiki.iloc[i]['prob'], d_wiki.iloc[i]['diff'], '\n')
Explanation: Check High Scoring Comments
End of explanation
d_wiki_blocked = pd.read_csv('../../wikipedia/data/blocked_users_user_talk_page_comments.tsv', sep = '\t').dropna()[:10000]
d_wiki_blocked['prob'] = model.predict_proba(d_wiki_blocked['diff'])[:,1]
d_wiki_blocked.sort_values('prob', ascending=False, inplace=True)
"%0.2f%% of random wiki comments are predicted to be insults" % ((d_wiki_blocked['prob'] > 0.5).mean() * 100)
Explanation: Score Blocked Users' User Talk Comments
End of explanation
for i in range(5):
print(d_wiki_blocked.iloc[i]['prob'], d_wiki_blocked.iloc[i]['diff'], '\n')
for i in range(50, 55):
print(d_wiki_blocked.iloc[i]['prob'], d_wiki_blocked.iloc[i]['diff'], '\n')
for i in range(100, 105):
print(d_wiki_blocked.iloc[i]['prob'], d_wiki_blocked.iloc[i]['diff'], '\n')
Explanation: Check High Scoring Comments
End of explanation
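# The scratch TensorFlow cell below refers to X_train, X_test, y_train, y_test and a
# batch_iter() helper that are not defined in this excerpt. One possible (assumed) setup:
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(analyzer='char', ngram_range=(1, 4))
X_train = vectorizer.fit_transform(train_corpus)
X_test = vectorizer.transform(test_corpus)
y_train = train_labels.values
y_test = test_labels.values

def batch_iter(X, y, batch_size):
    # yield successive (features, labels) mini-batches
    for start in range(0, len(y), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]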
isinstance(y_train, np.ndarray)
y_train = np.array([y_train, 1- y_train]).T
y_test = np.array([y_test, 1- y_test]).T
# Parameters
learning_rate = 0.001
training_epochs = 60
batch_size = 200
display_step = 5
# Network Parameters
n_hidden_1 = 100 # 1st layer num features
n_hidden_2 = 100 # 2nd layer num features
n_hidden_3 = 100 # 3rd layer num features
n_input = X_train.shape[1]
n_classes = 2
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
# Create model
def LG(_X, _weights, _biases):
return tf.matmul(_X, _weights['out']) + _biases['out']
# Store layers weight & bias
weights = {
'out': tf.Variable(tf.random_normal([n_input, n_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct model
pred = LG(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
sess = tf.Session()
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
m = 0
batches = batch_iter(X_train.toarray(), y_train, batch_size)
# Loop over all batches
for batch_xs, batch_ys in batches:
batch_m = len(batch_ys)
m += batch_m
# Fit training using batch data
sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
# Compute average loss
avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys}) * batch_m
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost/m))
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Accuracy:", accuracy.eval({x: X_train.toarray(), y: y_train}, session=sess))
print ("Accuracy:", accuracy.eval({x: X_test.toarray(), y: y_test}, session=sess))
print ("Optimization Finished!")
# Test model
Explanation: Scratch: Do not keep reading :)
TensorFlow MLP
End of explanation |
10,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Pandas
pandas is a Python package providing fast, flexible, and expressive data structures designed to work with relational or labeled data both. It is a fundamental high-level building block for doing practical, real world data analysis in Python.
pandas is well suited for
Step1: Pandas Data Structures
Series
A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.
Step2: If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object.
Step3: We can assign meaningful labels to the index, if they are available
Step4: These labels can be used to refer to the values in the Series.
Step5: Notice that the indexing operation preserved the association between the values and the corresponding indices.
We can still use positional indexing if we wish.
Step6: We can give both the array of values and the index meaningful labels themselves
Step7: NumPy's math functions and other operations can be applied to Series without losing the data structure.
Step8: We can also filter according to the values in the Series
Step9: A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict
Step10: Notice that the Series is created in key-sorted order.
If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the NaN (not a number) type for missing values.
Step11: Critically, the labels are used to align data when used in operations with other Series objects
Step12: Contrast this with NumPy arrays, where arrays of the same length will combine values element-wise; adding Series combined values with the same label in the resulting series. Notice also that the missing values were propagated by addition.
DataFrame
Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type.
A DataFrame is a tabular data structure, encapsulating multiple series like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data.
Step13: Notice the DataFrame is sorted by column name. We can change the order by indexing them in the order we desire
Step14: A DataFrame has a second index, representing the columns
Step15: If we wish to access columns, we can do so either by dict-like indexing or by attribute
Step16: Notice this is different than with Series, where dict-like indexing retrieved a particular element (row). If we want access to a row in a DataFrame, we index its ix attribute.
Step17: Alternatively, we can create a DataFrame with a dict of dicts
Step18: We probably want this transposed
Step19: It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, and not a copy of the data itself. So you must be cautious when manipulating this data
Step20: We can create or modify columns by assignment
Step21: But note, we cannot use the attribute indexing method to add a new column
Step22: Specifying a Series as a new column causes its values to be added according to the DataFrame's index
Step23: Other Python data structures (ones without an index) need to be the same length as the DataFrame
Step24: We can use del to remove columns, in the same way dict entries can be removed
Step25: We can extract the underlying data as a simple ndarray by accessing the values attribute
Step26: Notice that because of the mix of string and integer (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accommodate all the columns.
Step27: Pandas uses a custom data structure to represent the indices of Series and DataFrames.
Step28: Index objects are immutable
Step29: This is so that Index objects can be shared between data structures without fear that they will be changed.
Step30: Importing data
A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure
Step31: This table can be read into a DataFrame using read_csv
Step32: Notice that read_csv automatically considered the first row in the file to be a header row.
We can override default behavior by customizing some the arguments, like header, names or index_col.
Step33: read_csv is just a convenience function for read_table, since csv is such a common format
Step34: The sep argument can be customized as needed to accomodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats
Step35: This is called a hierarchical index, which we will revisit later in the tutorial.
If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument
Step36: Conversely, if we only want to import a small number of rows from, say, a very large data file we can use nrows
Step37: Alternately, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each
Step38: Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.
Step39: Above, Pandas recognized NA and an empty field as missing data.
Step40: Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument
Step41: These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
Microsoft Excel
Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), Pandas' ability to directly import Excel spreadsheets is valuable. This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed
Step42: Then, since modern spreadsheets consist of one or more "sheets", we parse the sheet with the data of interest
Step43: There is now a read_excel convenience function in Pandas that combines these steps into a single call
Step44: There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.
Pandas Fundamentals
This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.
For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.
Step45: Notice that we specified the id column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining player and year
Step46: This looks okay, but let's check
Step47: So, indices need not be unique. Our choice is not unique because some players change teams within years.
Step48: The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels
Step49: We will learn more about indexing below.
We can create a truly unique index by combining player, team and year
Step50: We can create meaningful indices more easily using a hierarchical index; for now, we will stick with the numeric id field as our index.
Manipulating indices
Reindexing allows users to manipulate the data labels in a DataFrame. It forces a DataFrame to conform to the new index, and optionally, fill in missing data if requested.
A simple use of reindex is to alter the order of the rows
Step51: Notice that the id index is not sequential. Say we wanted to populate the table with every id value. We could specify and index that is a sequence from the first to the last id numbers in the database, and Pandas would fill in the missing data with NaN values
Step52: Missing values can be filled as desired, either with selected values, or by rule
Step53: Keep in mind that reindex does not work if we pass a non-unique index series.
We can remove rows or columns via the drop method
Step54: Indexing and Selection
Indexing works analogously to indexing in NumPy arrays, except we can use the labels in the Index object to extract values in addition to arrays of integers.
Step55: We can also slice with data labels, since they have an intrinsic order within the Index
Step56: In a DataFrame we can slice along either or both axes
Step57: The indexing field ix allows us to select subsets of rows and columns in an intuitive way
Step58: Similarly, the cross-section method xs (not a field) extracts a single column or row by label and returns it as a Series
Step59: Operations
DataFrame and Series objects allow for several operations to take place either on a single object, or between two or more objects.
For example, we can perform arithmetic on the elements of two objects, such as combining baseball statistics across years
Step60: Pandas' data alignment places NaN values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years.
Step61: While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with NaN. We can use the add method to calculate player home run totals by using the fill_value argument to insert a zero for home runs where labels do not overlap
Step62: Operations can also be broadcast between rows or columns.
For example, if we subtract the maximum number of home runs hit from the hr column, we get how many fewer than the maximum were hit by each player
Step63: Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics
Step64: We can also apply functions to each column or row of a DataFrame
Step65: Lets use apply to calculate a meaningful baseball statistics, slugging percentage
Step66: Sorting and Ranking
Pandas objects include methods for re-ordering data.
Step67: We can also use order to sort a Series by value, rather than by label.
Step68: For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_index
Step69: Ranking does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series.
Step70: Ties are assigned the mean value of the tied ranks, which may result in decimal values.
Step71: Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset
Step72: Calling the DataFrame's rank method results in the ranks of all columns
Step73: Exercise
Calculate on base percentage for each player, and return the ordered series of estimates.
$$OBP = \frac{H + BB + HBP}{AB + BB + HBP + SF}$$
Step74: Hierarchical indexing
In the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields.
Step75: This index is a MultiIndex object that consists of a sequence of tuples, the elements of which is some combination of the three columns used to create the index. Where there are multiple repeated values, Pandas does not print the repeats, making it easy to identify groups of values.
Step76: Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index
Step77: With a hierachical index, we can select subsets of the data based on a partial index
Step78: Hierarchical indices can be created on either or both axes. Here is a trivial example
Step79: If you want to get fancy, both the row and column indices themselves can be given names
Step80: With this, we can do all sorts of custom indexing
Step81: Additionally, the order of the set of indices in a hierarchical MultiIndex can be changed by swapping them pairwise
Step82: Data can also be sorted by any index level, using sortlevel
Step83: Missing data
The occurence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.
Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy).
Step84: Missing values may be dropped or indexed out
Step85: By default, dropna drops entire rows in which one or more values are missing.
Step86: This can be overridden by passing the how='all' argument, which only drops a row when every field is a missing value.
Step87: This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.
Step88: This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects.
If we want to drop missing values column-wise instead of row-wise, we use axis=1.
Step89: Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna argument.
Step90: Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, we like to do this, by the way!).
Step91: We can alter values in-place using inplace=True.
Step92: Missing values can also be interpolated, using any one of a variety of methods
Step93: Data summarization
We often wish to summarize data in Series or DataFrame objects, so that they can more easily be understood or compared with similar data. The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.
Step94: Clearly, sum is more meaningful for some columns than others. For methods like mean for which application to string variables is not just meaningless, but impossible, these columns are automatically exculded
Step95: The important difference between NumPy's functions and Pandas' methods is that the latter have built-in support for handling missing data.
Step96: Sometimes we may not want to ignore missing values, and allow the nan to propagate.
Step97: Passing axis=1 will summarize over rows instead of columns, which only makes sense in certain situations.
Step98: A useful summarization that gives a quick snapshot of multiple statistics for a Series or DataFrame is describe
Step99: describe can detect non-numeric data and sometimes yield useful information about it.
Step100: We can also calculate summary statistics across multiple columns, for example, correlation and covariance.
$$cov(x,y) = \sum_i (x_i - \bar{x})(y_i - \bar{y})$$
Step101: $$corr(x,y) = \frac{cov(x,y)}{(n-1)s_x s_y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$$
Step102: If we have a DataFrame with a hierarchical index (or indices), summary statistics can be applied with respect to any of the index levels
Step103: Writing Data to Files
As well as being able to read several data input formats, Pandas can also export data to a variety of storage formats. We will bring your attention to just a couple of these.
Step104: The to_csv method writes a DataFrame to a comma-separated values (csv) file. You can specify custom delimiters (via sep argument), how missing values are written (via na_rep argument), whether the index is writen (via index argument), whether the header is included (via header argument), among other options.
An efficient way of storing data to disk is in binary format. Pandas supports this using Python’s built-in pickle serialization.
Step105: The complement to to_pickle is the read_pickle function, which restores the pickle to a DataFrame or Series | Python Code:
from IPython.core.display import HTML
HTML("<iframe src=http://pandas.pydata.org width=800 height=350></iframe>")
%matplotlib inline
import pandas as pd
import numpy as np
# Set some Pandas options
pd.set_option('html', False)
pd.set_option('max_columns', 30)
pd.set_option('max_rows', 20)
Explanation: Introduction to Pandas
pandas is a Python package providing fast, flexible, and expressive data structures designed to work with relational or labeled data both. It is a fundamental high-level building block for doing practical, real world data analysis in Python.
pandas is well suited for:
Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure
Key features:
Easy handling of missing data
Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically
Powerful, flexible group by functionality to perform split-apply-combine operations on data sets
Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
Intuitive merging and joining data sets
Flexible reshaping and pivoting of data sets
Hierarchical labeling of axes
Robust IO tools for loading data from flat files, Excel files, databases, and HDF5
Time series functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
End of explanation
counts = pd.Series([632, 1638, 569, 115])
counts
Explanation: Pandas Data Structures
Series
A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.
End of explanation
counts.values
counts.index
Explanation: If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object.
End of explanation
bacteria = pd.Series([632, 1638, 569, 115],
index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'])
bacteria
Explanation: We can assign meaningful labels to the index, if they are available:
End of explanation
bacteria['Actinobacteria']
bacteria[[name.endswith('bacteria') for name in bacteria.index]]
[name.endswith('bacteria') for name in bacteria.index]
Explanation: These labels can be used to refer to the values in the Series.
End of explanation
bacteria[0]
Explanation: Notice that the indexing operation preserved the association between the values and the corresponding indices.
We can still use positional indexing if we wish.
End of explanation
bacteria.name = 'counts'
bacteria.index.name = 'phylum'
bacteria
Explanation: We can give both the array of values and the index meaningful labels themselves:
End of explanation
np.log(bacteria)
Explanation: NumPy's math functions and other operations can be applied to Series without losing the data structure.
End of explanation
bacteria[bacteria>1000]
Explanation: We can also filter according to the values in the Series:
End of explanation
bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569, 'Bacteroidetes': 115}
pd.Series(bacteria_dict)
Explanation: A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict:
End of explanation
bacteria2 = pd.Series(bacteria_dict, index=['Cyanobacteria','Firmicutes','Proteobacteria','Actinobacteria'])
bacteria2
bacteria2.isnull()
Explanation: Notice that the Series is created in key-sorted order.
If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the NaN (not a number) type for missing values.
End of explanation
bacteria + bacteria2
Explanation: Critically, the labels are used to align data when used in operations with other Series objects:
End of explanation
data = pd.DataFrame({'value':[632, 1638, 569, 115, 433, 1130, 754, 555],
'patient':[1, 1, 1, 1, 2, 2, 2, 2],
'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria',
'Bacteroidetes', 'Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']})
data
Explanation: Contrast this with NumPy arrays, where arrays of the same length will combine values element-wise; adding Series combined values with the same label in the resulting series. Notice also that the missing values were propagated by addition.
DataFrame
Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type.
A DataFrame is a tabular data structure, encapsulating multiple series like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data.
End of explanation
data[['phylum','value','patient']]
Explanation: Notice the DataFrame is sorted by column name. We can change the order by indexing them in the order we desire:
End of explanation
data.columns
Explanation: A DataFrame has a second index, representing the columns:
End of explanation
data['value']
data.value
type(data.value)
type(data[['value']])
Explanation: If we wish to access columns, we can do so either by dict-like indexing or by attribute:
End of explanation
data.ix[3]
Explanation: Notice this is different than with Series, where dict-like indexing retrieved a particular element (row). If we want access to a row in a DataFrame, we index its ix attribute.
End of explanation
data = pd.DataFrame({0: {'patient': 1, 'phylum': 'Firmicutes', 'value': 632},
1: {'patient': 1, 'phylum': 'Proteobacteria', 'value': 1638},
2: {'patient': 1, 'phylum': 'Actinobacteria', 'value': 569},
3: {'patient': 1, 'phylum': 'Bacteroidetes', 'value': 115},
4: {'patient': 2, 'phylum': 'Firmicutes', 'value': 433},
5: {'patient': 2, 'phylum': 'Proteobacteria', 'value': 1130},
6: {'patient': 2, 'phylum': 'Actinobacteria', 'value': 754},
7: {'patient': 2, 'phylum': 'Bacteroidetes', 'value': 555}})
data
Explanation: Alternatively, we can create a DataFrame with a dict of dicts:
End of explanation
data = data.T
data
Explanation: We probably want this transposed:
End of explanation
vals = data.value
vals
vals[5] = 0
vals
data
vals = data.value.copy()
vals[5] = 1000
data
Explanation: It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, and not a copy of the data itself. So you must be cautious when manipulating this data:
End of explanation
data.value[3] = 14
data
data['year'] = 2013
data
Explanation: We can create or modify columns by assignment:
End of explanation
data.treatment = 1
data
data.treatment
Explanation: But note, we cannot use the attribute indexing method to add a new column:
End of explanation
treatment = pd.Series([0]*4 + [1]*2)
treatment
data['treatment'] = treatment
data
Explanation: Specifying a Series as a new column causes its values to be added according to the DataFrame's index:
End of explanation
month = ['Jan', 'Feb', 'Mar', 'Apr']
data['month'] = month
data['month'] = ['Jan']*len(data)
data
Explanation: Other Python data structures (ones without an index) need to be the same length as the DataFrame:
End of explanation
del data['month']
data
Explanation: We can use del to remove columns, in the same way dict entries can be removed:
End of explanation
data.values
Explanation: We can extract the underlying data as a simple ndarray by accessing the values attribute:
End of explanation
df = pd.DataFrame({'foo': [1,2,3], 'bar':[0.4, -1.0, 4.5]})
df.values
Explanation: Notice that because of the mix of string and integer (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accommodate all the columns.
End of explanation
data.index
Explanation: Pandas uses a custom data structure to represent the indices of Series and DataFrames.
End of explanation
data.index[0] = 15
Explanation: Index objects are immutable:
End of explanation
bacteria2.index = bacteria.index
bacteria2
Explanation: This is so that Index objects can be shared between data structures without fear that they will be changed.
End of explanation
!cat data/microbiome.csv
Explanation: Importing data
A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure:
genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '<f4')])
Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported.
Let's start with some more bacteria data, stored in csv format.
End of explanation
mb = pd.read_csv("data/microbiome.csv")
mb
Explanation: This table can be read into a DataFrame using read_csv:
End of explanation
pd.read_csv("data/microbiome.csv", header=None).head()
Explanation: Notice that read_csv automatically considered the first row in the file to be a header row.
We can override default behavior by customizing some of the arguments, like header, names or index_col.
End of explanation
mb = pd.read_table("data/microbiome.csv", sep=',')
Explanation: read_csv is just a convenience function for read_table, since csv is such a common format:
End of explanation
mb = pd.read_csv("data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head()
Explanation: The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats:
sep='\s+'
For a more useful index, we can specify the first two columns, which together provide a unique index to the data.
End of explanation
pd.read_csv("data/microbiome.csv", skiprows=[3,4,6]).head()
Explanation: This is called a hierarchical index, which we will revisit later in the tutorial.
If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument:
End of explanation
pd.read_csv("data/microbiome.csv", nrows=4)
Explanation: Conversely, if we only want to import a small number of rows from, say, a very large data file we can use nrows:
End of explanation
data_chunks = pd.read_csv("data/microbiome.csv", chunksize=15)
mean_tissue = {chunk.Taxon[0]:chunk.Tissue.mean() for chunk in data_chunks}
mean_tissue
Explanation: Alternately, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each:
End of explanation
!cat data/microbiome_missing.csv
pd.read_csv("data/microbiome_missing.csv").head(20)
Explanation: Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.
End of explanation
pd.isnull(pd.read_csv("data/microbiome_missing.csv")).head(20)
Explanation: Above, Pandas recognized NA and an empty field as missing data.
End of explanation
pd.read_csv("data/microbiome_missing.csv", na_values=['?', -99999]).head(20)
Explanation: Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument:
End of explanation
mb_file = pd.ExcelFile('data/microbiome/MID1.xls')
mb_file
Explanation: These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
Microsoft Excel
Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), Pandas' ability to directly import Excel spreadsheets is valuable. This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: xlrd and openpyxl (these may be installed with either pip or easy_install).
Importing Excel data to Pandas is a two-step process. First, we create an ExcelFile object using the path of the file:
End of explanation
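# As noted above, na_values can also be given per column as a dict; a brief assumed
# illustration (the column-to-symbol mapping here is for demonstration only):
pd.read_csv("data/microbiome_missing.csv", na_values={'Tissue': ['?'], 'Stool': [-99999]}).head(20)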
mb1 = mb_file.parse("Sheet 1", header=None)
mb1.columns = ["Taxon", "Count"]
mb1.head()
Explanation: Then, since modern spreadsheets consist of one or more "sheets", we parse the sheet with the data of interest:
End of explanation
mb2 = pd.read_excel('data/microbiome/MID2.xls', sheetname='Sheet 1', header=None)
mb2.head()
Explanation: There is now a read_excel convenience function in Pandas that combines these steps into a single call:
End of explanation
baseball = pd.read_csv("data/baseball.csv", index_col='id')
baseball.head()
Explanation: There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.
Pandas Fundamentals
This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.
For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.
End of explanation
player_id = baseball.player + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_id
baseball_newind.head()
Explanation: Notice that we specified the id column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining player and year:
End of explanation
baseball_newind.index.is_unique
Explanation: This looks okay, but let's check:
End of explanation
pd.Series(baseball_newind.index).value_counts()
Explanation: So, indices need not be unique. Our choice is not unique because some players change teams within years.
End of explanation
baseball_newind.ix['wickmbo012007']
Explanation: The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:
End of explanation
player_unique = baseball.player + baseball.team + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_unique
baseball_newind.head()
baseball_newind.index.is_unique
Explanation: We will learn more about indexing below.
We can create a truly unique index by combining player, team and year:
End of explanation
baseball.reindex(baseball.index[::-1]).head()
Explanation: We can create meaningful indices more easily using a hierarchical index; for now, we will stick with the numeric id field as our index.
Manipulating indices
Reindexing allows users to manipulate the data labels in a DataFrame. It forces a DataFrame to conform to the new index, and optionally, fill in missing data if requested.
A simple use of reindex is to alter the order of the rows:
End of explanation
id_range = range(baseball.index.values.min(), baseball.index.values.max())
baseball.reindex(id_range).head()
Explanation: Notice that the id index is not sequential. Say we wanted to populate the table with every id value. We could specify an index that is a sequence from the first to the last id numbers in the database, and Pandas would fill in the missing data with NaN values:
End of explanation
baseball.reindex(id_range, method='ffill', columns=['player','year']).head()
baseball.reindex(id_range, fill_value='mr.nobody', columns=['player']).head()
Explanation: Missing values can be filled as desired, either with selected values, or by rule:
End of explanation
baseball.shape
baseball.drop([89525, 89526])
baseball.drop(['ibb','hbp'], axis=1)
Explanation: Keep in mind that reindex does not work if we pass a non-unique index series.
We can remove rows or columns via the drop method:
End of explanation
# Sample Series object
hits = baseball_newind.h
hits
# Numpy-style indexing
hits[:3]
# Indexing by label
hits[['womacto01CHN2006','schilcu01BOS2006']]
Explanation: Indexing and Selection
Indexing works analogously to indexing in NumPy arrays, except we can use the labels in the Index object to extract values in addition to arrays of integers.
End of explanation
hits['womacto01CHN2006':'gonzalu01ARI2006']
hits['womacto01CHN2006':'gonzalu01ARI2006'] = 5
hits
Explanation: We can also slice with data labels, since they have an intrinsic order within the Index:
End of explanation
baseball_newind[['h','ab']]
baseball_newind[baseball_newind.ab>500]
Explanation: In a DataFrame we can slice along either or both axes:
End of explanation
baseball_newind.ix['gonzalu01ARI2006', ['h','X2b', 'X3b', 'hr']]
baseball_newind.ix[['gonzalu01ARI2006','finlest01SFN2006'], 5:8]
baseball_newind.ix[:'myersmi01NYA2006', 'hr']
Explanation: The indexing field ix allows us to select subsets of rows and columns in an intuitive way:
End of explanation
baseball_newind.xs('myersmi01NYA2006')
Explanation: Similarly, the cross-section method xs (not a field) extracts a single column or row by label and returns it as a Series:
End of explanation
hr2006 = baseball[baseball.year==2006].xs('hr', axis=1)
hr2006.index = baseball.player[baseball.year==2006]
hr2007 = baseball[baseball.year==2007].xs('hr', axis=1)
hr2007.index = baseball.player[baseball.year==2007]
hr2006 = pd.Series(baseball.hr[baseball.year==2006].values, index=baseball.player[baseball.year==2006])
hr2007 = pd.Series(baseball.hr[baseball.year==2007].values, index=baseball.player[baseball.year==2007])
hr_total = hr2006 + hr2007
hr_total
Explanation: Operations
DataFrame and Series objects allow for several operations to take place either on a single object, or between two or more objects.
For example, we can perform arithmetic on the elements of two objects, such as combining baseball statistics across years:
End of explanation
hr_total[hr_total.notnull()]
Explanation: Pandas' data alignment places NaN values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years.
End of explanation
hr2007.add(hr2006, fill_value=0)
Explanation: While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with NaN. We can use the add method to calculate player home run totals by using the fill_value argument to insert a zero for home runs where labels do not overlap:
End of explanation
baseball.hr - baseball.hr.max()
Explanation: Operations can also be broadcast between rows or columns.
For example, if we subtract the maximum number of home runs hit from the hr column, we get how many fewer than the maximum were hit by each player:
End of explanation
baseball.ix[89521]["player"]
stats = baseball[['h','X2b', 'X3b', 'hr']]
diff = stats - stats.xs(89521)
diff[:10]
Explanation: Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics
End of explanation
stats.apply(np.median)
stat_range = lambda x: x.max() - x.min()
stats.apply(stat_range)
Explanation: We can also apply functions to each column or row of a DataFrame
End of explanation
slg = lambda x: (x['h']-x['X2b']-x['X3b']-x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr'])/(x['ab']+1e-6)
baseball.apply(slg, axis=1).apply(lambda x: '%.3f' % x)
Explanation: Let's use apply to calculate a meaningful baseball statistic, slugging percentage:
$$SLG = \frac{1B + (2 \times 2B) + (3 \times 3B) + (4 \times HR)}{AB}$$
And just for fun, we will format the resulting estimate.
End of explanation
baseball_newind.sort_index().head()
baseball_newind.sort_index(ascending=False).head()
baseball_newind.sort_index(axis=1).head()
Explanation: Sorting and Ranking
Pandas objects include methods for re-ordering data.
End of explanation
baseball.hr.order(ascending=False)
Explanation: We can also use order to sort a Series by value, rather than by label.
End of explanation
baseball[['player','sb','cs']].sort_index(ascending=[False,True], by=['sb', 'cs']).head(10)
Explanation: For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_index:
End of explanation
baseball.hr.rank()
Explanation: Ranking does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series.
End of explanation
pd.Series([100,100]).rank()
Explanation: Ties are assigned the mean value of the tied ranks, which may result in decimal values.
End of explanation
baseball.hr.rank(method='first')
Explanation: Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:
End of explanation
baseball.rank(ascending=False).head()
baseball[['r','h','hr']].rank(ascending=False).head()
Explanation: Calling the DataFrame's rank method results in the ranks of all columns:
End of explanation
# Write your answer here
Explanation: Exercise
Calculate on base percentage for each player, and return the ordered series of estimates.
$$OBP = \frac{H + BB + HBP}{AB + BB + HBP + SF}$$
End of explanation
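# One possible solution to the exercise above (assumed, not part of the original notebook);
# the small constant avoids division by zero, as in the slugging example above.
obp = lambda x: (x['h'] + x['bb'] + x['hbp']) / (x['ab'] + x['bb'] + x['hbp'] + x['sf'] + 1e-6)
baseball.apply(obp, axis=1).order(ascending=False)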
baseball_h = baseball.set_index(['year', 'team', 'player'])
baseball_h.head(10)
Explanation: Hierarchical indexing
In the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields.
End of explanation
baseball_h.index[:10]
baseball_h.index.is_unique
baseball_h.ix[(2007, 'ATL', 'francju01')]
Explanation: This index is a MultiIndex object that consists of a sequence of tuples, the elements of which is some combination of the three columns used to create the index. Where there are multiple repeated values, Pandas does not print the repeats, making it easy to identify groups of values.
End of explanation
mb = pd.read_csv("data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head(10)
mb.index
Explanation: Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:
End of explanation
mb.ix['Proteobacteria']
Explanation: With a hierarchical index, we can select subsets of the data based on a partial index:
End of explanation
frame = pd.DataFrame(np.arange(12).reshape(( 4, 3)),
index =[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
columns =[['Ohio', 'Ohio', 'Colorado'], ['Green', 'Red', 'Green']])
frame
Explanation: Hierarchical indices can be created on either or both axes. Here is a trivial example:
End of explanation
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame
Explanation: If you want to get fancy, both the row and column indices themselves can be given names:
End of explanation
frame.ix['a']['Ohio']
frame.ix['b', 2]['Colorado']
Explanation: With this, we can do all sorts of custom indexing:
End of explanation
mb.swaplevel('Patient', 'Taxon').head()
Explanation: Additionally, the order of the set of indices in a hierarchical MultiIndex can be changed by swapping them pairwise:
End of explanation
mb.sortlevel('Patient', ascending=False).head()
Explanation: Data can also be sorted by any index level, using sortlevel:
End of explanation
foo = pd.Series([np.nan, -3, None, 'foobar'])
foo
foo.isnull()
Explanation: Missing data
The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.
Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy).
End of explanation
bacteria2
bacteria2.dropna()
bacteria2[bacteria2.notnull()]
Explanation: Missing values may be dropped or indexed out:
End of explanation
data
data.dropna()
Explanation: By default, dropna drops entire rows in which one or more values are missing.
End of explanation
data.dropna(how='all')
Explanation: This can be overridden by passing the how='all' argument, which only drops a row when every field is a missing value.
End of explanation
data.ix[7, 'year'] = np.nan
data
data.dropna(thresh=4)
Explanation: This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.
End of explanation
data.dropna(axis=1)
Explanation: This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects.
If we want to drop missing values column-wise instead of row-wise, we use axis=1.
End of explanation
bacteria2.fillna(0)
data.fillna({'year': 2013, 'treatment':2})
Explanation: Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna argument.
End of explanation
data
Explanation: Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, we like to do this, by the way!).
End of explanation
_ = data.year.fillna(2013, inplace=True)
data
Explanation: We can alter values in-place using inplace=True.
End of explanation
bacteria2.fillna(method='bfill')
bacteria2.fillna(bacteria2.mean())
Explanation: Missing values can also be interpolated, using any one of a variety of methods:
End of explanation
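# The interpolation mentioned above can also be done with the interpolate() method
# (a brief assumed example on a small numeric series; linear by default):
pd.Series([1.0, np.nan, np.nan, 7.0]).interpolate()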
baseball.sum()
Explanation: Data summarization
We often wish to summarize data in Series or DataFrame objects, so that they can more easily be understood or compared with similar data. The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.
End of explanation
baseball.mean()
Explanation: Clearly, sum is more meaningful for some columns than others. For methods like mean for which application to string variables is not just meaningless, but impossible, these columns are automatically excluded:
End of explanation
bacteria2
bacteria2.mean()
Explanation: The important difference between NumPy's functions and Pandas' methods is that the latter have built-in support for handling missing data.
End of explanation
bacteria2.mean(skipna=False)
Explanation: Sometimes we may not want to ignore missing values, and allow the nan to propagate.
End of explanation
extra_bases = baseball[['X2b','X3b','hr']].sum(axis=1)
extra_bases.order(ascending=False)
Explanation: Passing axis=1 will summarize over rows instead of columns, which only makes sense in certain situations.
End of explanation
baseball.describe()
Explanation: A useful summarization that gives a quick snapshot of multiple statistics for a Series or DataFrame is describe:
End of explanation
baseball.player.describe()
Explanation: describe can detect non-numeric data and sometimes yield useful information about it.
End of explanation
baseball.hr.cov(baseball.X2b)
Explanation: We can also calculate summary statistics across multiple columns, for example, correlation and covariance.
$$cov(x,y) = \sum_i (x_i - \bar{x})(y_i - \bar{y})$$
End of explanation
baseball.hr.corr(baseball.X2b)
baseball.ab.corr(baseball.h)
baseball.corr()
Explanation: $$corr(x,y) = \frac{cov(x,y)}{(n-1)s_x s_y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$$
End of explanation
mb.head()
mb.sum(level='Taxon')
Explanation: If we have a DataFrame with a hierarchical index (or indices), summary statistics can be applied with respect to any of the index levels:
End of explanation
mb.to_csv("mb.csv")
Explanation: Writing Data to Files
As well as being able to read several data input formats, Pandas can also export data to a variety of storage formats. We will bring your attention to just a couple of these.
End of explanation
baseball.to_pickle("baseball_pickle")
Explanation: The to_csv method writes a DataFrame to a comma-separated values (csv) file. You can specify custom delimiters (via sep argument), how missing values are written (via na_rep argument), whether the index is written (via index argument), whether the header is included (via header argument), among other options.
An efficient way of storing data to disk is in binary format. Pandas supports this using Python’s built-in pickle serialization.
End of explanation
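# A brief assumed illustration of the to_csv options described above:
mb.to_csv("mb_custom.csv", sep='\t', na_rep='NA', index=True, header=True)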
pd.read_pickle("baseball_pickle")
Explanation: The complement to to_pickle is the read_pickle function, which restores the pickle to a DataFrame or Series:
End of explanation |
10,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Advanced automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Controlling gradient recording
In the automatic differentiation guide you saw how to control which variables and tensors are watched by the tape while building the gradient calculation.
The tape also has methods to manipulate the recording.
If you wish to stop recording gradients, you can use GradientTape.stop_recording() to temporarily suspend recording.
This may be useful to reduce overhead if you do not wish to differentiate a complicated operation in the middle of your model. It could include calculating a metric or an intermediate result:
Step3: If you wish to start over entirely, use reset(). Simply exiting the gradient tape block and restarting is usually easier to read, but you can use reset when exiting the tape block is difficult or impossible.
Step4: Stop gradient
In contrast to the global tape controls above, the tf.stop_gradient function is much more precise. It can be used to stop gradients from flowing along a particular path, without needing access to the tape itself:
Step5: Custom gradients
In some cases, you may want to control exactly how gradients are calculated rather than using the default. These situations include:
There is no defined gradient for a new op you are writing.
The default calculations are numerically unstable.
You wish to cache an expensive computation from the forward pass.
You want to modify a value (for example, using tf.clip_by_value or tf.math.round) without modifying the gradient.
For writing a new op, you can use tf.RegisterGradient to set your own. See its page for details. (Note that the gradient registry is global, so change it with caution.)
For the latter three cases, you can use tf.custom_gradient.
The following example applies tf.clip_by_norm to the intermediate gradient.
Step6: See the tf.custom_gradient decorator for more details.
Multiple tapes
Multiple tapes interact seamlessly. For example, here each tape watches a different set of tensors:
Step7: Higher-order gradients
Operations inside the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients. For example:
Step8: While this does give you the second derivative of a scalar function, this pattern does not generalize to produce a Hessian matrix, since GradientTape.gradient only computes the gradient of a scalar. To construct a Hessian, see the "Hessian" example under the "Jacobians" section.
"Nested calls to GradientTape.gradient" is a good pattern when you are calculating a scalar from a gradient, and then the resulting scalar acts as a source for a second gradient calculation, as in the following example.
Example: Input gradient regularization
Many models are susceptible to "adversarial examples", a collection of techniques that modify the model's input to confuse its output. The simplest implementation takes a single step along the gradient of the output with respect to the input, the "input gradient".
One technique to increase robustness to adversarial examples is input gradient regularization, which attempts to minimize the magnitude of the input gradient. If the input gradient is small, then the change in the output should be small too.
Below is a simple implementation of input gradient regularization:
Calculate the gradient of the output with respect to the input using an inner tape.
Calculate the magnitude of that input gradient.
Calculate the gradient of that magnitude with respect to the model.
Step9: Jacobians
All the previous examples took the gradient of a scalar target with respect to some source tensor(s).
The Jacobian matrix represents the gradients of a vector-valued function. Each row contains the gradient of one of the vector's elements.
The GradientTape.jacobian method allows you to efficiently calculate a Jacobian matrix.
Note that:
Like gradient: the sources argument can be a tensor or a container of tensors.
Unlike gradient: the target tensor must be a single tensor.
Scalar source
As a first example, here is the Jacobian of a vector target with respect to a scalar source.
Step10: When you take the Jacobian with respect to a scalar, the result has the shape of the target, and gives the gradient of each element with respect to the source:
Step11: Tensor source
Whether the input is a scalar or a tensor, GradientTape.jacobian efficiently calculates the gradient of each element of the source with respect to each element of the target.
For example, the output of this layer has a shape of (10, 7).
Step12: The layer's kernel has a shape of (5, 10).
Step13: The shape of the Jacobian of the output with respect to the kernel is those two shapes concatenated together:
Step14: If you sum over the target's dimensions, you are left with the gradient of the sum, as would be calculated by GradientTape.gradient.
Step15: <a id="hessian"> </a>
Example: Hessian
While tf.GradientTape does not give an explicit method for constructing a Hessian matrix, it is possible to build one using the GradientTape.jacobian method.
Note: The Hessian matrix contains N**2 parameters. For this and other reasons it is not practical for most models. This example is included mainly as a demonstration of how to use the GradientTape.jacobian method, and is not an endorsement of direct Hessian-based optimization. A Hessian-vector product can be calculated efficiently with nested tapes, and is a much more efficient approach to second-order optimization.
Step16: To use this Hessian for a Newton's method step, you would first flatten its axes into a matrix, and flatten the gradient into a vector:
Step17: The Hessian matrix should be symmetric:
Step18: The Newton's method update step is shown below.
Step19: Note: Don't actually invert the matrix.
Step20: While this is relatively simple for a single tf.Variable, applying it to a non-trivial model would require careful concatenation and slicing to produce a full Hessian across multiple variables.
Batch Jacobian
In some cases, you want the Jacobian of each item in a stack of targets with respect to a stack of sources, where the Jacobian for each target-source pair is independent.
For example, here the input x has a shape of (batch, ins) and the output y has a shape of (batch, outs).
Step21: The full Jacobian of y with respect to x has a shape of (batch, ins, batch, outs), even if you only want (batch, ins, outs).
Step22: If the gradients of each item in the stack are independent, then every (batch, batch) slice of this tensor is a diagonal matrix:
Step23: To get the desired result, you can sum over the duplicate batch dimension, or else select the diagonals using tf.einsum.
Step24: It is much more efficient to do the calculation without the extra dimension in the first place; that is what the GradientTape.batch_jacobian method does.
Step25: Caution: GradientTape.batch_jacobian only verifies that the first dimension of the source and target match. It does not check that the gradients are actually independent, so it is up to the user to make sure they only use batch_jacobian where it makes sense. For example, adding a layers.BatchNormalization destroys the independence, since it normalizes across the batch dimension:
Step26: In this case, batch_jacobian still runs and returns something with the expected shape, but its contents have an unclear meaning. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (8, 6)
Explanation: Advanced automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/guide/advanced_autodiff"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" class=""> 在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/advanced_autodiff.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" class="">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/advanced_autodiff.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" class=""> 在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/advanced_autodiff.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class=""> 下载笔记本</a></td>
</table>
自动微分指南包括计算梯度所需的全部内容。本文重点介绍 tf.GradientTape API 更深入、更不常见的功能。
设置
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape() as t:
x_sq = x * x
with t.stop_recording():
y_sq = y * y
z = x_sq + y_sq
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Controlling gradient recording
In the automatic differentiation guide you saw how to control which variables and tensors are watched by the tape while building the gradient calculation.
The tape also has methods to manipulate the recording.
If you wish to stop recording gradients, you can use GradientTape.stop_recording() to temporarily suspend recording.
This may be useful to reduce overhead if you do not wish to differentiate a complicated operation in the middle of your model. This could include calculating a metric or an intermediate result:
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
reset = True
with tf.GradientTape() as t:
y_sq = y * y
if reset:
# Throw out all the tape recorded so far
t.reset()
z = x * x + y_sq
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: If you wish to start over entirely, use reset(). Simply exiting the gradient tape block and restarting is usually easier to read, but you can use reset when exiting the tape block is difficult or impossible.
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape() as t:
y_sq = y**2
z = x**2 + tf.stop_gradient(y_sq)
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Stop gradient
In contrast to the global tape controls above, the tf.stop_gradient function is much more precise. It can be used to stop gradients from flowing along a particular path, without needing access to the tape itself:
End of explanation
# Establish an identity operation, but clip during the gradient pass
@tf.custom_gradient
def clip_gradients(y):
def backward(dy):
return tf.clip_by_norm(dy, 0.5)
return y, backward
v = tf.Variable(2.0)
with tf.GradientTape() as t:
output = clip_gradients(v * v)
print(t.gradient(output, v)) # calls "backward", which clips 4 to 2
Explanation: Custom gradients
In some cases, you may want to control exactly how gradients are calculated rather than using the default. These situations include:
There is no defined gradient for a new op you are writing.
The default calculations are numerically unstable.
You wish to cache an expensive computation from the forward pass.
You want to modify a value (for example, using tf.clip_by_value or tf.math.round) without modifying the gradient.
For writing a new op, you can use tf.RegisterGradient to set up your own gradient. See its documentation page for details. (Note that the gradient registry is global, so change it with caution.)
For the latter three cases, you can use tf.custom_gradient.
The following example applies tf.clip_by_norm to the intermediate gradient.
End of explanation
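# A hedged sketch (an illustrative addition, not from the original guide):
# tf.custom_gradient can also address the numerical-stability case listed above.
# The default gradient of log(1 + exp(x)) produces nan for large x, so a
# stable gradient is supplied by hand.
@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(upstream):
        return upstream * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad
x = tf.constant(100.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = log1pexp(x)
print(tape.gradient(y, x))  # 1.0, instead of nan from the naive gradient
Explanation: A small added sketch of the numerical-stability use case listed above: the custom gradient keeps d/dx log(1 + exp(x)) well defined for large x. This cell is an illustrative addition and is not part of the original guide.
End of explanation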
x0 = tf.constant(0.0)
x1 = tf.constant(0.0)
with tf.GradientTape() as tape0, tf.GradientTape() as tape1:
tape0.watch(x0)
tape1.watch(x1)
y0 = tf.math.sin(x0)
y1 = tf.nn.sigmoid(x1)
y = y0 + y1
ys = tf.reduce_sum(y)
tape0.gradient(ys, x0).numpy() # cos(x) => 1.0
tape1.gradient(ys, x1).numpy() # sigmoid(x1)*(1-sigmoid(x1)) => 0.25
Explanation: Refer to the tf.custom_gradient decorator for more details.
Multiple tapes
Multiple tapes interact seamlessly. For example, here each tape watches a different set of tensors:
End of explanation
x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
y = x * x * x
# Compute the gradient inside the outer `t2` context manager
# which means the gradient computation is differentiable as well.
dy_dx = t1.gradient(y, x)
d2y_dx2 = t2.gradient(dy_dx, x)
print('dy_dx:', dy_dx.numpy()) # 3 * x**2 => 3.0
print('d2y_dx2:', d2y_dx2.numpy()) # 6 * x => 6.0
Explanation: Higher-order gradients
Operations inside the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
End of explanation
x = tf.random.normal([7, 5])
layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)
with tf.GradientTape() as t2:
# The inner tape only takes the gradient with respect to the input,
# not the variables.
with tf.GradientTape(watch_accessed_variables=False) as t1:
t1.watch(x)
y = layer(x)
out = tf.reduce_sum(layer(x)**2)
# 1. Calculate the input gradient.
g1 = t1.gradient(out, x)
# 2. Calculate the magnitude of the input gradient.
g1_mag = tf.norm(g1)
# 3. Calculate the gradient of the magnitude with respect to the model.
dg1_mag = t2.gradient(g1_mag, layer.trainable_variables)
[var.shape for var in dg1_mag]
Explanation: While this does give you the second derivative of a scalar function, this pattern does not generalize to produce a Hessian matrix, since GradientTape.gradient only computes the gradient of a scalar. To construct a Hessian matrix, see the Hessian example under the Jacobian section.
"Nested calls to GradientTape.gradient" is a good pattern when you are calculating a scalar from a gradient, and the resulting scalar then acts as a source for a second gradient calculation, as in the following example.
Example: Input gradient regularization
Many models are susceptible to "adversarial examples". This collection of techniques modifies the model's input to confuse the model's output. The simplest implementation takes a single step along the gradient of the output with respect to the input: the "input gradient".
One technique to increase robustness to adversarial examples is input gradient regularization, which attempts to minimize the magnitude of the input gradient. If the input gradient is small, then the change in the output should be small too.
Below is a naive implementation of input gradient regularization. The implementation is:
Calculate the gradient of the output with respect to the input using an inner tape.
Calculate the magnitude of that input gradient.
Calculate the gradient of that magnitude with respect to the model.
End of explanation
x = tf.linspace(-10.0, 10.0, 200+1)
delta = tf.Variable(0.0)
with tf.GradientTape() as tape:
y = tf.nn.sigmoid(x+delta)
dy_dx = tape.jacobian(y, delta)
Explanation: Jacobians
All the previous examples took the gradients of a scalar target with respect to some source tensor(s).
The Jacobian matrix represents the gradients of a vector-valued function. Each row contains the gradient of one of the vector's elements.
The GradientTape.jacobian method allows you to efficiently calculate a Jacobian matrix.
Note that:
Like gradient: the sources argument can be a tensor or a container of tensors.
Unlike gradient: the target tensor must be a single tensor.
Scalar source
As a first example, here is the Jacobian of a vector-target with respect to a scalar-source.
End of explanation
print(y.shape)
print(dy_dx.shape)
plt.plot(x.numpy(), y, label='y')
plt.plot(x.numpy(), dy_dx, label='dy/dx')
plt.legend()
_ = plt.xlabel('x')
Explanation: When you take the Jacobian with respect to a scalar, the result has the shape of the target, and gives the gradient of each element with respect to the source:
End of explanation
x = tf.random.normal([7, 5])
layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)
with tf.GradientTape(persistent=True) as tape:
y = layer(x)
y.shape
Explanation: Tensor source
Whether the input is a scalar or a tensor, GradientTape.jacobian efficiently calculates the gradient of each element of the source with respect to each element of the target(s).
For example, the output of this layer has a shape of (7, 10).
End of explanation
layer.kernel.shape
Explanation: The layer's kernel has a shape of (5, 10).
End of explanation
j = tape.jacobian(y, layer.kernel)
j.shape
Explanation: The shape of the Jacobian of the output with respect to the kernel is those two shapes concatenated together:
End of explanation
g = tape.gradient(y, layer.kernel)
print('g.shape:', g.shape)
j_sum = tf.reduce_sum(j, axis=[0, 1])
delta = tf.reduce_max(abs(g - j_sum)).numpy()
assert delta < 1e-3
print('delta:', delta)
Explanation: If you sum over the target's dimensions, you are left with the gradient of the sum, which is what GradientTape.gradient would have calculated.
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.relu)
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.relu)
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
x = layer1(x)
x = layer2(x)
loss = tf.reduce_mean(x**2)
g = t1.gradient(loss, layer1.kernel)
h = t2.jacobian(g, layer1.kernel)
print(f'layer.kernel.shape: {layer1.kernel.shape}')
print(f'h.shape: {h.shape}')
Explanation: <a id="hessian"> </a>
Example: Hessian
While tf.GradientTape does not give an explicit method for constructing a Hessian matrix, it is possible to build one using the GradientTape.jacobian method.
Note: The Hessian matrix contains N**2 parameters. For this and other reasons it is not practical for most models. This example is included mainly as a demonstration of how to use the GradientTape.jacobian method, and is not an endorsement of direct Hessian-based optimization. A Hessian-vector product can be calculated efficiently with nested tapes, and is a much more efficient approach to second-order optimization.
End of explanation
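# A hedged sketch (an illustrative addition, not from the original guide): the
# note above mentions that a Hessian-vector product can be computed with nested
# tapes, without ever materializing the full N**2 Hessian. Toy example:
x = tf.random.normal([10])
v = tf.random.normal([10])  # the vector to multiply with the Hessian
with tf.GradientTape() as outer_tape:
    outer_tape.watch(x)
    with tf.GradientTape() as inner_tape:
        inner_tape.watch(x)
        loss = tf.reduce_sum(tf.sin(x)**2)
    grad = inner_tape.gradient(loss, x)
    grad_v = tf.reduce_sum(grad * v)  # scalar, so the outer gradient is the HVP
hvp = outer_tape.gradient(grad_v, x)
print(hvp.shape)  # (10,) -- same shape as x, no (10, 10) Hessian is built
Explanation: A minimal illustration of the Hessian-vector product mentioned in the note above, computed with nested tapes so the full Hessian is never materialized. This cell is an illustrative addition and is not part of the original guide.
End of explanation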
n_params = tf.reduce_prod(layer1.kernel.shape)
g_vec = tf.reshape(g, [n_params, 1])
h_mat = tf.reshape(h, [n_params, n_params])
Explanation: To use this Hessian for a Newton's method step, you would first flatten out its axes into a matrix, and flatten out the gradient into a vector:
End of explanation
def imshow_zero_center(image, **kwargs):
lim = tf.reduce_max(abs(image))
plt.imshow(image, vmin=-lim, vmax=lim, cmap='seismic', **kwargs)
plt.colorbar()
imshow_zero_center(h_mat)
Explanation: The Hessian matrix should be symmetric:
End of explanation
eps = 1e-3
eye_eps = tf.eye(h_mat.shape[0])*eps
Explanation: The Newton's method update step is shown below.
End of explanation
# X(k+1) = X(k) - (∇²f(X(k)))^-1 @ ∇f(X(k))
# h_mat = ∇²f(X(k))
# g_vec = ∇f(X(k))
update = tf.linalg.solve(h_mat + eye_eps, g_vec)
# Reshape the update and apply it to the variable.
_ = layer1.kernel.assign_sub(tf.reshape(update, layer1.kernel.shape))
Explanation: Note: Don't actually invert the matrix.
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.elu)
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.elu)
with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
tape.watch(x)
y = layer1(x)
y = layer2(y)
y.shape
Explanation: While this is relatively simple for a single tf.Variable, applying it to a non-trivial model would require careful concatenation and slicing to produce a full Hessian across multiple variables.
Batch Jacobian
In some cases, you want the Jacobian of each item in a stack of targets with respect to a stack of sources, where the Jacobians for each target-source pair are independent.
For example, here the input x is shaped (batch, ins) and the output y is shaped (batch, outs).
End of explanation
j = tape.jacobian(y, x)
j.shape
Explanation: The full Jacobian of y with respect to x has a shape of (batch, ins, batch, outs), even if you only want (batch, ins, outs).
End of explanation
imshow_zero_center(j[:, 0, :, 0])
_ = plt.title('A (batch, batch) slice')
def plot_as_patches(j):
# Reorder axes so the diagonals will each form a contiguous patch.
j = tf.transpose(j, [1, 0, 3, 2])
# Pad in between each patch.
lim = tf.reduce_max(abs(j))
j = tf.pad(j, [[0, 0], [1, 1], [0, 0], [1, 1]],
constant_values=-lim)
# Reshape to form a single image.
s = j.shape
j = tf.reshape(j, [s[0]*s[1], s[2]*s[3]])
imshow_zero_center(j, extent=[-0.5, s[2]-0.5, s[0]-0.5, -0.5])
plot_as_patches(j)
_ = plt.title('All (batch, batch) slices are diagonal')
Explanation: If the gradients of each item in the stack are independent, then every (batch, batch) slice of this tensor is a diagonal matrix:
End of explanation
j_sum = tf.reduce_sum(j, axis=2)
print(j_sum.shape)
j_select = tf.einsum('bxby->bxy', j)
print(j_select.shape)
Explanation: To get the desired result, you can sum over the duplicate batch dimension, or else select the diagonals using tf.einsum.
End of explanation
jb = tape.batch_jacobian(y, x)
jb.shape
error = tf.reduce_max(abs(jb - j_sum))
assert error < 1e-3
print(error.numpy())
Explanation: It would be much more efficient to do the calculation without the extra dimension in the first place; that is exactly what the GradientTape.batch_jacobian method does.
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.elu)
bn = tf.keras.layers.BatchNormalization()
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.elu)
with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
tape.watch(x)
y = layer1(x)
y = bn(y, training=True)
y = layer2(y)
j = tape.jacobian(y, x)
print(f'j.shape: {j.shape}')
plot_as_patches(j)
_ = plt.title('These slices are not diagonal')
_ = plt.xlabel("Don't use `batch_jacobian`")
Explanation: Caution: GradientTape.batch_jacobian only verifies that the first dimension of the source and target match. It does not check that the gradients are actually independent, so it is up to you to only use batch_jacobian where it makes sense. For example, adding a layers.BatchNormalization destroys the independence, since it normalizes across the batch dimension:
End of explanation
jb = tape.batch_jacobian(y, x)
print(f'jb.shape: {jb.shape}')
Explanation: In this case, batch_jacobian still runs and returns something with the expected shape, but its contents have an unclear meaning.
End of explanation |
10,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Digit Recognizer
Import Libraries
Step1: Loading Data
Step2: Plotting images and their class values
Step3: Viewing shape and content of data
Step4: Flattening images
The neural-network takes a single vector for training. Therefore, we convert the 28x28 pixels images into a single 784 (28 * 28 = 784) dimensional vector.
Step5: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
Step6: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0]
and so on...
Step7: Define Simple Perceptron Model
Generally, neural networks have the following properties
Step8: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Test data is used as validation set. The epochs may be increased to improve accuracy.
Finally, test data is used to evaluate the model by calculating the model's classification accuracy.
Step9: Plot correctly and incorrectly predicted images
Let's plot some images which are correctly predicted and some images which are incorrectly predicted on our test dataset.
Step10: Confusion Matrix
Step11: The above confusion matrix heatmap shows that
Step12: Reshaping images
The image dimension expected by Keras for 2D (two-dimensional) convolution is in the format of [pixels][width][height].
For RGB color image, the first dimension (pixel) value would be 3 for the red, green and blue components. It's like having 3 image inputs for every single color image. In our case (for MNIST handwritten images), we have gray scale images. Hence, the pixel dimension is set as 1.
Step13: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
Step14: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0]
and so on...
Step15: Define Convolutional Neural Network (CNN) Model
Convolution Layer
- We define 32 feature maps with the size of 5x5 matrix
- We use ReLU (Rectified Linear Units) as the activation function
- This layer expects input image size of 1x28x28 ([pixels][height][weight])
Max Pooling Layer
- It has a pool size of 2x2
Dropout Layer
- Configured to randomly exclude 20% of neurons in the layer to reduce overfitting
Flatten
- Flattens the image into a single dimensional vector which is required as input by the fully connected layer
Fully connected Layer
- Contains 128 neurons
- relu is used as an activation function
- Output layer has num_classes=10 neurons for the 10 classes
- softmax activation function is used in the output layer
- adam gradient descent algorithm is used as optimizer to learn and update weights
Step16: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Test data is used as validation set. The epochs may be increased to improve accuracy.
Finally, test data is used to evaluate the model by calculating the model's classification accuracy.
Step17: Accuracy (98.75%) of Convolution Neural Network (CNN) model has improved as compared to the accuracy (97.91%) of Multi-layer Perceptron (MLP) model.
The accuracy of CNN model can be further increased by
Step18: Confusion Matrix
Step19: Using Multi-layer Perceptron (MLP) Model, we had the following heatmap outcome | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from keras.utils import np_utils
from keras.datasets import mnist
# for Multi-layer Perceptron (MLP) model
from keras.models import Sequential
from keras.layers import Dense
# for Convolutional Neural Network (CNN) model
from keras.layers import Dropout, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
# fix for issue: https://github.com/fchollet/keras/issues/2681
from keras import backend as K
K.set_image_dim_ordering('th')
Explanation: Digit Recognizer
Import Libraries
End of explanation
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Explanation: Loading Data
End of explanation
plt.figure(figsize=[20,8])
for i in range(6):
plt.subplot(1,6,i+1)
#plt.imshow(X_train[i])
plt.imshow(X_train[i], cmap='gray', interpolation='none')
plt.title("Class {}".format(y_train[i]))
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
Explanation: Plotting images and their class values
End of explanation
print (X_train.shape)
print (y_train.shape)
# print first train image values
# it contains a matrix of 28 rows and 28 cols
print (X_train[0])
Explanation: Viewing shape and content of data
End of explanation
# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
print (num_pixels, X_train.shape, X_test.shape)
print (X_train[1])
Explanation: Flattening images
The neural-network takes a single vector for training. Therefore, we convert the 28x28 pixels images into a single 784 (28 * 28 = 784) dimensional vector.
End of explanation
# pixel values are gray scale between 0 and 255
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
print (X_train[1])
Explanation: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
End of explanation
print (y_train.shape)
print (y_train[0])
# one hot encode outputs
# note that we have new variables with capital Y
# Y_train is different than y_train
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)
num_classes = Y_test.shape[1]
print (y_train.shape, Y_train.shape)
print (y_train[0], Y_train[0])
Explanation: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on...
End of explanation
def baseline_model():
# create model
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Explanation: Define Simple Perceptron Model
Generally, neural networks have the following properties:
- an input layer as a single vector
- zero or multiple hidden layers after input layer
- an output layer after hidden layers which represents class scores in classification problem
- each neuron in a hidden layer is fully connected to all neurons in the previous layer
- neurons in a single layer function independently and do not have any connection with other neurons of the same layer
A single-layer perceptron model is the simplest kind of neural network where there are only two layers: input layer and output layer. The inputs are directly fed into the outputs via a series of weights. It's a feed-forward network where the information moves in only one direction, i.e. forward direction from input nodes to output nodes.
A multi-layer perceptron model is the other kind of neural network where there are one or more hidden layers in between input and output layers. The information flows from input layer to hidden layers and then to output layers. These models can be of feed-forward type or they can also use back-propagation method. In back-propagation, the error is calculated in the output layer by computing the difference of actual output and predicted output. The error is then distributed back to the network layers. Based on this error, the algorithm will adjust the weights of each connection in order to reduce the error value. This type of learning is also referred as deep learning.
We create a simple neural network model with one hidden layer with 784 neurons. Our input layer will also have 784 neurons as we have flattened out training dataset into a single 784 dimensional vector.
softmax activation is used in the output layer.
adam gradient descent optimizer is used to learn weights.
End of explanation
model = baseline_model()
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=5, batch_size=200, verbose=1)
model.summary()
scores = model.evaluate(X_test, Y_test, verbose=0)
print (scores)
print ('Score: {}'.format(scores[0]))
print ('Accuracy: {}'.format(scores[1]))
Explanation: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Test data is used as validation set. The epochs may be increased to improve accuracy.
Finally, test data is used to evaluate the model by calculating the model's classification accuracy.
End of explanation
# get predicted values
predicted_classes = model.predict_classes(X_test)
# get index list of all correctly predicted values
correct_indices = np.nonzero(np.equal(predicted_classes, y_test))[0]
# get index list of all incorrectly predicted values
incorrect_indices = np.nonzero(np.not_equal(predicted_classes, y_test))[0]
print ('Correctly predicted: %i' % np.size(correct_indices))
print ('Incorrectly predicted: %i' % np.size(incorrect_indices))
plt.figure(figsize=[20,8])
for i, correct in enumerate(correct_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_test[correct].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[correct], y_test[correct]))
plt.figure(figsize=[20,8])
for i, incorrect in enumerate(incorrect_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_test[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], y_test[incorrect]))
Explanation: Plot correctly and incorrectly predicted images
Let's plot some images which are correctly predicted and some images which are incorrectly predicted on our test dataset.
End of explanation
from sklearn.metrics import confusion_matrix
import seaborn as sns
sns.set() # setting seaborn default for plots
class_names = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, predicted_classes)
np.set_printoptions(precision=2)
print ('Confusion Matrix in Numbers')
print (cnf_matrix)
print ('')
cnf_matrix_percent = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
print ('Confusion Matrix in Percentage')
print (cnf_matrix_percent)
print ('')
true_class_names = class_names
predicted_class_names = class_names
df_cnf_matrix = pd.DataFrame(cnf_matrix,
index = true_class_names,
columns = predicted_class_names)
df_cnf_matrix_percent = pd.DataFrame(cnf_matrix_percent,
index = true_class_names,
columns = predicted_class_names)
plt.figure(figsize = (8,6))
#plt.subplot(121)
ax = sns.heatmap(df_cnf_matrix, annot=True, fmt='d')
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
ax.set_title('Confusion Matrix in Numbers')
'''
plt.subplot(122)
ax = sns.heatmap(df_cnf_matrix_percent, annot=True)
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
'''
Explanation: Confusion Matrix
End of explanation
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Explanation: The above confusion matrix heatmap shows that:
- Most of value 5 was predicted as 3. 21 images of digit 5 were predicted as 3.
- The second most incorrect prediction was of number 8. 17 images of digit 8 were predicted as 3.
- The third highest wrong prediction was of number 9. 12 images of digit 9 were predicted as 4.
Improve Accuracy using Convolution Neural Network (CNN) Model
Convolutional Neural Networks (CNN) are similar to Multi-layer Perceptron Neural Networks. They are also made up of neurons that have learnable weights and biases. CNNs have been successfully applied to analyzing visual imagery. They are mostly being applied in image and video recognition, recommender systems and natural language processing.
A CNN consists of multiple hidden layers. The hidden layers are either convolutional, pooling or fully connected.
Convolution layer: Feature extraction is done in this layer. This layer applies convolution operation to the input and pass the result to the next layer. In the image classification problem, a weight matrix is defined in the convolution layer. A dot product is computed between the weight matrix and a small part (as the size of the weight matrix) of the input image. The weight runs across the image such that all the pixels are covered at least once, to give a convolved output.
The weight matrix behaves like a filter in an image extracting particular information from the original image matrix.
A weight combination might be extracting edges, while another one might extract a particular color, while another one might just blur the unwanted noise.
The weights are learnt such that the loss function is minimized similar to a Multi-layer Perceptron.
Therefore weights are learnt to extract features from the original image which help the network in correct prediction.
When we have multiple convolutional layers, the initial layer extract more generic features, while as the network gets deeper, the features extracted by the weight matrices are more and more complex and more suited to the problem at hand.
Reference: Architecture of Convolutional Neural Networks (CNNs) demystified
Stride: While computing the dot product, if the weight matrix moves 1 pixel at a time then we call it a stride of 1. Size of the image keeps on reducing as we increase the stride value.
Padding: Padding one or more layer of zeros across the image helps to resolve the output image size reduction issue caused by stride. Initial size of the image is retained after the padding is done.
Pooling layer: Reduction in number of feature parameters is done in this layer. When the image size is too large, we need a pooling layer in-between two convolution layers. This layer helps to reduce the number of trainable parameters of the input image. The sole purpose of pooling is to reduce the spatial size of the image. This layer is also used to control overfitting.
- Max pooling: Uses maximum value from each of the cluster of the prior layer
- Average pooling: Uses the average value from each of the cluster of the prior layer
Fully connected layer: This layer comes after convolution and pooling layers. This layer connects each neuron in one layer to every neuron in another layer. This is similar to the concept of layer connection of Multi-layer perceptron model. Error is computed in the output layer by computing the difference in actual output and predicted output. After that, back-propagation is used to update the weight and biases for error and loss reduction.
Load data
End of explanation
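# A hedged illustrative aside (not part of the original notebook): with 'valid'
# (i.e. no) padding, a convolution output side length is (W - F + 2P)/S + 1,
# and max pooling then divides it by the pool size. For the 28x28 MNIST input
# and the CNN defined later in this notebook:
conv_side = (28 - 5 + 2*0)//1 + 1   # 5x5 kernel, stride 1, no padding -> 24
pooled_side = conv_side // 2        # 2x2 max pooling -> 12
flat_units = 32 * pooled_side * pooled_side
print(conv_side, pooled_side, flat_units)  # 24 12 4608
Explanation: A short worked example of the stride/padding arithmetic described above, using the numbers of the CNN defined below (32 feature maps of 5x5 on a 28x28 image, followed by 2x2 max pooling). This cell is an illustrative addition to the original notebook.
End of explanation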
# reshape to be [samples][pixels][width][height]
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32')
print (num_pixels, X_train.shape, X_test.shape)
print (X_train[1])
Explanation: Reshaping images
The image dimension expected by Keras for 2D (two-dimensional) convolution is in the format of [pixels][width][height].
For RGB color image, the first dimension (pixel) value would be 3 for the red, green and blue components. It's like having 3 image inputs for every single color image. In our case (for MNIST handwritten images), we have gray scale images. Hence, the pixel dimension is set as 1.
End of explanation
# pixel values are gray scale between 0 and 255
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
print (X_train[1])
Explanation: Normalizing input values
As we can see above, the pixel values for each image are gray scaled between 0 and 255. We now, normalize those values from 0-255 to 0-1.
End of explanation
print (y_train.shape)
print (y_train[0])
# one hot encode outputs
# note that we have new variables with capital Y
# Y_train is different than y_train
Y_train = np_utils.to_categorical(y_train)
Y_test = np_utils.to_categorical(y_test)
num_classes = Y_test.shape[1]
print (y_train.shape, Y_train.shape)
print (y_train[0], Y_train[0])
Explanation: Converting target variable values into one-hot format
The output/target variable is in the format 0 to 9. As this is a multi-class classification problem, we convert the output class values into one-hot format which is simply a binary matrix, i.e.
value 0 will be converted to one-hot format as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
value 1 will be converted to one-hot format as [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
value 2 will be converted to one-hot format as [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on...
End of explanation
# baseline model for CNN
def baseline_model():
# create model
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Explanation: Define Convolutional Neural Network (CNN) Model
Convolution Layer
- We define 32 feature maps with the size of 5x5 matrix
- We use ReLU (Rectified Linear Units) as the activation function
- This layer expects input image size of 1x28x28 ([pixels][height][weight])
Max Pooling Layer
- It has a pool size of 2x2
Dropout Layer
- Configured to randomly exclude 20% of neurons in the layer to reduce overfitting
Flatten
- Flattens the image into a single dimensional vector which is required as input by the fully connected layer
Fully connected Layer
- Contains 128 neurons
- relu is used as an activation function
- Output layer has num_classes=10 neurons for the 10 classes
- softmax activation function is used in the output layer
- adam gradient descent algorithm is used as optimizer to learn and update weights
End of explanation
model = baseline_model()
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=5, batch_size=200, verbose=1)
model.summary()
scores = model.evaluate(X_test, Y_test, verbose=0)
print (scores)
print ('Score: {}'.format(scores[0]))
print ('Accuracy: {}'.format(scores[1]))
Explanation: Fit and Evaluate Model
The model is fit over 5 epochs/iteration. It takes a batch of 200 images in each iteration. Test data is used as validation set. The epochs may be increased to improve accuracy.
Finally, test data is used to evaluate the model by calculating the model's classification accuracy.
End of explanation
# get predicted values
predicted_classes = model.predict_classes(X_test)
# get index list of all correctly predicted values
correct_indices = np.nonzero(np.equal(predicted_classes, y_test))[0]
# get index list of all incorrectly predicted values
incorrect_indices = np.nonzero(np.not_equal(predicted_classes, y_test))[0]
print ('Correctly predicted: %i' % np.size(correct_indices))
print ('Incorrectly predicted: %i' % np.size(incorrect_indices))
plt.figure(figsize=[20,8])
for i, correct in enumerate(correct_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_test[correct].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[correct], y_test[correct]))
plt.figure(figsize=[20,8])
for i, incorrect in enumerate(incorrect_indices[:6]):
plt.subplot(1,6,i+1)
plt.imshow(X_test[incorrect].reshape(28,28), cmap='gray', interpolation='none')
plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], y_test[incorrect]))
Explanation: Accuracy (98.75%) of Convolution Neural Network (CNN) model has improved as compared to the accuracy (97.91%) of Multi-layer Perceptron (MLP) model.
The accuracy of CNN model can be further increased by:
- increasing the epoch number while fitting the model
- adding more convolution and pooling layers to the model
Plot correctly and incorrectly predicted images
Let's plot some images which are correctly predicted and some images which are incorrectly predicted on our test dataset.
End of explanation
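# A hedged sketch (not part of the original notebook) of the "add more
# convolution and pooling layers" suggestion above. The exact layer sizes
# here are illustrative assumptions.
def larger_model():
    model = Sequential()
    model.add(Conv2D(30, (5, 5), input_shape=(1, 28, 28), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(15, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# larger_model().summary()  # uncomment to inspect; fitting proceeds exactly as above
Explanation: A sketch of one way to deepen the network, as suggested above: a second convolution + pooling stage and an extra dense layer. The layer sizes are illustrative assumptions, not values from the original notebook, and it is trained the same way as the baseline model.
End of explanation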
from sklearn.metrics import confusion_matrix
import seaborn as sns
sns.set() # setting seaborn default for plots
class_names = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, predicted_classes)
np.set_printoptions(precision=2)
print ('Confusion Matrix in Numbers')
print (cnf_matrix)
print ('')
cnf_matrix_percent = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
print ('Confusion Matrix in Percentage')
print (cnf_matrix_percent)
print ('')
true_class_names = class_names
predicted_class_names = class_names
df_cnf_matrix = pd.DataFrame(cnf_matrix,
index = true_class_names,
columns = predicted_class_names)
df_cnf_matrix_percent = pd.DataFrame(cnf_matrix_percent,
index = true_class_names,
columns = predicted_class_names)
plt.figure(figsize = (8,6))
#plt.subplot(121)
ax = sns.heatmap(df_cnf_matrix, annot=True, fmt='d')
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
ax.set_title('Confusion Matrix in Numbers')
'''
plt.subplot(122)
ax = sns.heatmap(df_cnf_matrix_percent, annot=True)
ax.set_ylabel('True values')
ax.set_xlabel('Predicted values')
'''
Explanation: Confusion Matrix
End of explanation
submissions = pd.DataFrame({'ImageId':list(range(1,len(predicted_classes) + 1)), "Label": predicted_classes})
#submissions.to_csv("submission.csv", index=False, header=True)
Explanation: Using Multi-layer Perceptron (MLP) Model, we had the following heatmap outcome:
- Most of value 5 was predicted as 3. 21 images of digit 5 were predicted as 3.
- The second most incorrect prediction was of number 8. 17 images of digit 8 were predicted as 3.
- The third highest wrong prediction was of number 9. 12 images of digit 9 were predicted as 4.
Using Convolutional Neural Network (CNN) Model, we had the following improvements:
- Number 5 predicted as 3 has been reduced from 21 to 12.
- Number 8 predicted as 3 has been reduced from 17 to 9.
- Number 9 predicted as 4 has been reduced from 12 to 9.
The accuracy of CNN model can be further increased by:
- increasing the epoch number while fitting the model
- adding more convolution and pooling layers to the model
Submission to Kaggle
End of explanation |
10,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic login
Step1: sending cookie
Step2: Re-direction
302 means the URL has been redirected to some other location. We could use allow_redirects=False to disable this feature.
Step3: time out
Step4: session
session can persist cookies across requests
Step5: session can be overridden
Step6: Here req is the response returned by the server
When requests.get() or session.get() is called, the first thing that happens is that a request object is constructed; this request is sent to the server, and it is also stored on req
req then receives the response returned by the server
url='http://httpbin.org'
req=requests.get(url+'/basic-auth/user/passwd',auth=('user','passwd'))
print(req.text)
print(req.url)
print(req.status_code)
import json
payload={'some':'data'}
headers={'Content-Type':'application/json','Authorization':'some token'}
req=requests.post(url+'/post',data=json.dumps(payload),headers=headers)
print(req.text)
req=requests.post(url+'/post',data=payload)
print(req.text)
files={'file':open('dump.txt','rb')}
req=requests.post(url+'/post',files=files)
print(req.text)
print(req.status_code)
req.status_code==requests.codes.ok
print(req.headers)
req.headers['content-type']
Explanation: Basic login
End of explanation
cookies={}
cookies['cookie']='cookie-value'
req=requests.get(url+'/cookies',cookies=cookies)
print(req.text)
Explanation: sending cookie
End of explanation
req=requests.head('http://www.google.com',allow_redirects=True)
print(req.url)
req.history
req=requests.head('http://www.google.com',allow_redirects=False)
print(req.text)
Explanation: Re-direction
302 means the URL has been redirected to some other location. We could use allow_redirects=False to disable this feature.
End of explanation
try:
req=requests.get('http://google.com',timeout=0.03)
except BaseException as e:
print("It is exceeding timeout")
print(str(e))
Explanation: time out
End of explanation
import requests
session=requests.Session()
session.headers.update(headers)
session.data='some data here'
session.params={'key1':'value1','key2':'value2'}
session.auth=('user','passwd')
req=session.get(url+'/basic-auth/user/passwd')
print(req.text)
req=session.get(url+'/get')
print(req.text)
req=session.post(url+'/post',data='some data here')
print(req.text)
Explanation: session
session can persist cookies across requests
End of explanation
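# A hedged sketch (added for illustration): cookies set by the server are stored
# on the Session object and sent automatically with later requests.
# The /cookies/set/... endpoint below is assumed to be the standard httpbin one.
s = requests.Session()
s.get(url + '/cookies/set/sessioncookie/123456789')
req = s.get(url + '/cookies')
print(req.text)  # the cookie set by the first request is echoed back here
Explanation: A small added demonstration of the cookie persistence mentioned above: the first request sets a cookie on the session, and the second request sends it back automatically. The httpbin endpoint name is an assumption of this sketch, and a fresh session object is used so the configured one above is left untouched.
End of explanation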
req=session.get(url+'/get',headers={'Content-Type':'application','User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3'})
print(req.text)
Explanation: session can be overridden
End of explanation
print(req.request)
print(req.request.headers)
print(req.request.body)
print(req.headers)
print(req.text)
Explanation: Here req is the response returned by the server
When requests.get() or session.get() is called, the first thing that happens is that a request object is constructed; this request is sent to the server, and it is also stored on req
req then receives the response returned by the server
End of explanation |
10,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tacotron2
Step1: Run tacotron2
Step2: speech contains the raw waveform and sampling rate, which can be played back.
Step3: You can also plot the waveform. | Python Code:
%tensorflow_version 1.x
!pip3 install --quiet ml4a
Explanation: tacotron2: Text-to-speech synthesis
Generates speech audio from a text string. See the original code and paper.
Set up ml4a and enable GPU
If you don't already have ml4a installed, or you are opening this in Colab, first enable GPU (Runtime > Change runtime type), then run the following cell to install ml4a and its dependencies.
End of explanation
from ml4a import audio
from ml4a.models import tacotron2
text = 'Hello everyone!'
speech = tacotron2.run(text)
Explanation: Run tacotron2
End of explanation
audio.display(speech.wav, speech.sampling_rate)
Explanation: speech contains the raw waveform and sampling rate, which can be played back.
End of explanation
%matplotlib inline
audio.plot(speech.wav, speech.sampling_rate)
Explanation: You can also plot the waveform.
End of explanation |
10,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manual sanity checks for the formula and text within mmrd.tex
Check fitting formula by hand, and compare with raw data
Step1: Load data file that contains QNM amplitudes from fitting algorithm described in arXiv
Step2: Implement Final Mass and Spin Fits from arXiv
Step3: Implement Model for $M_f \omega (\eta)$
Step4: Plot Fit on Raw Data as well as residuals | Python Code:
%matplotlib inline
from numpy import exp,sqrt,log,linspace,pi,sin
import kerr
from os import system
import matplotlib as mpl
from matplotlib.pyplot import *
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['font.weight'] = 'normal'
# print mpl.rcParams.keys()
Explanation: Manual sanity checks for the formula and text within mmrd.tex
Check fitting formula by hand, and compare with raw data
End of explanation
########################################################################
# NOTE THAT THESE MUST BE CONSISTENT WITH THE HARD CODED EQUATIONS BELOW
########################################################################
# Define QNM indeces of interest
l = 2; m = 2; n = 0;
# Define data storage location and full path of data file
storage_dir = '/Users/book/GARREG/Spectroscopy/Ylm_Depictions/NonPrecessing/MULTI_DATA_6/Misc/data/'
if l==m:
data_file_string = '../bin/data/complex_A_on_eta_2212%i%i%i1_l_eq_m.asc' % (l,m,n)
else:
data_file_string = '../bin/data/complex_A_on_eta_2212%i%i%i1_l_eq_m_minus_1.asc' % (l,m,n)
# Copy the data to the local repository location
system( 'cp %s/*.asc ../bin/data/' % storage_dir )
# Load the ascii data
data = np.loadtxt(data_file_string)
# Raw data values
raw_eta = data[:,0]
raw_A = data[:,2] + 1j*data[:,3]
A_err = data[:,4]
raw_jf = data[:,8]
raw_Mf = data[:,6]
# Domain over which to evaluate fits
eta = linspace(0,0.25,200)
Explanation: Load data file that contains QNM amplitudes from fitting algorithm described in arXiv:1404.3197
End of explanation
# Implement Final Mass and Spin Fits from arXiv:1404.3197
jfit = lambda ETA: ETA * ( 3.4339 - 3.7988*ETA + 5.7733*ETA**2 - 6.3780*ETA**3 )
Mfit = lambda ETA: 1.0 + ETA * ( -0.046297 + -0.71006*ETA + 1.5028*ETA**2 + -4.0124*ETA**3 + -0.28448*ETA**4 )
# Verify Fits with plot
figure(figsize=1.2*np.array((11, 5)), dpi=120, facecolor='w', edgecolor='k')
subplot(1,2,1)
plot( raw_eta, raw_jf, 'o', alpha=0.6, label=r'$j_f$', color=0.5*np.array([1,1,1]),markersize=8 )
plot( eta, jfit(eta), '-r', label='Fit' )
xlabel(r'$\eta$')
ylabel(r'$j_f$')
legend(loc='upper left',numpoints=1,frameon=False)
a = subplot(1,2,2)
plot( raw_eta[:-2], raw_Mf[:-2], 'o', alpha=0.6, label=r'$M_f$', color=0.5*np.array([1,1,1]),markersize=8 )
plot( raw_eta[-2:], raw_Mf[-2:], 'x', label=r'Outliers', color='k',markersize=8 )
plot( eta, Mfit(eta), '-r', label='Fit' )
xlabel(r'$\eta$')
ylabel(r'$M_f$')
legend(loc='upper right',numpoints=1,frameon=False)
savefig('review_jf_Mf.pdf')
Explanation: Implement Final Mass and Spin Fits from arXiv:1404.3197 and Plot against data
End of explanation
K = lambda jf: ( log(2.0-jf)/log(3.0) )**(1.0 / (2.0 + l - m) )
Mwfit= { (2,2,0) : lambda JF: 2.0/2 + K(JF) * ( 1.5557*exp(2.9034j) + 1.9311*exp(5.9219j)*K(JF) + 2.0417*exp(2.7627j)*K(JF)**2 + 1.3436*exp(5.9187j)*K(JF)**3 + 0.3835*exp(2.8029j)*K(JF)**4 ),
(3,2,0) : lambda JF: 2.0/2 + K(JF) * ( 0.5182*exp(0.3646j) + 3.1469*exp(3.1371j)*K(JF) + 4.5196*exp(6.2184j)*K(JF)**2 + 3.4336*exp(3.0525j)*K(JF)**3 + 1.0929*exp(6.1713j)*K(JF)**4 ) }
wfit = lambda ETA: Mwfit[(l,m,n)](jfit(ETA)) / Mfit(ETA)
#
figure(figsize=0.8*np.array((8, 5)), dpi=120, facecolor='w', edgecolor='k')
#
jf_test = sin( 0.5*pi*linspace( -1,1, 1e3 ) )
plot( K(jf_test), Mwfit[(l,m,n)](jf_test).real, 'k' )
xlabel('$\kappa(j_f)$')
ylabel('$\omega_{%i%i%i}(\kappa)$'%(l,m,n))
Explanation: Implement Model for $M_f \omega (\eta)$
End of explanation
#
Afit = { (2,2,0) : lambda ETA: (wfit(ETA)**2) * ( 0.9252*ETA + 0.1323*ETA**2 ),
(3,2,0) : lambda ETA: (wfit(ETA)**2) * ( 0.1957*exp(5.8008j)*ETA + 1.5830*exp(3.2194j)*ETA**2 + 5.0338*exp(0.6843j)*ETA**3 + 3.7366*exp(4.1217j)*ETA**4 ) }
#
figure(figsize=1.2*np.array((13, 6)), dpi=120, facecolor='w', edgecolor='k')
# Make Subplot
ax1 = subplot(1,2,1)
errorbar( raw_eta, abs(raw_A),fmt='o', yerr=A_err, alpha=0.9, label=r'$A_{lmn}$', color=0.5*np.array([1,1,1]),markersize=8 )
plot( eta, abs(Afit[(l,m,n)](eta)), '-r' )
# Label Axes
xlabel(r'$\eta$')
xlim( [0,0.251] )
ylabel(r'$|A_{%i%i%i}|$'%(l,m,n))
# Make Subplot
subplot(1,2,2)
plot( raw_eta, 100*(abs(raw_A)-abs(Afit[(l,m,n)](raw_eta)))/abs(raw_A), 'ok', alpha=0.6 )
plot( ax1.get_xlim(), [0,0], '--k', alpha=0.5 )
# Label Axes
xlabel(r'$\eta$')
xlim(ax1.get_xlim())
ylabel(r'$|A_{%i%i%i}|$'%(l,m,n))
# Save the plot
savefig('review_A%i%i%i_Amp.pdf'%(l,m,n))
# What about the phases?
Explanation: Plot Fit on Raw Data as well as residuals: Note that error bars do not take into account NR error, only cross-validation errors
End of explanation |
10,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encontro 05
Step1: Configurando a biblioteca
Step2: O objetivo desta atividade é realizar $24$ simulações de centralidade diferentes, para avaliar o desempenho de medidas clássicas em relação a diferentes processos de fluxo. Dessas $24$ simulações, são $12$ sobre o grafo g1 e $12$ sobre o grafo g2.
Step3: O primeiro grafo corresponde aos casamentos entre famílias de Florença durante a Renascença.
J. F. Padgett, C. K. Ansell, 1993. Robust action and the rise of the Medici, 1400–1434. American Journal of
Sociology 98, págs. 1259-1319.
Step4: O segundo grafo corresponde ao estudo de caso do primeiro encontro.
Step5: Das $12$ simulações sobre um dos grafos, são $6$ de closeness e $6$ de betweenness
Step6: Em uma simulação de closeness, para cada origem s e destino t, mede-se o tempo que o fluxo leva para chegar de s a t. O closeness simulado de um nó s é a soma de todos os tempos medidos quando esse nó é uma origem. Como o fluxo pode ter passos aleatórios, o processo é repetido TIMES vezes e considera-se a média.
A função abaixo calcula o closeness simulado em relação a transferência através de geodésicas.
Step7: Vamos comparar o closeness simulado com o closeness teórico.
Para calcular o segundo, basta usar o algoritmo de busca em largura que vimos em encontros anteriores.
Step8: Comparação de closeness no grafo g1
Step9: Comparação de closeness no grafo g2
Step10: Em uma simulação de betweenness, para cada origem s, destino t e intermediário n, mede-se a quantidade de vezes que o fluxo passa por n antes de chegar de s a t. O betweenness simulado de um nó n é a soma de todas as quantidades medidas quando esse nó é um intermediário. Como o fluxo pode ter passos aleatórios, o processo é repetido TIMES vezes e considera-se a média.
A função abaixo calcula o betweenness simulado em relação a transferência através de geodésicas.
Step11: Vamos comparar o betweenness simulado com o betweennesss teórico.
Para calcular o segundo, basta usar a função caixa-preta build_betweenness. Vocês vão aprender a abrir essa caixa-preta em encontros posteriores.
Comparação de betweenness no grafo g1
Step12: Comparação de betweenness no grafo g2
Step13: Entregáveis
Para quinta 24/8, você deve entregar todas as funções abaixo.
Funções auxiliares para evitar repetição de código são permitidas e encorajadas. | Python Code:
import sys
sys.path.append('..')
import socnet as sn
Explanation: Encontro 05: Centrality Simulation
Importing the library:
End of explanation
sn.node_size = 10
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
sn.node_label_position = 'top center'
Explanation: Configuring the library:
End of explanation
g1 = sn.load_graph('renaissance.gml', has_pos=True)
g2 = sn.load_graph('../encontro02/1-introducao.gml', has_pos=True)
Explanation: The goal of this activity is to run $24$ different centrality simulations, to evaluate how classic measures perform with respect to different flow processes. Of these $24$ simulations, $12$ are on graph g1 and $12$ on graph g2.
End of explanation
sn.show_graph(g1, nlab=True)
Explanation: The first graph corresponds to the marriages between families of Florence during the Renaissance.
J. F. Padgett, C. K. Ansell, 1993. Robust action and the rise of the Medici, 1400–1434. American Journal of
Sociology 98, pp. 1259-1319.
End of explanation
sn.show_graph(g2, nlab=True)
Explanation: The second graph corresponds to the case study from the first meeting.
End of explanation
from random import choice
TIMES = 1000
Explanation: Of the $12$ simulations on one of the graphs, $6$ are of closeness and $6$ of betweenness:
serial duplication through paths;
transfer through paths;
serial duplication through trails;
transfer through trails;
serial duplication through walks;
transfer through walks.
End of explanation
def simulate_closeness_transfer_geodesic(g):
# Inicialização das médias.
for n in g.nodes():
g.node[n]['simulated_closeness'] = 0
for _ in range(TIMES):
for s in g.nodes():
# Inicialização do closeness de s.
g.node[s]['closeness'] = 0
for t in g.nodes():
if s != t:
# Função caixa-preta que calcula, para cada nó, seu subconjunto
# de vizinhos que pertencem a geodésicas de s a t. Esse subconjunto
# é armazenado no atributo shortest_neighbors. Vocês vão aprender
# a abrir essa caixa-preta em encontros posteriores.
sn.build_shortest_paths(g, s, t)
# Dependendo do processo, o fluxo pode não ter sucesso, ou seja,
# pode ficar preso em uma parte do grafo sem nunca atingir t.
# Quando isso acontece, simplesmente tenta-se novamente.
success = False
while not success:
# Chamamos de "dono" um nó que possui o bem conduzido pelo
# fluxo. No início do processo, sabemos que o único dono é s.
for n in g.nodes():
g.node[n]['owner'] = False
g.node[s]['owner'] = True
time = 1
while True:
# O conjunto nodes_reached indica os nós que o fluxo
# alcança ao "avançar mais um passo".
nodes_reached = set()
for n in g.nodes():
if g.node[n]['owner']:
# TRANSFERÊNCIA: Escolhemos aleatoriamente um dos vizinhos válidos.
# GEODÉSICA: Os vizinhos válidos são os que pertencem a geodésicas.
m = choice(g.node[n]['shortest_neighbors'])
nodes_reached.add(m)
# TRANSFERÊNCIA: O fluxo transfere o bem para os nós que o fluxo
# alcança, portanto o nó deixa de ser dono. Nos processos baseados
# em duplicação, a linha abaixo não pode ser executada.
g.node[n]['owner'] = False
# Todos os nós que o fluxo alcança tornam-se donos.
for n in nodes_reached:
g.node[n]['owner'] = True
# Se alcançamos t, interrompemos o fluxo e paramos de tentar.
if t in nodes_reached:
success = True
break
# Se não alcançamos ninguém, interrompemos o fluxo e tentamos novamente.
if not nodes_reached:
break
time += 1
# Soma do tempo de s a t ao closeness de s.
g.node[s]['closeness'] += time
# Incremento das médias.
for n in g.nodes():
g.node[n]['simulated_closeness'] += g.node[n]['closeness']
# Finalização das médias.
for n in g.nodes():
g.node[n]['simulated_closeness'] /= TIMES
Explanation: In a closeness simulation, for each source s and target t, we measure the time the flow takes to get from s to t. The simulated closeness of a node s is the sum of all the times measured when that node is a source. Since the flow may take random steps, the process is repeated TIMES times and the average is taken.
The function below computes the simulated closeness with respect to transfer through geodesics.
End of explanation
from math import inf, isinf
from queue import Queue
def build_closeness(g):
for s in g.nodes():
# início da busca em largura
q = Queue()
for n in g.nodes():
g.node[n]['d'] = inf
g.node[s]['d'] = 0
q.put(s)
while not q.empty():
n = q.get()
for m in g.neighbors(n):
if isinf(g.node[m]['d']):
g.node[m]['d'] = g.node[n]['d'] + 1
q.put(m)
# fim da busca em largura
g.node[s]['theoretical_closeness'] = 0
for n in g.nodes():
g.node[s]['theoretical_closeness'] += g.node[n]['d']
Explanation: Let us compare the simulated closeness with the theoretical closeness.
To compute the latter, it suffices to use the breadth-first search algorithm that we saw in previous meetings.
End of explanation
build_closeness(g1)
simulate_closeness_transfer_geodesic(g1)
for n in g1.nodes():
print(g1.node[n]['label'], g1.node[n]['theoretical_closeness'], g1.node[n]['simulated_closeness'])
Explanation: Closeness comparison on graph g1:
(this will take a few seconds)
End of explanation
build_closeness(g2)
simulate_closeness_transfer_geodesic(g2)
for n in g2.nodes():
print(g2.node[n]['label'], g2.node[n]['theoretical_closeness'], g2.node[n]['simulated_closeness'])
Explanation: Closeness comparison on graph g2:
(this will take a few seconds)
End of explanation
def simulate_betweenness_transfer_geodesic(g):
# Inicialização das médias.
for n in g.nodes():
g.node[n]['simulated_betweenness'] = 0
for _ in range(TIMES):
# Inicialização de todos os betweenness.
for n in g.nodes():
g.node[n]['betweenness'] = 0
for s in g.nodes():
for t in g.nodes():
if s != t:
# Função caixa-preta que calcula, para cada nó, seu subconjunto
# de vizinhos que pertencem a geodésicas de s a t. Esse subconjunto
# é armazenado no atributo shortest_neighbors. Vocês vão aprender
# a abrir essa caixa-preta em encontros posteriores.
sn.build_shortest_paths(g, s, t)
# Dependendo do processo, o fluxo pode não ter sucesso, ou seja,
# pode ficar preso em uma parte do grafo sem nunca atingir t.
# Quando isso acontece, simplesmente tenta-se novamente.
success = False
while not success:
# Chamamos de "dono" um nó que possui o bem conduzido pelo
# fluxo. No início do processo, sabemos que o único dono é s.
for n in g.nodes():
g.node[n]['owner'] = False
g.node[s]['owner'] = True
for n in g.nodes():
if n != s and n != t:
g.node[n]['partial_betweenness'] = 0
while True:
# O conjunto nodes_reached indica os nós que o fluxo
# alcança ao "avançar mais um passo".
nodes_reached = set()
for n in g.nodes():
if g.node[n]['owner']:
# TRANSFERÊNCIA: Escolhemos aleatoriamente um dos vizinhos válidos.
# GEODÉSICA: Os vizinhos válidos são os que pertencem a geodésicas.
m = choice(g.node[n]['shortest_neighbors'])
nodes_reached.add(m)
# TRANSFERÊNCIA: O fluxo transfere o bem para os nós que o fluxo
# alcança, portanto o nó deixa de ser dono. Nos processos baseados
# em duplicação, a linha abaixo não pode ser executada.
g.node[n]['owner'] = False
# Todos os nós que o fluxo alcança tornam-se donos.
for n in nodes_reached:
g.node[n]['owner'] = True
# Se alcançamos t, interrompemos o fluxo e paramos de tentar.
if t in nodes_reached:
success = True
break
# Se não alcançamos ninguém, interrompemos o fluxo e tentamos novamente.
if not nodes_reached:
break
for n in nodes_reached:
if n != s and n != t:
g.node[n]['partial_betweenness'] += 1
# Soma de todos os betweenness parciais dos intermediários.
for n in g.nodes():
if n != s and n != t:
g.node[n]['betweenness'] += g.node[n]['partial_betweenness']
# Incremento das médias. Divide-se o valor por 2 para
# desconsiderar a simetria de um grafo não-dirigido.
for n in g.nodes():
g.node[n]['simulated_betweenness'] += g.node[n]['betweenness'] / 2
# Finalização das médias.
for n in g.nodes():
g.node[n]['simulated_betweenness'] /= TIMES
Explanation: In a betweenness simulation, for each source s, target t and intermediary n, we measure the number of times the flow passes through n before getting from s to t. The simulated betweenness of a node n is the sum of all the quantities measured when that node is an intermediary. Since the flow may take random steps, the process is repeated TIMES times and the average is taken.
The function below computes the simulated betweenness with respect to transfer through geodesics.
End of explanation
sn.build_betweenness(g1)
simulate_betweenness_transfer_geodesic(g1)
for n in g1.nodes():
print(g1.node[n]['label'], g1.node[n]['theoretical_betweenness'], g1.node[n]['simulated_betweenness'])
Explanation: Let us compare the simulated betweenness with the theoretical betweenness.
To compute the latter, it suffices to use the black-box function build_betweenness. You will learn how to open this black box in later meetings.
Betweenness comparison on graph g1:
(this will take a few seconds)
End of explanation
sn.build_betweenness(g2)
simulate_betweenness_transfer_geodesic(g2)
for n in g2.nodes():
print(g2.node[n]['label'], g2.node[n]['theoretical_betweenness'], g2.node[n]['simulated_betweenness'])
Explanation: Betweenness comparison on graph g2:
(this will take a few seconds)
End of explanation
def simulate_closeness_serial_path(g):
pass
def simulate_closeness_transfer_path(g):
pass
def simulate_closeness_serial_trail(g):
pass
def simulate_closeness_transfer_trail(g):
pass
def simulate_closeness_serial_walk(g):
pass
def simulate_closeness_transfer_walk(g):
pass
def simulate_betweenness_serial_path(g):
pass
def simulate_betweenness_transfer_path(g):
pass
def simulate_betweenness_serial_trail(g):
pass
def simulate_betweenness_transfer_trail(g):
pass
def simulate_betweenness_serial_walk(g):
pass
def simulate_betweenness_transfer_walk(g):
pass
Explanation: Deliverables
By Thursday 24/8, you must hand in all of the functions below.
Helper functions to avoid code repetition are allowed and encouraged.
End of explanation |
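def simulate_closeness_transfer_walk(g):
    # A hedged sketch (not the official solution): for transfer through walks
    # the flow is a single random walk from s that moves to any neighbour until
    # it reaches t, so no shortest-path information is needed at all.
    for n in g.nodes():
        g.node[n]['simulated_closeness'] = 0
    for _ in range(TIMES):
        for s in g.nodes():
            g.node[s]['closeness'] = 0
            for t in g.nodes():
                if s != t:
                    current = s
                    time = 0
                    while current != t:
                        current = choice(list(g.neighbors(current)))
                        time += 1
                    g.node[s]['closeness'] += time
        for n in g.nodes():
            g.node[n]['simulated_closeness'] += g.node[n]['closeness']
    for n in g.nodes():
        g.node[n]['simulated_closeness'] /= TIMES
Explanation: An illustrative sketch of how one of the twelve deliverables could start (closeness under transfer through walks), assuming a walk may revisit nodes freely. The other variants restrict the choice of neighbour (paths and trails) or keep several owners at once (serial duplication). It is meant only as a starting point, not as the official solution.
End of explanation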
10,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 3
We're going to switch gears a little and talk about the astrophysical part of Astrophysical Machine Learning. This exercise will have you examine two different forms of data. The first is an actual image of the sky, and the second a catalog of sources (galaxies).
PART I
Step1: Now the image is just a numpy array (matrix) that can be indexed like any other array. The np.flipup function ("flip up-down") was used so that when you display the array it will have the same orientation as when you look at the fits image with DS9. Once you read in the image, apply each filter to the image and display the image. To display, you will need to use the matplotlib module. For example, to display the image above, you could use | Python Code:
from astropy.io import fits as fits
fitsimage=fits.open('filename.fits')
image=np.flipud(fitsimage[0].data)
Explanation: Exercise 3
We're going to switch gears a little and talk about the astrophysical part of Astrophysical Machine Learning. This exercise will have you examine two different forms of data. The first is an actual image of the sky, and the second a catalog of sources (galaxies).
PART I: Astronomical images (and catalogs for the that matter) are most often stored in FITS format, which stands for Flexible Image Transport System. There are several programs for opening and examining FITS images. Probably the easiest one to install would be the SAOImage DS9 Astronomical Data Visualization Application. I recommend installing DS9 on your system. For this part of the exercise, download this image of a region of the sky (near the Coma cluster). For this, you can use the SkyView virtual observatory page. Go to the page and enter "coma cluster" in the “Coordinates or Source” field, then under the Optical:DSS: section select the "DSS1 Red" and press submit. This should open another page which has an image that looks like this:
Download the FITS file associated with the image (it should say “FITS” below the image) and save it in your working python directory. For the following exercise, you will need to have the Astropy package installed.
MEDIAN, MEAN, MAX AND MIN: A common way to manipulate an image in order to highlight features that might not be obvious at first glance, is to modify the pixel values by applying a filter-function to the image. The way these filter-functions are applied is to replace the value of each pixel by another value that is related in some way to the values of surrounding pixels. For example, a $maxFilter()$ function might replace each pixel value by the maximum pixel value in a 3×3 or 5×5 box surrounding the pixel (the pixel itself is also included). A minFilter() would do the same thing, except replace each pixel by the minimum value in the box. For this part of the exercise, you must:
A) Start a python script labeled image_filters.py. This script should contain four functions for computing a $medianFilter()$, $meanFilter()$, $maxFilter()$, and $minFilter()$ of an input image. For now, make the filter size 5×5 pixels, and ignore the edge of the image where the filter would run over the edge.
B) After the functions, read in the FITS file that you got above into an array. This can be done by using astropy like so:
End of explanation
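Below is a minimal added sketch (not part of the original exercise) of the four 5×5 filters requested in part (A); it leaves the image edges untouched, as the instructions allow, and assumes the input is a 2-D numpy array.
import numpy as np

def _window_filter(image, func, size=5):
    # helper: apply func to every size x size neighbourhood, leaving the edges unchanged
    half = size // 2
    out = np.array(image, dtype=float)
    for i in range(half, image.shape[0] - half):
        for j in range(half, image.shape[1] - half):
            out[i, j] = func(image[i - half:i + half + 1, j - half:j + half + 1])
    return out

def medianFilter(image, size=5):
    return _window_filter(image, np.median, size)

def meanFilter(image, size=5):
    return _window_filter(image, np.mean, size)

def maxFilter(image, size=5):
    return _window_filter(image, np.max, size)

def minFilter(image, size=5):
    return _window_filter(image, np.min, size)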
import matplotlib.pyplot as plt
plt.imshow(image)
plt.show()
Explanation: Now the image is just a numpy array (matrix) that can be indexed like any other array. The np.flipud function ("flip up-down") was used so that when you display the array it will have the same orientation as when you look at the FITS image with DS9. Once you read in the image, apply each filter to the image and display the image. To display, you will need to use the matplotlib module. For example, to display the image above, you could use:
End of explanation |
10,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6b Calculate binned gradient-network overlap
This file works out the average z-score inside a gradient percentile area
written by Jan Freyberg for the Brainhack 2017 Project
This should reproduce this analysis
Step1: Define the variables for this analysis.
1. how many percentiles the data is divided into
2. where the Z-Maps (from neurosynth) lie
3. where the binned gradient maps lie
4. where a mask of the brain lies (not used at the moment).
Step2: Next define a function to take the average of an image inside a mask and return it
Step3: This next cell will step through each combination of gradient, subject and network file to calculate the average z-score inside the mask defined by the gradient percentile. This will take a long time to run!
Step4: To save time next time, we'll save the result of this to file
Step5: Extract a list of which group contains which participants.
Step6: Make a plot of the z-scores inside each parcel for each gradient, split by group! | Python Code:
% matplotlib inline
from __future__ import print_function
import nibabel as nib
from nilearn.image import resample_img
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import os.path
# The following are a progress bar, these are not strictly necessary:
from ipywidgets import FloatProgress
from IPython.display import display
Explanation: 6b Calculate binned gradient-network overlap
This file works out the average z-score inside a gradient percentile area
written by Jan Freyberg for the Brainhack 2017 Project
This should reproduce this analysis
End of explanation
percentiles = range(10)
# unthresholded z-maps from neurosynth:
zmaps = [os.path.join(os.getcwd(), 'ROIs_Mask', fname) for fname in os.listdir(os.path.join(os.getcwd(), 'ROIs_Mask'))
if 'z.nii' in fname]
# individual, binned gradient maps, in a list of lists:
gradmaps = [[os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile), fname)
for fname in os.listdir(os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile)))]
for percentile in percentiles]
# a brain mask file:
brainmaskfile = os.path.join(os.getcwd(), 'ROIs_Mask', 'rbgmask.nii')
Explanation: Define the variables for this analysis.
1. how many percentiles the data is divided into
2. where the Z-Maps (from neurosynth) lie
3. where the binned gradient maps lie
4. where a mask of the brain lies (not used at the moment).
End of explanation
def zinsidemask(zmap, mask):
# average the z-map values inside the voxels where the gradient mask is non-zero and the brain mask is positive
zaverage = zmap.dataobj[
np.logical_and(np.not_equal(mask.dataobj, 0), brainmask.dataobj>0)
].mean()
return zaverage
Explanation: Next define a function to take the average of an image inside a mask and return it:
End of explanation
zaverages = np.zeros([len(zmaps), len(gradmaps), len(gradmaps[0])])
# load first gradmap just for resampling
gradmap = nib.load(gradmaps[0][0])
# Load a brainmask
brainmask = nib.load(brainmaskfile)
brainmask = resample_img(brainmask, target_affine=gradmap.affine, target_shape=gradmap.shape)
# Initialise a progress bar:
progbar = FloatProgress(min=0, max=zaverages.size)
display(progbar)
# loop through the network files:
for i1, zmapfile in enumerate(zmaps):
# load the neurosynth activation file:
zmap = nib.load(zmapfile)
# make sure the images are in the same space:
zmap = resample_img(zmap,
target_affine=gradmap.affine,
target_shape=gradmap.shape)
# loop through the bins:
for i2, percentile in enumerate(percentiles):
# loop through the subjects:
for i3, gradmapfile in enumerate(gradmaps[percentile]):
gradmap = nib.load(gradmapfile) # load image
zaverages[i1, i2, i3] = zinsidemask(zmap, gradmap) # calculate av. z-score
progbar.value += 1 # update progressbar (only works in jupyter notebooks)
Explanation: This next cell will step through each combination of gradient, subject and network file to calculate the average z-score inside the mask defined by the gradient percentile. This will take a long time to run!
End of explanation
# np.save(os.path.join(os.getcwd(), 'data', 'average-abs-z-scores'), zaverages)
zaverages = np.load(os.path.join(os.getcwd(), 'data', 'average-z-scores.npy'))
Explanation: To save time next time, we'll save the result of this to file:
End of explanation
df_phen = pd.read_csv('data' + os.sep + 'SelectedSubjects.csv')
diagnosis = df_phen.loc[:, 'DX_GROUP']
fileids = df_phen.loc[:, 'FILE_ID']
groupvec = np.zeros(len(gradmaps[0]))
for filenum, filename in enumerate(gradmaps[0]):
fileid = os.path.split(filename)[-1][5:-22]
groupvec[filenum] = (diagnosis[fileids.str.contains(fileid)])
print(groupvec.shape)
Explanation: Extract a list of which group contains which participants.
End of explanation
fig = plt.figure(figsize=(15, 8))
grouplabels = ['Control group', 'Autism group']
for group in np.unique(groupvec):
ylabels = [os.path.split(fname)[-1][0:-23].replace('_', ' ') for fname in zmaps]
# remove duplicates!
includenetworks = []
seen = set()
for string in ylabels:
includenetworks.append(string not in seen)
seen.add(string)
ylabels = [string for index, string in enumerate(ylabels) if includenetworks[index]]
tmp_zaverages = zaverages[includenetworks, :, :]
tmp_zaverages = tmp_zaverages[:, :, groupvec==group]
tmp_zaverages = tmp_zaverages[np.argsort(np.argmax(tmp_zaverages.mean(axis=2), axis=1)), :, :]
# make the figure
plt.subplot(1, 2, int(group))
cax = plt.imshow(tmp_zaverages.mean(axis=2),
cmap='bwr', interpolation='nearest',
vmin=zaverages.mean(axis=2).min(),
vmax=zaverages.mean(axis=2).max())
ax = plt.gca()
plt.title(grouplabels[int(group-1)])
plt.xlabel('Percentile of principle gradient')
ax.set_xticks(np.arange(0, len(percentiles), 3))
ax.set_xticklabels(['100-90', '70-60', '40-30', '10-0'])
ax.set_yticks(np.arange(0, len(seen), 1))
ax.set_yticklabels(ylabels)
ax.set_yticks(np.arange(-0.5, len(seen), 1), minor=True)
ax.set_xticks(np.arange(-0.5, 10, 1), minor=True)
ax.grid(which='minor', color='w', linewidth=2)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.01, 0.7])
fig.colorbar(cax, cax=cbar_ax, label='Average Z-Score')
#fig.colorbar(cax, cmap='bwr', orientation='horizontal')
plt.savefig('./figures/z-scores-inside-gradient-bins.png')
Explanation: Make a plot of the z-scores inside each parcel for each gradient, split by group!
End of explanation |
10,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Evaluation
Pipeline and Feature Unions
It is always a good decision to make your code as readable as possible. Not only so that others can pick it up and use it easily, but so that you don't end up hating your past self when you take a look at something you wrote several months prior. Pipelines are a great tool for integrating and reusing a series of data transformations and fits into a workflow. In this exercise you'll build some pipelines and feature unions using the Concrete Compressive Data Set.
1 - Head over to the Machine Learning Repository, download the Concrete Compressive Data Set, put it into a dataframe, and split into training and test sets. Be sure to familiarize yourself with the data before proceeding.
Step1: 2 - Build a pipeline for polynomial fitting, fit polynomials of degree 1 to the number of features, and plot your training and testing errors for each. Comment on your results
Step2: Increasing the number of features increases the training score but we get very bad generalization (with a really strange peak for the fifth degree polynomial). The best score is for the third degree (84%).
3 - Build a pipeline that will perform feature selection on the dataset using the F-Statistic, producing the same plots as in part (2). Comment on your results.
Step3: After adding the sixth variable we have little improvement in the results.
4 - Build a pipeline that standardizes your data, performs feature selection via regularization, and then fits a model of your choice. Produce the same plots as above and comment on your results.
Step4: Test score is better than training using LassoCV as an estimator, using random forest we get more overfitting but also better overall performances than all the previous methods. The best choice for the threshold seems to be around 0.5.
5 - Create two pipelines for feature selection of a technique of your choice, scaling the data before hand. Then, join these two pipelines with a FeatureUnion, and fit a polynomial model, also in a pipeline. Comment on your results.
Step5: By joining the pipelines I get the features from both the feature selections (which is quite strange as a model) but I get a very good test score of 81% (which is still worse than the 84% obtained only with poly features).
Evaluation Metrics
It is very important that you have more than one tool in your toolbox for evaluating the usefulness of your model, as in different contexts, different metrics are preferred. For example, suppose a new medical test is developed for detecting cancer that has a 0.25 probability of incorrectly labeling a patient as having cancer when they in fact do not, but a 0.001 probability of labeling a cancer patient as cancer free. With this sort of test, you can be sure that those who do have cancer will almost certainly be classified correctly, but a positive does not necessarily mean that the patient has cancer, meaning additional tests are in order. These metrics have different names and, depending on the situation, you may be interested in minimizing different quantities, which is the topic we will explore in this exercise.
1 - Head over the Machine Learning Repository and download the Breast Cancer Diagnostic Dataset, put it in a dataframe, and split into testing and training sets.
Step6: Bare nuclei is an object
Step7: Also, I'm encoding the class column with 0s and 1s
Step8: 2 - Using a classification algorithm of your choice, fit a model to the data and predict the results, making your results as accurate as possible.
Step9: Seven is the number of neighbors with the best CV score
Step10: The accuracy is almost 98%, which is quite good!
3 - Using your model in part (2), compute the following quantities, without using sklearn.metrics.
- True Positives
- True Negatives
- False Positives
- False Negatives
Step11: 4 - Using your results in part (3), compute the following quantities.
- Sensitivity, recall, hit rate, or true positive rate (TPR)
- Specificity or true negative rate (TNR)
- Miss rate or false negative rate (FNR)
- Fall-out or false positive rate (FPR)
- Precision
- F1
- Accuracy
Step12: 5 - Check your results in part (4) using sklearn.
Step13: 6 - Plot the precision and recall curve for your fit.
Step14: 7 - Plot the ROC curve for your fit.
Step15: Learning and Evaluation Curves, Hyperparameter Tuning, and Bootstrapping
A problem that you will see crop up time and time again in Data Science is overfitting. Much like how people can sometimes see structure where there is none, machine learning algorithms suffer from the same. If you have overfit your model to your data, it has learned a "pattern" in the noise rather than the signal you were looking for, and thus will not generalize well to data it has not seen.
Consider the LASSO Regression model which we have used previously. Like all parametric models, fitting a LASSO Regression model can be reduced to the problem of finding a set of $\hat{\theta_i}$ which minimize the cost function given the data. But notice that unlike a standard linear regression model, LASSO Regression has an additional regularization parameter. The result of this is that the $\hat{\theta_i}$ are dependent not only on our data, but also on this additional hyperparameter.
So now we have three different problems to juggle while we are fitting our models
Step16: 2 - Separate your data into train and test sets, of portions ranging from test_size = 0.99 to test_size = 0.01, fitting a logistic regression model to each and computing the training and test errors. Plot the errors of the training and test sets on the same plot. Do this without using sklearn.model_selection.learning_curve. Comment on your results.
Step17: As expected the test accuracy goes up when the train size is increased; strangely we always get 100% accuracy on the training set.
In the solution without scaling the results are worse, let's try it out
Step18: Still a bit different...
3 - Repeat part (2) but this time using sklearn.model_selection.learning_curve. Comment on your results.
Step19: Using learning curves the accuracy on the training set is always perfect, while the test accuracy increases without getting to 100%
Step20: Training scores are very similar in each of the plots, there are some differences between the test accuracy curves where the best value for C seems to go towards 1.
Step21: 5 - Tune the regularization strength parameter as in part (4), but this time using sklearn.model_selection.validation_curve. Comment on your results.
Step22: The plots are the same as above.
6 - Fit another classification algorithm to the data, tuning the parameter using LOOCV. Comment on your results.
Step23: Using LOOCV I get 0.3 as the best choice for C and an accuracy of almost 99%, which is slightly higher than before.
7 - Suppose that the wine data that we received was incomplete, containing, say, only 20% of the full set, but due to a fast approaching deadline, we need to still compute some statistics and fit a model. Use bootstrapping to compute the mean and variance of the features, and fit the classification model you used in part (4), comparing and commenting on your results with those from the full dataset.
Step24: In all the cases I get 100% accuracy!
Model Selection
All that has been covered in the previous sections is part of the much broader topic known as model selection. Model selection is not only about choosing the right machine learning algorithm to use, but also about tuning parameters while keeping overfitting and scalability issues in mind. In this exercise, we'll build models for a couple different datasets, using all of the concepts you've worked with previously.
Classification
1 - Head over to the Machine Learning Repository, download the Mushroom Data Set, and put it into a dataframe. Be sure to familiarize yourself with the data before proceeding. Break the data into training and testing sets. Be sure to keep this testing set the same for the duration of this exercise as we will be using to test various algorithms!
Step25: We have 2480 missing values for stalk_root (denoted by ?), which I'm going to keep as they are since this is a category.
I'm dropping veil_type instead because it has only 1 class.
Step26: Also, I'm relabeling the poisonous mushrooms as 1 and not poisonous ones as 0.
Step27: 2 - Fit a machine learning algorithm of your choice to the data, tuning hyperparameters using the Better Holdout Method. Generate training and validation plots and comment on your results.
Step28: This is a class to perform label encoding on multiple columns
Step29: The best results are for a number of neighbors between 2 and 7 (100% accuracy on both train and test set), so I'm going with the default of 5.
3 - Repeat part (2), this time using cross validation. Comment on your results.
Step30: We get perfect accuracy for a number of neighbors between 2 and 8 this time; we can also see that we have no variance for this range of values.
4 - Repeat part (3) using a different machine learning algorithm. Comment on your results.
Step31: We get perfect accuracy for about 8 trees or more, so I'm going with the default value of 10.
5 - Whichever of your two algorithms in parts (3) and (4) performed more poorly, perform a variable selection to see if you can improve your results.
Step32: Let's try 7 components
Step33: Well it's actually worse!
6 - Pick a classification algorithm that has at least two hyperparameters and tune them using GridSearchCV. Comment on your results.
I'm trying logistic regression + PCA tuning number of components and regularization
Step34: So, apparently GridSearchCV gets stuck if I use custom scorers or if I'm multithreading or something like that, so I'm going to try to encode my labels outside the pipeline...
Step35: We get 95% accuracy with simple logistic regression, not bad!
Regression
1 - Head over to the Machine Learning Repository, download the Parkinson's Telemonitoring Data Set, and put it into a dataframe. Be sure to familiarize yourself with the data before proceeding, removing the columns related to time, age, and sex. We're going to be predicting motor_UPDRS, so drop the total_UPDRS variable as well. Break the data into training and testing sets. Be sure to keep this testing set the same for the duration of this exercise as we will be using to test various algorithms!
Step36: In solution he uses dummies for subject#, which I don't think is really "honest".
Step37: 2 - Fit a machine learning algorithm of your choice to the data, tuning hyperparameters using the Better Holdout Method. Generate training and validation plots and comment on your results.
Step38: These doesn't seems so randomly scattered...
Step39: The score is very bad, the best choice for the regularization parameter seems to be 100 but there seems to be a problem of serious bias in this dataset. We may need more features to get accurate predictions.
3 - Repeat part (2), this time using cross validation. Comment on your results.
Again in the solution he uses cross_validation_curve...
Step40: The two curves are even closer, but again performances are overall very poor. This plot confirms that we have a high bias for this dataset.
4 - Repeat part (3) using a different machine learning algorithm. Comment on your results.
Step41: Using a random forest we get better performances but we have a serious problem of high variance in this case.
5 - Whichever of your two algorithms in parts (3) and (4) performed more poorly, perform a variable selection to see if you can improve your results.
I'm going to use PCA to try and improve the Ridge regression, but I don't think there is much to do...
In solution he uses LassoCV with SelectFromModel
Step42: I guess we can try with 12 components
Step43: Not much of an improvement...
6 - Pick a regression algorithm that has at least two hyperparameters and tune them using GridSearchCV. Comment on your results.
I'm going to try and add some polynomial features to the Ridge regression to see if it can perform better | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import resample
from sklearn.preprocessing import PolynomialFeatures, StandardScaler, LabelEncoder, OneHotEncoder
from sklearn.feature_selection import f_regression, SelectKBest, SelectFromModel
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split, cross_val_score, validation_curve, GridSearchCV,\
learning_curve, StratifiedKFold, LeaveOneOut, KFold
from sklearn.metrics import mean_squared_error, confusion_matrix, precision_score, accuracy_score, recall_score,\
f1_score, precision_recall_curve, roc_curve, roc_auc_score
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.linear_model import LinearRegression, LassoCV, LogisticRegression, Ridge
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
concrete = pd.read_excel('http://archive.ics.uci.edu/ml/machine-learning-databases/concrete/compressive/Concrete_Data.xls')
concrete.columns = ['cement', 'blast_furnace_slag', 'fly_ash', 'water', 'superplasticizer',
'coarse_aggregate', 'fine_aggregate', 'age', 'concrete_compressive_strength']
concrete.head()
concrete.info()
concrete.describe().T
Xtrain, Xtest, ytrain, ytest = train_test_split(concrete.drop('concrete_compressive_strength', axis=1),
concrete.concrete_compressive_strength,
test_size=0.2, random_state=42)
Explanation: Model Evaluation
Pipeline and Feature Unions
It is always a good decision to make your code as readable as possible. Not only so that others can pick it up and use it easily, but so that you don't end up hating your past self when you take a look at something you wrote several months prior. Pipelines are a great tool for integrating and reusing a series of data transformations and fits into a workflow. In this exercise you'll build some pipelines and feature unions using the Concrete Compressive Data Set.
1 - Head over to the Machine Learning Repository, download the Concrete Compressive Data Set, put it into a dataframe, and split into training and test sets. Be sure to familiarize yourself with the data before proceeding.
End of explanation
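As a small added illustration (not part of the exercise), here is the same scale-then-fit workflow written without and with a Pipeline; both give the same score, the pipeline simply packages the steps into one reusable object.
# without a pipeline, every transform has to be applied to train and test separately
scl = StandardScaler().fit(Xtrain)
lr = LinearRegression().fit(scl.transform(Xtrain), ytrain)
print(lr.score(scl.transform(Xtest), ytest))
# with a pipeline, a single object carries the whole workflow
pipe_demo = Pipeline([('scl', StandardScaler()), ('lr', LinearRegression())])
print(pipe_demo.fit(Xtrain, ytrain).score(Xtest, ytest))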
# scaling+pol+linear regression
pipe = Pipeline([
('scl', StandardScaler()),
('poly', PolynomialFeatures()),
('lr', LinearRegression())
])
scores = []
# looping through degrees
for n in range(1, Xtrain.shape[1]+1):
# set pipeline params
pipe.set_params(poly__degree=n)
# fit and score
pipe.fit(Xtrain, ytrain)
scores.append([n, pipe.score(Xtrain, ytrain), pipe.score(Xtest, ytest),
mean_squared_error(pipe.predict(Xtrain), ytrain), mean_squared_error(pipe.predict(Xtest), ytest)])
scores = pd.DataFrame(scores)
scores.columns = ['degree', 'train score', 'test score', 'train mse', 'test mse']
scores
# plot on two separate axes cause the scales are very different
fig, ax = plt.subplots(2, figsize=(10, 8))
ax[0].plot(scores.degree, scores['train score'], label='train score')
ax[1].plot(scores.degree, scores['test score'], label='test score');
# single plot for n=1,2,3
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(scores.iloc[:3, 0], scores.iloc[:3, 1], label='train score')
ax.plot(scores.iloc[:3, 0], scores.iloc[:3, 2], label='test score');
Explanation: 2 - Build a pipeline for polynomial fitting, fit polynomials of degree 1 to the number of features, and plot your training and testing errors for each. Comment on your results
End of explanation
# scaling, select k best and linear regression
pipe = Pipeline([
('scl', StandardScaler()),
('best', SelectKBest(f_regression)),
('lr', LinearRegression())
])
scores = []
# looping over number of features
for n in range(1, Xtrain.shape[1]+1):
# set pipe params
pipe.set_params(best__k=n)
# fit and score
pipe.fit(Xtrain, ytrain)
scores.append([n, pipe.score(Xtrain, ytrain), pipe.score(Xtest, ytest),
mean_squared_error(pipe.predict(Xtrain), ytrain), mean_squared_error(pipe.predict(Xtest), ytest)])
scores = pd.DataFrame(scores)
scores.columns = ['n_features', 'train score', 'test score', 'train mse', 'test mse']
scores
# train and test on a single plot
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(scores.n_features, scores['train score'], label='train score')
ax.plot(scores.n_features, scores['test score'], label='test score')
ax.legend();
Explanation: Increasing the number of features increases the training score but we get very bad generalization (with a really strange peak for the fifth degree polynomial). The best score is for the third degree (84%).
3 - Build a pipeline that will perform feature selection on the dataset using the F-Statistic, producing the same plots as in part (2). Comment on your results.
End of explanation
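As an added check (not in the original notebook), the univariate F-statistics that SelectKBest ranks the concrete features by can be inspected directly with f_regression:
F, pvals = f_regression(Xtrain, ytrain)
for name, f_stat, p in zip(Xtrain.columns, F, pvals):
    # larger F (smaller p) means a stronger univariate relation with the target
    print('{:25s} F = {:10.1f}   p = {:.3g}'.format(name, f_stat, p))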
# scaling, select from model using lassoCV and lassoCV
pipe = Pipeline([
('scl', StandardScaler()),
('sfmod', SelectFromModel(LassoCV())),
# ('lasso', LassoCV())
('forest', RandomForestRegressor()) # used in solution
])
scores = []
# looping through threshold values
for c in np.arange(0.1, 2.1, 0.1):
pipe.set_params(sfmod__threshold=str(c) + '*mean')
pipe.fit(Xtrain, ytrain)
scores.append([c, pipe.score(Xtrain, ytrain), pipe.score(Xtest, ytest),
mean_squared_error(pipe.predict(Xtrain), ytrain), mean_squared_error(pipe.predict(Xtest), ytest)])
# sel = pipe.named_steps['sfmod']
# print(Xtrain.columns[sel.transform(np.arange(len(Xtrain.columns)).reshape(1, -1))])
scores = pd.DataFrame(scores)
scores.columns = ['threshold', 'train score', 'test score', 'train mse', 'test mse']
scores
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(scores.threshold, scores['train score'], label='train score')
ax.plot(scores.threshold, scores['test score'], label='test score')
ax.legend();
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(scores.threshold, scores['train mse'], label='train mse')
ax.plot(scores.threshold, scores['test mse'], label='test mse')
ax.legend();
Explanation: After adding the sixth variable we have little improvement in the results.
4 - Build a pipeline that standardizes your data, performs feature selection via regularization, and then fits a model of your choice. Produce the same plots as above and comment on your results.
End of explanation
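Another added check, this time on the regularization-based selector: get_support() lists which concrete features survive the 0.5*mean threshold that the plots point to (again, not part of the original notebook).
# fit the LassoCV-based selector once, on standardized data, and list the retained columns
sel_check = SelectFromModel(LassoCV(), threshold='0.5*mean')
sel_check.fit(StandardScaler().fit_transform(Xtrain), ytrain)
print(Xtrain.columns[sel_check.get_support()])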
# joining select from model using lasso and select k best using the best parameters from before
pipe = Pipeline([
('scl', StandardScaler()),
('featsel', FeatureUnion([
('sfmod', SelectFromModel(LassoCV(), threshold='0.5*mean')),
('best', SelectKBest(k=6))
])),
('poly', PolynomialFeatures(degree=3)),
('lr', LinearRegression())
])
pipe.fit(Xtrain, ytrain)
print(pipe.score(Xtrain, ytrain))
print(pipe.score(Xtest, ytest))
print(mean_squared_error(pipe.predict(Xtrain), ytrain))
print(mean_squared_error(pipe.predict(Xtest), ytest))
pipe.named_steps['poly'].n_input_features_
Explanation: Test score is better than training using LassoCV as an estimator, using random forest we get more overfitting but also better overall performances than all the previous methods. The best choice for the threshold seems to be around 0.5.
5 - Create two pipelines for feature selection of a technique of your choice, scaling the data before hand. Then, join these two pipelines with a FeatureUnion, and fit a polynomial model, also in a pipeline. Comment on your results.
End of explanation
breast = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data',
header=None)
breast.columns = ['id','clump_thickness','uniformity_cell_size','uniformity_cell_shape','marginal_adhesion',
'single_epithelial_cell_size','bare_nuclei','bland_chromatin','normal_nucleoli','mitoses','class']
breast.info()
Explanation: By joining the pipelines I get the features from both the feature selections (which is quite strange as a model) but I get a very good test score of 81% (which is still worse than the 84% obtained only with poly features).
Evaluation Metrics
It is very important that you have more than one tool in your toolbox for evaluating the usefulness of your model, as in different contexts, different metrics are preferred. For example, suppose a new medical test is developed for detecting cancer that has a 0.25 probability of incorrectly labeling a patient as having cancer when they in fact do not, but a 0.001 probability of labeling a cancer patient as cancer free. With this sort of test, you can be sure that those who do have cancer will almost certainly be classified correctly, but a positive does not necessarily mean that the patient has cancer, meaning additional tests are in order. These metrics have different names and, depending on the situation, you may be interested in minimizing different quantities, which is the topic we will explore in this exercise.
1 - Head over the Machine Learning Repository and download the Breast Cancer Diagnostic Dataset, put it in a dataframe, and split into testing and training sets.
End of explanation
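To make the warning about positives concrete, here is a small added calculation of the probability of actually having cancer after a positive result from the test described above; the 1% prevalence is an assumed figure used only for illustration.
p_cancer = 0.01                  # assumed prevalence, not given in the exercise
p_pos_given_healthy = 0.25       # false positive rate quoted in the text
p_pos_given_cancer = 1 - 0.001   # true positive rate quoted in the text
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_healthy * (1 - p_cancer)
print(p_pos_given_cancer * p_cancer / p_pos)   # roughly 0.04, so most positives are false alarms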
breast.bare_nuclei.value_counts()
breast.bare_nuclei = breast.bare_nuclei.apply(lambda x: x.replace('?', '1'))
breast.bare_nuclei = pd.to_numeric(breast.bare_nuclei)
Explanation: Bare nuclei is an object: this is due to the presence of ? for missing values, which I'm going to replace with 1s.
End of explanation
# 2 for benign, 4 for malignant
le = LabelEncoder()
le.fit([2, 4])
breast['class'] = le.transform(breast['class'])
breast.describe().T
Xtrain, Xtest, ytrain, ytest = train_test_split(breast.drop('class', axis=1), breast['class'], test_size=0.2, random_state=0)
Explanation: Also, I'm encoding the class column with 0s and 1s:
End of explanation
# scale and KNN
pipe = Pipeline([
('scl', StandardScaler()),
('knn', KNeighborsClassifier())
])
scores = []
# looping through number of neighbors
for k in range(1, 20):
pipe.set_params(knn__n_neighbors=k)
temp = cross_val_score(estimator=pipe, X=Xtrain, y=ytrain, cv=10)
scores.append([k, np.mean(temp), np.std(temp)])
scores = pd.DataFrame(scores)
scores.columns = ['k', 'kfold mean', 'kfold std']
scores.sort_values(by='kfold mean', ascending=False)
Explanation: 2 - Using a classification algorithm of your choice, fit a model to the data and predict the results, making your results as accurate as possible.
End of explanation
pipe.set_params(knn__n_neighbors=7)
pipe.fit(Xtrain, ytrain)
pipe.score(Xtest, ytest)
Explanation: Seven is the number of neighbors with the best CV score:
End of explanation
ypred = pipe.predict(Xtest)
TP = sum((ypred[ytest==1]==ytest.values[ytest.values==1]))
TN = sum((ypred[ytest==0]==ytest.values[ytest.values==0]))
FP = sum((ypred[ytest==0]!=ytest.values[ytest.values==0]))
FN = sum((ypred[ytest==1]!=ytest.values[ytest.values==1]))
print('True Positives: {}\nTrue Negatives: {}\nFalse Positives: {}\nFalse Negatives: {}'.format(TP, TN, FP, FN))
Explanation: The accuracy is almost 98%, which is quite good!
3 - Using your model in part (2), compute the following quantities, without using skelarn.metrics.
- True Positives
- True Negatives
- False Positives
- False Negatives
End of explanation
TPR = TP / (FN+TP)
FPR = FP / (FP+TN)
TNR = 1 - FPR
FNR = 1 - TPR
PRE = TP / (TP+FP)
F1 = 2*PRE*TPR / (PRE+TPR)
ACC = (TP+TN) / len(ypred)
print('TPR: {}\nTNR: {}\nFNR: {}\nFPR: {}\nPRE: {}\nF1: {}\nACC: {}'.format(TPR, TNR, FNR, FPR, PRE, F1, ACC))
Explanation: 4 - Using your results in part (3), compute the following quantities.
- Sensitivity, recall, hit rate, or true positive rate (TPR)
- Specificity or true negative rate (TNR)
- Miss rate or false negative rate (FNR)
- Fall-out or false positive rate (FPR)
- Precision
- F1
- Accuracy
End of explanation
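For reference (added here, not part of the original notebook), the quantities listed above can be written in terms of the confusion-matrix counts as
$$\mathrm{TPR}=\frac{TP}{TP+FN},\qquad \mathrm{TNR}=\frac{TN}{TN+FP},\qquad \mathrm{FNR}=1-\mathrm{TPR},\qquad \mathrm{FPR}=1-\mathrm{TNR},$$
$$\mathrm{PRE}=\frac{TP}{TP+FP},\qquad F_1=\frac{2\,\mathrm{PRE}\cdot\mathrm{TPR}}{\mathrm{PRE}+\mathrm{TPR}},\qquad \mathrm{ACC}=\frac{TP+TN}{TP+TN+FP+FN}.$$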
print(confusion_matrix(y_true=ytest, y_pred=ypred))
print('TPR', recall_score(ytest, ypred))
print('PRE', precision_score(y_true=ytest, y_pred=ypred))
print('F1', f1_score(y_true=ytest, y_pred=ypred))
print('ACC', accuracy_score(y_true=ytest, y_pred=ypred))
Explanation: 5 - Check your results in part (4) using sklearn.
End of explanation
ypredprob = pipe.predict_proba(Xtest)
# get precision and recall values
precision, recall, _ = precision_recall_curve(ytest, ypredprob[:, 1])
# plot them and fill the curve
plt.figure(figsize=(16, 8))
plt.step(recall, precision, color='b', alpha=0.2, where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2, color='b')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall curve');
Explanation: 6 - Plot the precision and recall curve for your fit.
End of explanation
# getting FPR and TPR for the ROC curve and the area under it
fpr, tpr, _ = roc_curve(ytest, ypredprob[:, 1])
roc_auc = roc_auc_score(ytest, ypredprob[:, 1])
# plot and fill the curve
plt.figure(figsize=(16, 8))
plt.step(fpr, tpr, color='b', alpha=0.2, where='post')
plt.fill_between(fpr, tpr, step='post', alpha=0.2, color='b')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('ROC curve, area = {:.5f}'.format(roc_auc));
Explanation: 7 - Plot the ROC curve for your fit.
End of explanation
wine = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
# wine = pd.read_csv('wine.data', header=None)
wine.columns = ['class', 'alcohol', 'malic_acid', 'ash', 'alcalinity_ash', 'magnesium',
'total_phenols', 'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity',
'hue', 'OD280_OD315', 'proline']
wine.head()
wine.info()
wine.describe().T
# Xtrain, Xtest, ytrain, ytest = train_test_split(wine.drop('class', axis=1),
# wine['class'],
# test_size=0.2,
# random_state=5,
# stratify=wine['class'])
Explanation: Learning and Evaluation Curves, Hyperparameter Tuning, and Bootstrapping
A problem that you will see crop up time and time again in Data Science is overfitting. Much like how people can sometimes see structure where there is none, machine learning algorithms suffer from the same. If you have overfit your model to your data, it has learned a "pattern" in the noise rather than the signal you were looking for, and thus will not generalize well to data it has not seen.
Consider the LASSO Regression model which we have used previously. Like all parametric models, fitting a LASSO Regression model can be reduced to the problem of finding a set of $\hat{\theta_i}$ which minimize the cost function given the data. But notice that unlike a standard linear regression model, LASSO Regression has an additional regularization parameter. The result of this is that the $\hat{\theta_i}$ are dependent not only on our data, but also on this additional hyperparameter.
So now we have three different problems to juggle while we are fitting our models: overfitting, underfitting, and hyperparameter tuning. A common technique to deal with all three is cross-validation which we will explore in this exercise.
Also, you may find yourself in a situation where you do not have enough data to build a good model. Again, with some simple assumptions, we can "pull ourselves up by our bootstraps", and get by reasonably well with the data we have using a method called bootstrapping.
In this exercise, you'll explore these topics using the wine dataset.
1 - Head over to the Machine Learning Repository, download the Wine Dataset, and put it in a dataframe, being sure to label the columns properly.
End of explanation
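For reference (an addition, not in the original text), the quantity LASSO minimises can be written in the notation above as
$$J(\hat{\theta}) = \frac{1}{2n}\sum_{i=1}^{n}\left(y_i - \mathbf{x}_i^{T}\hat{\theta}\right)^{2} + \alpha\sum_{j}|\hat{\theta}_j|,$$
where $\alpha$ is the regularization strength that has to be tuned alongside the fit; this is the extra hyperparameter the cross-validation below is juggling.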
# scaling and logistic regression
pipe = Pipeline([
('scl', StandardScaler()),
('lr', LogisticRegression())
])
scores = []
# looping through test sizes from 0.01 to 0.98 (0.99 gave an error because I didn't have all the classes in the split)
# for tsize in np.arange(0.98, 0.01, -0.01):
for tsize in np.arange(0.01, 0.99, 0.01):
# splitting
Xtr, Xts, ytr, yts = train_test_split(wine.drop('class', axis=1),
wine['class'],
test_size=tsize,
random_state=1,
# stratify=wine['class'] # not used in solutions
)
# fit and score
pipe.fit(Xtr, ytr)
scores.append([tsize, pipe.score(Xtr, ytr), pipe.score(Xts, yts)])
scores = np.array(scores)
# plot train and test scores by increasing train size (1-test size)
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(1 - scores[:, 0], scores[:, 1], label='Train')
ax.plot(1 - scores[:, 0], scores[:, 2], label='Test')
ax.set_xlim([0, 1])
ax.set_ylim([0, 1.05])
ax.set_xlabel('Train size (%)')
ax.set_ylabel('Accuracy')
ax.set_title('Learning Curves')
ax.legend();
Explanation: 2 - Separate your data into train and test sets, of portions ranging from test_size = 0.99 to test_size = 0.01, fitting a logistic regression model to each and computing the training and test errors. Plot the errors of the training and test sets on the same plot. Do this without using sklearn.model_selection.learning_curve. Comment on your results.
End of explanation
lr = LogisticRegression()
scores = []
for tsize in np.arange(0.01, 0.99, 0.01):
# splitting
Xtr, Xts, ytr, yts = train_test_split(wine.drop('class', axis=1),
wine['class'],
test_size=tsize,
random_state=1,
)
# fit and score
lr.fit(Xtr, ytr)
scores.append([tsize, lr.score(Xtr, ytr), lr.score(Xts, yts)])
scores = np.array(scores)
# plot train and test scores by increasing train size (1-test size)
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(1 - scores[:, 0], scores[:, 1], label='Train')
ax.plot(1 - scores[:, 0], scores[:, 2], label='Test')
ax.set_xlim([0, 1])
ax.set_ylim([0, 1.05])
ax.set_xlabel('Train size (%)')
ax.set_ylabel('Accuracy')
ax.set_title('Learning Curves')
ax.legend();
Explanation: As expected the test accuracy goes up when the train size is increased; strangely we always get 100% accuracy on the training set.
In the solution without scaling the results are worse, let's try it out:
End of explanation
train_sizes, train_scores, test_scores = learning_curve(pipe,
wine.drop('class', axis=1),
wine['class'],
train_sizes=np.arange(0.35, 0.99, 0.01),
cv=10,
random_state=5)
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(train_sizes, train_scores.mean(axis=1), label='Train')
ax.plot(train_sizes, test_scores.mean(axis=1), label='Test')
ax.set_ylim([0.4, 1.05])
ax.set_xlabel('Train size')
ax.set_ylabel('Accuracy')
ax.set_title('Learning Curves')
ax.legend();
Explanation: Still a bit different...
3 - Repeat part (2) but this time using sklearn.model_selection.learning_curve. Comment on your results.
End of explanation
# using grid search which uses k-fold cv by default
param_range = [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]
param_grid = [{'lr__C': param_range}]
gs = GridSearchCV(estimator=pipe,
param_grid=param_grid)
scores = {}
best_scores = {}
# looping through number of folds
for k in [2, 3, 4, 5]:
# set params
gs.set_params(cv=k)
# fit
gs.fit(wine.drop('class', axis=1), wine['class'])
# saving best scores and mean scores for all params
best_scores[k] = [gs.best_params_['lr__C'], gs.best_score_]
scores[k] = np.array([param_range, gs.cv_results_['mean_train_score'], gs.cv_results_['mean_test_score']]).T
best_scores
# creating 4 plots, one for each number of fold
fig, ax = plt.subplots(2, 2, figsize=(16, 8), sharex='col', sharey='row', subplot_kw=dict(xscale='log'))
i, j = 0, 0
for k in scores:
# plotting train and test scores and the best score
ax[i, j].plot(scores[k][:, 0], scores[k][:, 1], label='Train')
ax[i, j].plot(scores[k][:, 0], scores[k][:, 2], label='Test')
ax[i, j].plot(best_scores[k][0], best_scores[k][1], 'ro', label='Best Score')
ax[i, j].set_title('{}-fold CV'.format(k))
if j == 0:
ax[i, j].set_ylabel('Accuracy')
if i == 1:
ax[i, j].set_xlabel('C value')
ax[i, j].legend()
j += 1
if j == 2:
j = 0
i += 1
Explanation: Using learning curves the accuracy on the training set is always perfect, while the test accuracy increases without getting to 100%: this may be because learning curve uses cross validation and so we are less dependent on the splits.
4 - Use K-Fold cross validation to tune the regularization strength parameter of the logistic regression. Do this for values of k ranging from 2 to 5, make relevant plots, and comment on your results. Do this without using sklearn.model_selection.validation_curve.
End of explanation
# solution:
X = wine.drop('class', axis=1)
y = wine['class']
k_vals = np.arange(2, 6, 1)
c_vals = np.arange(0.5, 1.5, 0.05)
err = []
for k in k_vals:
for c in c_vals:
lm = LogisticRegression(C=c)
temp_err = cross_val_score(lm, X=X, y=y, cv=k)
err_dict = {}
err_dict['k'] = k
err_dict['c'] = c
err_dict['acc'] = temp_err.mean()
err.append(err_dict)
err = pd.DataFrame(err)
for k in err.k.unique():
plt.plot(c_vals, err.acc[err.k == k], label='k = {}'.format(k))
plt.title('Accuracy vs Regularization Strength')
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.legend();
Explanation: Training scores are very similar in each of the plots, there are some differences between the test accuracy curves where the best value for C seems to go towards 1.
End of explanation
scores = {}
best_scores = {}
for k in [2, 3, 4, 5]:
# validation cruve uses cv and wants params similar to grid search
train_scores, test_scores = validation_curve(estimator=pipe,
X=wine.drop('class', axis=1),
y=wine['class'],
param_name='lr__C',
param_range=param_range,
cv=k)
# saving mean scores
mean_train_scores = train_scores.mean(axis=1)
mean_test_scores = test_scores.mean(axis=1)
# and best scores
best_scores[k] = [param_range[mean_test_scores.argmax()], mean_test_scores[mean_test_scores.argmax()]]
scores[k] = np.array([param_range, mean_train_scores, mean_test_scores]).T
best_scores
fig, ax = plt.subplots(2, 2, figsize=(16, 8), sharex='col', sharey='row', subplot_kw=dict(xscale='log'))
i, j = 0, 0
for k in scores:
ax[i, j].plot(scores[k][:, 0], scores[k][:, 1], label='Train')
ax[i, j].plot(scores[k][:, 0], scores[k][:, 2], label='Test')
ax[i, j].plot(best_scores[k][0], best_scores[k][1], 'ro', label='Best Score')
ax[i, j].set_title('{}-fold CV'.format(k))
if j == 0:
ax[i, j].set_ylabel('Accuracy')
if i == 1:
ax[i, j].set_xlabel('C value')
ax[i, j].legend()
j += 1
if j == 2:
j = 0
i += 1
Explanation: 5 - Tune the regularization strength parameter as in part (4), but this time using sklearn.model_selection.validation_curve. Comment on your results.
End of explanation
pipe = Pipeline([
('scl', StandardScaler()),
('svm', SVC())
])
# using grid search with LeaveOneOut as cv
param_range = [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]
param_grid = [{'svm__C': param_range}]
# in solution: cross_val_score(pipe, X, y, cv=LeaveOneOut()) used looping through params values
gs = GridSearchCV(estimator=pipe,
param_grid=param_grid,
cv=LeaveOneOut())
gs.fit(wine.drop('class', axis=1), wine['class'])
gs.best_params_['svm__C'], gs.best_score_
Explanation: The plots are the same as above.
6 - Fit another classification algorithm to the data, tuning the parameter using LOOCV. Comment on your results.
End of explanation
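A small added note on cost: LeaveOneOut is simply n-fold cross validation with one sample per fold, so the grid search above refits the SVM once per wine sample for every candidate value of C.
# number of leave-one-out splits equals the number of samples
print(LeaveOneOut().get_n_splits(wine.drop('class', axis=1)))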
# resampling 5 times the data
wine_strapped = resample(wine, n_samples=len(wine)*5, random_state=10)
# solution: leaves 80% of the data out and bootstraps the remaining 20%:
data_sub_test, data_sub = train_test_split(wine, test_size=0.2, random_state=0)
data_boot = data_sub.sample(n=150, replace=True, random_state=0)
# still from solution:
means = {'original': wine.drop('class', axis=1).mean(),
'20%': data_sub.drop('class', axis=1).mean(),
'bootstrap%': data_boot.drop('class', axis=1).mean()}
var = {'original': wine.drop('class', axis=1).var(),
'20%': data_sub.drop('class', axis=1).var(),
'bootstrap%': data_boot.drop('class', axis=1).var()}
means = pd.DataFrame(means)
var = pd.DataFrame(var)
means
var
# last from solution:
pipe = Pipeline([
('scl', StandardScaler()),
('lr', LogisticRegression())
])
pipe.set_params(lr__C=1)
pipe.fit(wine.drop('class', axis=1), wine['class'])
print(cross_val_score(pipe, wine.drop('class', axis=1), wine['class'], cv=5).mean())
pipe.fit(data_boot.drop('class', axis=1), data_boot['class'])
print(cross_val_score(pipe, data_sub_test.drop('class', axis=1), data_sub_test['class'], cv=5).mean())
wine.describe().T
wine_strapped.describe().T
pipe = Pipeline([
('scl', StandardScaler()),
('lr', LogisticRegression())
])
# using grid search to fit the best logistic regression
param_range = [0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]
param_grid = [{'lr__C': param_range}]
gs = GridSearchCV(estimator=pipe,
param_grid=param_grid)
scores = {}
best_scores = {}
# looping through number of folds
for k in [2, 3, 4, 5]:
gs.set_params(cv=k)
gs.fit(wine_strapped.drop('class', axis=1), wine_strapped['class'])
best_scores[k] = [gs.best_params_['lr__C'], gs.best_score_]
scores[k] = np.array([param_range, gs.cv_results_['mean_train_score'], gs.cv_results_['mean_test_score']]).T
best_scores
Explanation: Using LOOCV I get 0.3 as the best choice for C and an accuracy of almost 99%, which is slightly higher than before.
7 - Suppose that the wine data that we received was incomplete, containing, say, only 20% of the full set, but due to a fast approaching deadline, we need to still compute some statistics and fit a model. Use bootstrapping to compute the mean and variance of the features, and fit the classification model you used in part (4), comparing and commenting on your results with those from the full dataset.
End of explanation
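As an added sketch of the same idea, repeating the resampling many times gives a bootstrap distribution for a statistic; here, the mean alcohol content of the 20% subset defined above.
# 1000 bootstrap resamples of the 20% subset; the spread of the resampled means
# estimates the uncertainty in the statistic computed from the reduced data
boot_means = [data_sub['alcohol'].sample(frac=1, replace=True, random_state=i).mean()
              for i in range(1000)]
print(np.mean(boot_means), np.std(boot_means))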
shrooms = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data', header=None)
shrooms.columns = ['poisonous', 'cap_shape', 'cap_surface', 'cap_color', 'bruises', 'odor', 'gill_attachment', 'gill_spacing',
'gill_size', 'gill_color', 'stalk_shape', 'stalk_root', 'stalk_surface_above_ring',
'stalk_surface_below_ring', 'stalk_color_above_ring', 'stalk_color_below_ring', 'veil_type',
'veil_color', 'ring_number', 'ring_type', 'spore_print_color', 'population', 'habitat']
shrooms.head()
shrooms.info()
shrooms.describe().T
Explanation: In all the cases I get 100% accuracy!
Model Selection
All that has been covered in the previous sections is part of the much broader topic known as model selection. Model selection is not only about choosing the right machine learning algorithm to use, but also about tuning parameters while keeping overfitting and scalability issues in mind. In this exercise, we'll build models for a couple different datasets, using all of the concepts you've worked with previously.
Classification
1 - Head over to the Machine Learning Repository, download the Mushroom Data Set, and put it into a dataframe. Be sure to familiarize yourself with the data before proceeding. Break the data into training and testing sets. Be sure to keep this testing set the same for the duration of this exercise as we will be using to test various algorithms!
End of explanation
shrooms.drop('veil_type', axis=1, inplace=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(shrooms.drop('poisonous', axis=1),
shrooms.poisonous,
test_size=0.2,
random_state=42)
Explanation: We have 2480 missing values for stalk_root (denoted by ?), which I'm going to keep as they are since this is a category.
I'm dropping veil_type instead because it has only 1 class.
End of explanation
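A quick added sanity check of the count quoted above:
print((shrooms.stalk_root == '?').sum())   # 2480 missing values encoded as '?'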
ytrain = ytrain.apply(lambda x: 1 if x == 'p' else 0)
ytest = ytest.apply(lambda x: 1 if x == 'p' else 0)
Explanation: Also, I'm relabeling the poisonous mushrooms as 1 and not poisonous ones as 0.
End of explanation
# in solution he encodes all variables once using df.apply(le.fit_transform)
Explanation: 2 - Fit a machine learning algorithm of your choice to the data, tuning hyperparameters using the Better Holdout Method. Generate training and validation plots and comment on your results.
End of explanation
class MultiLabelEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = columns
def fit(self, X, y=None):
return self
def transform(self, X):
'''
Transforms columns of X specified in self.columns using
LabelEncoder(). If no columns specified, transforms all
columns in X.
'''
output = X.copy()
if self.columns is not None:
for col in self.columns:
output[col] = LabelEncoder().fit_transform(output[col])
else:
for colname, col in output.items():  # .iteritems() was removed in newer pandas
output[colname] = LabelEncoder().fit_transform(col)
return output
# def fit_transform(self, X, y=None):
# return self.fit(X, y).transform(X)
# def get_params(self):
# return 'columns'
# def set_params(self, param, value):
# self.param = value
# encode, one hot encode and then KNN
pipe = Pipeline([
('mle', MultiLabelEncoder()),
('ohe', OneHotEncoder()),
('knn', KNeighborsClassifier())
])
# better holdout
Xtr, Xcv, ytr, ycv = train_test_split(Xtrain, ytrain, test_size=0.2, random_state=42)
scores = []
# looping through various neighbors number
for k in range(1, 21):
pipe.set_params(knn__n_neighbors=k)
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
scores.append([k, tr_score, cv_score])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8))
# plotting train and holdout scores
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='Hold Out')
ax.set_title('KNN with Different # of Neighbors'.format(k))
ax.set_ylabel('Accuracy')
ax.set_xlabel('# of Neighbors')
ax.legend();
pipe.set_params(knn__n_neighbors=5)
pipe.fit(Xtrain, ytrain)
pipe.score(Xtest, ytest)
Explanation: This is a class to perform label encoding on multiple columns:
End of explanation
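A tiny added usage example of the class above on a toy frame; fit_transform is inherited from TransformerMixin.
toy = pd.DataFrame({'color': ['red', 'blue', 'red'], 'size': ['s', 'm', 'l']})
print(MultiLabelEncoder().fit_transform(toy))                    # encode every column
print(MultiLabelEncoder(columns=['color']).fit_transform(toy))   # encode only selected columns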
# from solution:
train_scores, test_scores = validation_curve(pipe, Xtrain, ytrain, param_name='knn__n_neighbors', param_range=range(1, 21))
# solution continues:
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.plot(range(1, 21), train_scores_mean, label='Train')
plt.fill_between(range(1, 21), train_scores_mean - train_scores_std, train_scores_mean + train_scores_std)
plt.plot(range(1, 21), test_scores_mean, label='Test')
plt.fill_between(range(1, 21), test_scores_mean - test_scores_std, test_scores_mean + test_scores_std)
plt.legend();
scores = []
# same as above, only using kfold
for k in range(1, 11):
kfold = KFold(n_splits=10, random_state=1).split(Xtrain, ytrain) # you have to do this every time because the folds
# are eliminated after use
fold_scores = []
pipe.set_params(knn__n_neighbors=k)
print('{} Neighbors'.format(k))
# looping through the folds
for i, (train, test) in enumerate(kfold):
# creating train and cv sets
Xtr, ytr = Xtrain.iloc[train, :], ytrain.iloc[train]
Xcv, ycv = Xtrain.iloc[test, :], ytrain.iloc[test]
# fit and score
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
# append fold scores
fold_scores.append([i, tr_score, cv_score])
print('Fold {} done'.format(i+1))
fold_scores = np.array(fold_scores)
# append mean and std score of the folds
scores.append([k, fold_scores.mean(axis=0)[1], fold_scores.mean(axis=0)[2],
fold_scores.std(axis=0)[1], fold_scores.std(axis=0)[2]])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8))
# plotting train and cv mean scores and filling with std
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='CV')
ax.fill_between(scores[:, 0],
scores[:, 1] + scores[:, 3],
scores[:, 1] - scores[:, 3],
alpha=0.15, color='blue')
ax.fill_between(scores[:, 0],
scores[:, 2] + scores[:, 4],
scores[:, 2] - scores[:, 4],
alpha=0.15, color='orange')
ax.set_title('KNN with Different # of Neighbors'.format(k))
ax.set_ylabel('Accuracy')
ax.set_xlabel('# of Neighbors')
ax.legend();
Explanation: The best results are for a number of neighbors between 2 and 7 (100% accuracy on both train and test set), so I'm going with the default of 5.
3 - Repeat part (2), this time using cross validation. Comment on your results.
End of explanation
# encoder and random forest
pipe = Pipeline([
('mle', MultiLabelEncoder()),
#('ohe', OneHotEncoder()),
('forest', RandomForestClassifier())
])
scores = []
for k in np.arange(2, 50, 2):
kfold = KFold(n_splits=10, random_state=1).split(Xtrain, ytrain)
fold_scores = []
pipe.set_params(forest__n_estimators=k)
print('{} Trees'.format(k))
for i, (train, test) in enumerate(kfold):
Xtr, ytr = Xtrain.iloc[train, :], ytrain.iloc[train]
Xcv, ycv = Xtrain.iloc[test, :], ytrain.iloc[test]
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
fold_scores.append([i, tr_score, cv_score])
print('Fold {} done'.format(i+1))
fold_scores = np.array(fold_scores)
scores.append([k, fold_scores.mean(axis=0)[1], fold_scores.mean(axis=0)[2],
fold_scores.std(axis=0)[1], fold_scores.std(axis=0)[2]])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='CV')
ax.fill_between(scores[:, 0],
scores[:, 1] + scores[:, 3],
scores[:, 1] - scores[:, 3],
alpha=0.15, color='blue')
ax.fill_between(scores[:, 0],
scores[:, 2] + scores[:, 4],
scores[:, 2] - scores[:, 4],
alpha=0.15, color='orange')
ax.set_title('Random Forest with different # of Trees'.format(k))
ax.set_ylabel('Accuracy')
ax.set_xlabel('# of Trees')
ax.legend();
pipe.set_params(forest__n_estimators=10)
pipe.fit(Xtrain, ytrain)
pipe.score(Xtest, ytest)
Explanation: We get perfect accuracy for a number of neighbors between 2 and 8 this time; we can also see that we have no variance for this range of values.
4 - Repeat part (3) using a different machine learning algorithm. Comment on your results.
End of explanation
mle = MultiLabelEncoder()
pca = PCA()
pca.fit(mle.fit_transform(Xtrain), ytrain)
pca.explained_variance_ratio_
Explanation: We get perfect accuracy for about 8 trees or more, so I'm going with the default value of 10.
5 - Whichever of your two algorithms in parts (3) and (4) performed more poorly, perform a variable selection to see if you can improve your results.
End of explanation
pipe = Pipeline([
('mle', MultiLabelEncoder()),
('sel', PCA(n_components=7)),
('forest', RandomForestClassifier())
])
scores = []
for k in np.arange(2, 22, 2):
kfold = KFold(n_splits=10, random_state=1).split(Xtrain, ytrain)
fold_scores = []
pipe.set_params(forest__n_estimators=k)
print('{} Trees'.format(k))
for i, (train, test) in enumerate(kfold):
Xtr, ytr = Xtrain.iloc[train, :], ytrain.iloc[train]
Xcv, ycv = Xtrain.iloc[test, :], ytrain.iloc[test]
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
fold_scores.append([i, tr_score, cv_score])
print('Fold {} done'.format(i+1))
fold_scores = np.array(fold_scores)
scores.append([k, fold_scores.mean(axis=0)[1], fold_scores.mean(axis=0)[2],
fold_scores.std(axis=0)[1], fold_scores.std(axis=0)[2]])
scores = np.array(scores)
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='CV')
ax.fill_between(scores[:, 0],
scores[:, 1] + scores[:, 3],
scores[:, 1] - scores[:, 3],
alpha=0.15, color='blue')
ax.fill_between(scores[:, 0],
scores[:, 2] + scores[:, 4],
scores[:, 2] - scores[:, 4],
alpha=0.15, color='orange')
ax.set_title('Random Forest with different # of Trees'.format(k))
ax.set_ylabel('Accuracy')
ax.set_xlabel('# of Trees')
ax.legend();
Explanation: Let's try 7 components:
End of explanation
from sklearn.feature_selection import RFE
Explanation: Well it's actually worse!
6 - Pick a classification algorithm that has at least two hyperparameters and tune them using GridSearchCV. Comment on your results.
I'm trying logistic regression + PCA tuning number of components and regularization:
In solution he uses random forest tuning n_estimators and max_features!
End of explanation
mle = MultiLabelEncoder()
Xtrain_enc = mle.fit_transform(Xtrain)
Xtest_enc = mle.transform(Xtest)
pipe = Pipeline([
('sel', RFE(estimator=LogisticRegression())),
('lr', LogisticRegression())
])
# C_range = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300]
C_range = [0.01, 0.03, 0.1, 0.3, 1, 3]
comp_range = np.arange(1, 16)
# using grid search to tune regularization and number of PCs
param_grid = [{'sel__n_features_to_select':comp_range, 'lr__C': C_range}]
gs = GridSearchCV(estimator=pipe,
param_grid=param_grid,
cv=10)
# this could take a very long time...
gs.fit(Xtrain_enc, ytrain)
print(gs.best_score_)
print(gs.best_params_)
pipe.set_params(sel__n_features_to_select=13, lr__C=0.3)
pipe.fit(Xtrain_enc, ytrain)
pipe.score(Xtest_enc, ytest)
Explanation: So, apparently GridSearchCV gets stuck if I use custom scorers or if I'm multithreading or something like that, so I'm going to try to encode my labels outside the pipeline...
End of explanation
parkinson = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data')
parkinson.drop(['age', 'sex', 'test_time', 'total_UPDRS'], axis=1, inplace=True)
Explanation: We get 95% accuracy with simple logistic regression, not bad!
Regression
1 - Head over to the Machine Learning Repository, download the Parkinson's Telemonitoring Data Set, and put it into a dataframe. Be sure to familiarize yourself with the data before proceeding, removing the columns related to time, age, and sex. We're going to be predicting motor_UPDRS, so drop the total_UPDRS variable as well. Break the data into training and testing sets. Be sure to keep this testing set the same for the duration of this exercise as we will be using to test various algorithms!
End of explanation
parkinson.head()
parkinson.info()
parkinson.describe().T
# I'm dropping subject# because it is the patient id
Xtrain, Xtest, ytrain, ytest = train_test_split(parkinson.drop(['subject#', 'motor_UPDRS'], axis=1),
parkinson.motor_UPDRS,
test_size=0.2,
random_state=54)
Explanation: In the solution he uses dummies for subject#, which I don't think is really "honest".
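For reference, that alternative would look something like this (a sketch of the idea only, not the solution's exact code):
# one-hot encode the patient id instead of dropping it
parkinson_dummies = pd.get_dummies(parkinson, columns=['subject#'])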
End of explanation
# scaling and ridge regression
pipe = Pipeline([
('stsc', StandardScaler()),
('ridge', Ridge())
])
# better hold out
Xtr, Xcv, ytr, ycv = train_test_split(Xtrain, ytrain, test_size=0.2, random_state=42)
scores = []
# looping through regularization values
for a in [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]:
pipe.set_params(ridge__alpha=a)
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
scores.append([a, tr_score, cv_score])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8), subplot_kw=dict(xscale='log'))
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='Hold Out')
ax.set_title(r'Ridge Regression with different $\alpha$')
ax.set_ylabel('$R^2$')
ax.set_xlabel(r'$\alpha$')
ax.legend();
Explanation: 2 - Fit a machine learning algorithm of your choice to the data, tuning hyperparameters using the Better Holdout Method. Generate training and validation plots and comment on your results.
End of explanation
ypred = pipe.predict(Xtrain)
residuals = ypred - ytrain
plt.figure(figsize=(16, 8))
plt.scatter(ypred, residuals)
plt.hlines(y=0, xmin=ypred.min(), xmax=ypred.max(), lw=2, color='red');
pipe.set_params(ridge__alpha=100)
pipe.fit(Xtrain, ytrain)
pipe.score(Xtest, ytest)
Explanation: These don't seem so randomly scattered...
End of explanation
scores = []
for a in [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]:
kfold = KFold(n_splits=10, random_state=1).split(Xtrain, ytrain)
fold_scores = []
pipe.set_params(ridge__alpha=a)
print('alpha = {}'.format(a))
for i, (train, test) in enumerate(kfold):
Xtr, ytr = Xtrain.iloc[train, :], ytrain.iloc[train]
Xcv, ycv = Xtrain.iloc[test, :], ytrain.iloc[test]
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
fold_scores.append([i, tr_score, cv_score])
print('Fold {} done'.format(i+1))
fold_scores = np.array(fold_scores)
scores.append([a, fold_scores.mean(axis=0)[1], fold_scores.mean(axis=0)[2],
fold_scores.std(axis=0)[1], fold_scores.std(axis=0)[2]])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8), subplot_kw=dict(xscale='log'))
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='CV')
ax.fill_between(scores[:, 0],
scores[:, 1] + scores[:, 3],
scores[:, 1] - scores[:, 3],
alpha=0.15, color='blue')
ax.fill_between(scores[:, 0],
scores[:, 2] + scores[:, 4],
scores[:, 2] - scores[:, 4],
alpha=0.15, color='orange')
ax.set_title(r'Ridge Regression with different $\alpha$')
ax.set_ylabel('$R^2$')
ax.set_xlabel(r'$\alpha$')
ax.legend();
Explanation: The score is very bad; the best choice for the regularization parameter seems to be 100, but there seems to be a serious bias problem with this dataset. We may need more features to get accurate predictions.
3 - Repeat part (2), this time using cross validation. Comment on your results.
Again, in the solution he uses sklearn's validation_curve...
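A minimal sketch of that approach for the same ridge pipeline and alpha grid used above (assuming the default scorer and 10-fold splitting are acceptable):
from sklearn.model_selection import validation_curve
alphas = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]
tr_scores, cv_scores = validation_curve(pipe, Xtrain, ytrain,
                                        param_name='ridge__alpha',
                                        param_range=alphas, cv=10)
# tr_scores.mean(axis=1) and cv_scores.mean(axis=1) correspond to the manually averaged scores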
End of explanation
# scaling and random forest
pipe = Pipeline([
('scl', StandardScaler()),
('forest', RandomForestRegressor())
])
scores = []
# looping through different number of trees
for n in [3, 10, 30, 100, 300]:
kfold = KFold(n_splits=10, random_state=1).split(Xtrain, ytrain)
fold_scores = []
pipe.set_params(forest__n_estimators=n)
print('# trees = {}'.format(n))
# looping through folds
for i, (train, test) in enumerate(kfold):
Xtr, ytr = Xtrain.iloc[train, :], ytrain.iloc[train]
Xcv, ycv = Xtrain.iloc[test, :], ytrain.iloc[test]
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
fold_scores.append([i, tr_score, cv_score])
print('Fold {} done'.format(i+1))
fold_scores = np.array(fold_scores)
scores.append([n, fold_scores.mean(axis=0)[1], fold_scores.mean(axis=0)[2],
fold_scores.std(axis=0)[1], fold_scores.std(axis=0)[2]])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='CV')
ax.fill_between(scores[:, 0],
scores[:, 1] + scores[:, 3],
scores[:, 1] - scores[:, 3],
alpha=0.15, color='blue')
ax.fill_between(scores[:, 0],
scores[:, 2] + scores[:, 4],
scores[:, 2] - scores[:, 4],
alpha=0.15, color='orange')
ax.set_title('Random Forest Regression with Different # of Trees')
ax.set_ylabel('$R^2$')
ax.set_xlabel('# of Trees')
ax.legend();
pipe.set_params(forest__n_estimators=100)
pipe.fit(Xtrain, ytrain)
pipe.score(Xtest, ytest)
Explanation: The two curves are even closer, but again performance is overall very poor. This plot confirms that we have high bias on this dataset.
4 - Repeat part (3) using a different machine learning algorithm. Comment on your results.
End of explanation
scl = StandardScaler()
sfm = SelectFromModel(LassoCV())
Xtrain_sc = scl.fit_transform(Xtrain)
Xtest_sc = scl.transform(Xtest)
Xtrain_lm = sfm.fit_transform(Xtrain_sc, ytrain)
Xtest_lm = sfm.transform(Xtest_sc)
train_scores, test_scores = validation_curve(Ridge(), Xtrain_lm, ytrain,
param_name='alpha',
param_range=[0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000])
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
fig, ax = plt.subplots(figsize=(16, 8), subplot_kw=dict(xscale='log'))
ax.plot([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000],
train_scores_mean, label='Train')
ax.fill_between([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000],
train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.2)
ax.plot([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000],
test_scores_mean, label='Test')
ax.fill_between([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000],
test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.2)
ax.legend();
pca = PCA()
pca.fit(Xtrain)
pca.explained_variance_ratio_
Explanation: Using a random forest we get better performance, but we have a serious high-variance problem in this case.
5 - Whichever of your two algorithms in parts (3) and (4) performed more poorly, perform a variable selection to see if you can improve your results.
I'm going to use PCA to try and improve the Ridge regression, but I don't think there is much to do...
In the solution he uses LassoCV with SelectFromModel:
End of explanation
pipe = Pipeline([
('scl', StandardScaler()),
('pca', PCA(n_components=12)),
('ridge', Ridge())
])
scores = []
for a in [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000, 3000]:
kfold = KFold(n_splits=10, random_state=1).split(Xtrain, ytrain)
fold_scores = []
pipe.set_params(ridge__alpha=a)
print('alpha = {}'.format(a))
for i, (train, test) in enumerate(kfold):
Xtr, ytr = Xtrain.iloc[train, :], ytrain.iloc[train]
Xcv, ycv = Xtrain.iloc[test, :], ytrain.iloc[test]
pipe.fit(Xtr, ytr)
tr_score = pipe.score(Xtr, ytr)
cv_score = pipe.score(Xcv, ycv)
fold_scores.append([i, tr_score, cv_score])
print('Fold {} done'.format(i+1))
fold_scores = np.array(fold_scores)
scores.append([a, fold_scores.mean(axis=0)[1], fold_scores.mean(axis=0)[2],
fold_scores.std(axis=0)[1], fold_scores.std(axis=0)[2]])
scores = np.array(scores)
scores
fig, ax = plt.subplots(figsize=(16, 8), subplot_kw=dict(xscale='log'))
ax.plot(scores[:, 0], scores[:, 1], label='Train')
ax.plot(scores[:, 0], scores[:, 2], label='CV')
ax.fill_between(scores[:, 0],
scores[:, 1] + scores[:, 3],
scores[:, 1] - scores[:, 3],
alpha=0.15, color='blue')
ax.fill_between(scores[:, 0],
scores[:, 2] + scores[:, 4],
scores[:, 2] - scores[:, 4],
alpha=0.15, color='orange')
ax.set_title(r'Ridge Regression with different $\alpha$')
ax.set_ylabel('$R^2$')
ax.set_xlabel(r'$\alpha$')
ax.legend();
Explanation: I guess we can try with 12 components
End of explanation
pipe = Pipeline([
('scl', StandardScaler()),
('pca', PCA(n_components=9)),
('poly', PolynomialFeatures()),
('ridge', Ridge())
])
alpha_range = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30]
degree_range = np.arange(1, 4)
# using grid search to tune regularization and degree of polynomials
param_grid = [{'poly__degree':degree_range, 'ridge__alpha': alpha_range}]
gs = GridSearchCV(estimator=pipe,
param_grid=param_grid,
cv=10,
n_jobs=2)
gs.fit(Xtrain, ytrain)
print(gs.best_score_)
print(gs.best_params_)
pipe.set_params(poly__degree=2, ridge__alpha=30)
pipe.fit(Xtrain, ytrain)
pipe.score(Xtest, ytest)
Explanation: Not much of an improvement...
6 - Pick a regression algorithm that has at least two hyperparameters and tune them using GridSearchCV. Comment on your results.
I'm going to try and add some polynomial features to the Ridge regression to see if it can perform better:
End of explanation |
10,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook gives the trend of a single word in a single mailing list.
Step1: You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.
To download, from an interactive Python shell, run
Step2: Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time. | Python Code:
%matplotlib inline
from bigbang.archive import Archive
import bigbang.parse as parse
import bigbang.graph as graph
import bigbang.mailman as mailman
import bigbang.process as process
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
from pprint import pprint as pp
import pytz
import numpy as np
import math
import nltk
from itertools import repeat
from nltk.stem.lancaster import LancasterStemmer
st = LancasterStemmer()
from nltk.corpus import stopwords
import re
urls = ["http://mail.scipy.org/pipermail/ipython-dev/"]#,
#"http://mail.scipy.org/pipermail/ipython-user/"],
#"http://mail.scipy.org/pipermail/scipy-dev/",
#"http://mail.scipy.org/pipermail/scipy-user/",
#"http://mail.scipy.org/pipermail/numpy-discussion/"]
archives= [Archive(url,archive_dir="../archives") for url in urls]
checkword = "python" #can change words, should be lower case
Explanation: This notebook gives the trend of a single word in a single mailing list.
End of explanation
df = pd.DataFrame(columns=["MessageId","Date","From","In-Reply-To","Count"])
for row in archives[0].data.iterrows():
try:
w = row[1]["Body"].replace("'", "")
k = re.sub(r'[^\w]', ' ', w)
k = k.lower()
t = nltk.tokenize.word_tokenize(k)
subdict = {}
count = 0
for g in t:
try:
word = st.stem(g)
except:
print g
pass
if word == checkword:
count += 1
if count == 0:
continue
else:
subdict["MessageId"] = row[0]
subdict["Date"] = row[1]["Date"]
subdict["From"] = row[1]["From"]
subdict["In-Reply-To"] = row[1]["In-Reply-To"]
subdict["Count"] = count
df = df.append(subdict,ignore_index=True)
except:
if row[1]["Body"] is None:
print '!!! Detected an email with an empty Body field...'
else: print 'error'
df[:5] # dataframe of information about the particular word
Explanation: You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.
To download, from an interactive Python shell, run:
import nltk
nltk.download()
And in the graphical UI that appears, choose "punkt" from the All Packages tab and Download.
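Alternatively, the same resource can be fetched non-interactively:
import nltk
nltk.download('punkt')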
End of explanation
df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({'Count':np.sum}).plot(y='Count')
Explanation: Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time.
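An equivalent spelling of the same aggregation, assuming Date is a proper datetime column, is to resample at a monthly frequency:
df.set_index('Date').resample('M')['Count'].sum().plot()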
End of explanation |
10,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The goal of punk is to make available some wrappers for a variety of machine learning pipelines.
The pipelines are termed primitives and each primitive is designed with a functional programming approach in mind.
At the time of this writing, punk is being periodically updated. Any new primitives will be released as a pip-installable Python package every Friday along with their corresponding annotations files for the broader D3M community.
Here we will briefly show how the primitives in the punk package can be utilized.
Step1: Feature Selection
Feature Selection for Classification Problems
The rfclassifier_feature_selection primitive takes in a dataset (training data along with labels) to output a ranking of features as shown below
Step2: Feature Selection for Regression Problems
Similarly, rfregressor_feature_selection can be used for regression type problems
Step3: To provide some context, below we show the correlation coefficients between some of the features in the boston dataset.
Notice how the two features that were ranked the most important by our primitive are also the two features with the highest correlation coefficient (in absolute value) with the dependent variable MEDV.
This figure was taken from the Python Machine Learning book.
Step4: Ranking Features by their Contributions to Principal Components
pca_feature_selection does a feature ranking based on the contributions each feature has to each of the principal components and by their contributions to the first principal component. | Python Code:
import punk
help(punk)
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from punk import feature_selection
Explanation: The goal of punk is to make available some wrappers for a variety of machine learning pipelines.
The pipelines are termed primitives and each primitive is designed with a functional programming approach in mind.
At the time of this writing, punk is being periodically updated. Any new primitives will be released as a pip-installable Python package every Friday along with their corresponding annotations files for the broader D3M community.
Here we will briefly show how the primitives in the punk package can be utilized.
End of explanation
# Wine dataset
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/'
'python-machine-learning-book/master/code/datasets/wine/wine.data',
header=None)
columns = np.array(['Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',
'Proline'])
# Split dataset
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X, _, y, _ = train_test_split(X, y, test_size=0.3, random_state=0)
%%time
# Run primitive
rfc = feature_selection.RFFeatures(problem_type="classification",
cv=3, scoring="accuracy", verbose=0, n_jobs=1)
rfc.fit(("matrix", "matrix"), (X, y))
indices = rfc.transform()
feature_importances = rfc.feature_importances
#feature_indices = rfc.indices
for i in range(len(columns)):
print("{:>2}) {:^30} {:.5f}".format(i+1,
columns[indices[i]],
feature_importances[indices[i]]
))
plt.figure(figsize=(9, 5))
plt.title('Feature Importances')
plt.bar(range(len(columns)), feature_importances[indices], color='lightblue', align='center')
plt.xticks(range(len(columns)), columns[indices], rotation=90, fontsize=14)
plt.xlim([-1, len(columns)])
plt.tight_layout()
plt.savefig('./random_forest.png', dpi=300)
plt.show()
Explanation: Feature Selection
Feature Selection for Classification Problems
The rfclassifier_feature_selection primitive takes in a dataset (training data along with labels) to output a ranking of features as shown below:
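Conceptually this is similar to fitting a random forest and sorting its feature_importances_; a plain scikit-learn sketch of that idea (not the primitive's actual implementation), reusing X, y and columns from above:
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]
print(columns[order][:5])  # five most important wine features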
End of explanation
# Get boston dataset
boston = datasets.load_boston()
X, y = boston.data, boston.target
%%time
# Run primitive
rfr = feature_selection.RFFeatures(problem_type="regression",
cv=3, scoring="r2", verbose=0, n_jobs=1)
rfr.fit(("matrix", "matrix"), (X, y))
indices = rfr.transform()
feature_importances = rfr.feature_importances
#feature_indices = rfr.indices
columns = boston.feature_names
for i in range(len(columns)):
print("{:>2}) {:^15} {:.5f}".format(i+1,
columns[indices[i]],
feature_importances[indices[i]]
))
plt.figure(figsize=(9, 5))
plt.title('Feature Importances')
plt.bar(range(len(columns)), feature_importances[indices], color='lightblue', align='center')
plt.xticks(range(len(columns)), columns[indices], rotation=90, fontsize=14)
plt.xlim([-1, len(columns)])
plt.tight_layout()
plt.savefig('./random_forest.png', dpi=300)
plt.show()
Explanation: Feature Selection for Regression Problems
Similarly, rfregressor_feature_selection can be used for regression type problems:
End of explanation
import matplotlib.image as mpimg
img=mpimg.imread("heatmap.png")
plt.figure(figsize=(10, 10))
plt.axis("off")
plt.imshow(img);
Explanation: To provide some context, below we show the correlation coefficients between some of the features in the boston dataset.
Notice how the two features that were ranked the most important by our primitive are also the two features with the highest correlation coefficient (in absolute value) with the dependent variable MEDV.
This figure was taken from the Python Machine Learning book.
End of explanation
# Get iris dataset
iris = datasets.load_iris()
sc = StandardScaler()
X = sc.fit_transform(iris.data)
# run primitive
iris_ranking = feature_selection.PCAFeatures()
iris_ranking.fit(["matrix"], X)
importances = iris_ranking.transform()
feature_names = np.array(iris.feature_names)
print(feature_names, '\n')
for i in range(importances["importance_onallpcs"].shape[0]):
print("{:>2}) {:^19}".format(i+1, feature_names[iris_ranking.importance_onallpcs[i]]))
plt.figure(figsize=(9, 5))
plt.bar(range(1, 5), iris_ranking.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 5), np.cumsum(iris_ranking.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.xticks([1, 2, 3, 4])
plt.show()
Explanation: Ranking Features by their Contributions to Principal Components
pca_feature_selection does a feature ranking based on the contributions each feature has to each of the principal components and by their contributions to the first principal component.
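For intuition, a bare-bones version of ranking by first-principal-component loadings with plain scikit-learn (an illustration of the idea, not the primitive's exact code, reusing the standardized X and feature_names from above):
from sklearn.decomposition import PCA
pca = PCA().fit(X)
first_pc_loadings = np.abs(pca.components_[0])
print(feature_names[np.argsort(first_pc_loadings)[::-1]])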
End of explanation |
10,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross Spectra
This tutorial shows how to make and manipulate a cross spectrum of two light curves using Stingray.
Step1: 1. Create two light curves
There are two ways to make Lightcurve objects. We'll show one way here. Check out "Lightcurve/Lightcurve\ tutorial.ipynb" for more examples.
Generate an array of relative timestamps that's 8 seconds long, with dt = 0.03125 s, and make two signals in units of counts. The first is a sine wave with amplitude = 300 cts/s, frequency = 2 Hz, phase offset = 0 radians, and mean = 1000 cts/s. The second is a sine wave with amplitude = 200 cts/s, frequency = 2 Hz, phase offset = pi/4 radians, and mean = 900 cts/s. We then add Poisson noise to the light curves.
Step2: Now let's turn noisy_1 and noisy_2 into Lightcurve objects.
Step3: Here we're plotting them to see what they look like.
Step4: 2. Pass both of the light curves to the Crossspectrum class to create a Crossspectrum object.
The first Lightcurve passed is the channel of interest or interest band, and the second Lightcurve passed is the reference band.
You can also specify the optional attribute norm if you wish to normalize the real part of the cross spectrum to squared fractional rms, Leahy, or squared absolute normalization. The default normalization is 'frac'.
Step5: Note that, in principle, the Crossspectrum object could have been initialized directly as
ps = Crossspectrum(lc1, lc2, norm="leahy")
However, we recommend using the specific method for input light curve objects used above, for clarity. Equivalently, one can initialize a Crossspectrum object
Step6: Since the negative Fourier frequencies (and their associated cross powers) are discarded, the number of time bins per segment n is twice the length of freq and power.
Step7: Properties
A Crossspectrum object has the following properties
Step8: You'll notice that the cross spectrum is a bit noisy. This is because we're only using one segment of data. Let's try averaging together multiple segments of data.
Averaged cross spectrum example
You could use two long Lightcurves and have AveragedCrossspectrum chop them into specified segments, or give two lists of Lightcurves where each segment of Lightcurve is the same length. We'll show the first way here. Remember to check the Lightcurve tutorial notebook for fancier ways of making light curves.
1. Create two long light curves.
Generate an array of relative timestamps that's 1600 seconds long, and two signals in count rate units, with the same properties as the previous example. We then add Poisson noise and turn them into Lightcurve objects.
Step9: 2. Pass both light curves to the AveragedCrossspectrum class with a specified segment_size.
If the exposure (length) of the light curve cannot be divided by segment_size with a remainder of zero, the last incomplete segment is thrown out, to avoid signal artefacts. Here we're using 8 second segments.
Step10: Note that also the AveragedCrossspectrum object could have been initialized using different input types
Step11: If m is less than 50 and you try to compute the coherence, a warning will pop up letting you know that your number of segments is significantly low, so the error on coherence might not follow the expected (Gaussian) statistical distributions.
Step12: Properties
An AveragedCrossspectrum object has the following properties, same as Crossspectrum
Step13: Now we'll show examples of all the things you can do with a Crossspectrum or AveragedCrossspectrum object using built-in stingray methods.
Normalizing the cross spectrum
The three kinds of normalization are
Step14: Here we plot the three normalized averaged cross spectra.
Step15: Re-binning a cross spectrum in frequency
Typically, rebinning is done on an averaged, normalized cross spectrum.
1. We can linearly re-bin a cross spectrum
(although this is not done much in practice)
Step16: 2. And we can logarithmically/geometrically re-bin a cross spectrum
In this re-binning, each bin size is 1+f times larger than the previous bin size, where f is user-specified and normally in the range 0.01-0.1. The default value is f=0.01.
Logarithmic rebinning only keeps the real part of the cross spectrum.
Step17: Note that like rebin, rebin_log returns a Crossspectrum or AveragedCrossspectrum object (depending on the input object)
Step18: Time lags / phase lags
1. Frequency-dependent lags
The lag-frequency spectrum shows the time lag between two light curves (usually non-overlapping broad energy bands) as a function of Fourier frequency.
See Uttley et al. 2014, A&ARev, 22, 72 section 2.2.1.
Step19: The time_lag method returns an np.ndarray with the time lag in seconds per positive Fourier frequency.
Step20: And this is a plot of the lag-frequency spectrum.
Step21: 2. Energy-dependent lags
The lag vs energy spectrum can be calculated using the LagEnergySpectrum from stingray.varenergy. Refer to the Spectral Timing documentation.
Coherence
Coherence is a Fourier-frequency-dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
See Vaughan and Nowak 1997, ApJ, 474, L43 and Uttley et al. 2014, A&ARev, 22, 72 section 2.1.3.
Step22: The coherence method returns two np.ndarrays, of the coherence and uncertainty.
Step23: The coherence and uncertainty have the same length as the positive Fourier frequencies.
Step24: And we can plot the coherence vs the frequency. | Python Code:
import numpy as np
from stingray import Lightcurve, Crossspectrum, AveragedCrossspectrum
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
%matplotlib inline
font_prop = font_manager.FontProperties(size=16)
Explanation: Cross Spectra
This tutorial shows how to make and manipulate a cross spectrum of two light curves using Stingray.
End of explanation
dt = 0.03125 # seconds
exposure = 8. # seconds
times = np.arange(0, exposure, dt) # seconds
signal_1 = 300 * np.sin(2.*np.pi*times/0.5) + 1000 # counts/s
signal_2 = 200 * np.sin(2.*np.pi*times/0.5 + np.pi/4) + 900 # counts/s
noisy_1 = np.random.poisson(signal_1*dt) # counts
noisy_2 = np.random.poisson(signal_2*dt) # counts
Explanation: 1. Create two light curves
There are two ways to make Lightcurve objects. We'll show one way here. Check out "Lightcurve/Lightcurve\ tutorial.ipynb" for more examples.
Generate an array of relative timestamps that's 8 seconds long, with dt = 0.03125 s, and make two signals in units of counts. The first is a sine wave with amplitude = 300 cts/s, frequency = 2 Hz, phase offset = 0 radians, and mean = 1000 cts/s. The second is a sine wave with amplitude = 200 cts/s, frequency = 2 Hz, phase offset = pi/4 radians, and mean = 900 cts/s. We then add Poisson noise to the light curves.
End of explanation
lc1 = Lightcurve(times, noisy_1)
lc2 = Lightcurve(times, noisy_2)
Explanation: Now let's turn noisy_1 and noisy_2 into Lightcurve objects.
End of explanation
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax.plot(lc1.time, lc1.counts, lw=2, color='blue')
ax.plot(lc1.time, lc2.counts, lw=2, color='red')
ax.set_xlabel("Time (s)", fontproperties=font_prop)
ax.set_ylabel("Counts (cts)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16)
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
plt.show()
Explanation: Here we're plotting them to see what they look like.
End of explanation
cs = Crossspectrum.from_lightcurve(lc1, lc2)
print(cs)
Explanation: 2. Pass both of the light curves to the Crossspectrum class to create a Crossspectrum object.
The first Lightcurve passed is the channel of interest or interest band, and the second Lightcurve passed is the reference band.
You can also specify the optional attribute norm if you wish to normalize the real part of the cross spectrum to squared fractional rms, Leahy, or squared absolute normalization. The default normalization is 'frac'.
End of explanation
print(cs.freq[0:5])
print(cs.power[0:5])
Explanation: Note that, in principle, the Crossspectrum object could have been initialized directly as
ps = Crossspectrum(lc1, lc2, norm="leahy")
However, we recommend using the specific method for input light curve objects used above, for clarity. Equivalently, one can initialize a Crossspectrum object:
from EventList objects as
bin_time = 0.1
ps = Crossspectrum.from_events(events1, events2, dt=bin_time, norm="leahy")
where the light curves, uniformly binned at 0.1 s, are created internally.
from numpy arrays of times, as
bin_time = 0.1
ps = Crossspectrum.from_events(times1, times2, dt=bin_time, gti=[[t0, t1], [t2, t3], ...], norm="leahy")
where the light curves, uniformly binned at 0.1 s in this case, are created internally, and the good time intervals (time interval where the instrument was collecting data nominally) are passed by hand. Note that the frequencies of the cross spectrum will be expressed in inverse units as the input time arrays. If the times are expressed in seconds, frequencies will be in Hz; with times in days, frequencies will be in 1/d, and so on. We do not support units (e.g. astropy units) yet, so the user should pay attention to these details.
from an iterable of light curves
ps = Crossspectrum.from_lc_iter(lc_iterable1, lc_iterable2, dt=bin_time, norm="leahy")
where lc_iterableX is any iterable of Lightcurve objects (list, tuple, generator, etc.) and dt is the sampling time of the light curves. Note that this dt is needed because the iterables might be generators, in which case the light curves are lazy-loaded after a bunch of operations using dt have been done.
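For example, such an iterable could be as simple as a generator of Lightcurve objects (purely illustrative, reusing times, signal_1, signal_2 and dt from above):
lc_iterable1 = (Lightcurve(times, np.random.poisson(signal_1*dt)) for _ in range(5))
lc_iterable2 = (Lightcurve(times, np.random.poisson(signal_2*dt)) for _ in range(5))
# cs_iter = Crossspectrum.from_lc_iter(lc_iterable1, lc_iterable2, dt=dt, norm="leahy")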
We can print the first five values in the arrays of the positive Fourier frequencies and the cross power. The cross power has a real and an imaginary component.
End of explanation
print("Size of positive Fourier frequencies: %d" % len(cs.freq))
print("Number of data points per segment: %d" % cs.n)
Explanation: Since the negative Fourier frequencies (and their associated cross powers) are discarded, the number of time bins per segment n is twice the length of freq and power.
End of explanation
cs_amplitude = np.abs(cs.power) # The mod square of the real and imaginary components
fig, ax1 = plt.subplots(1,1,figsize=(9,6), sharex=True)
ax1.plot(cs.freq, cs_amplitude, lw=2, color='blue')
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Cross spectral amplitude", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=16)
ax1.tick_params(axis='y', labelsize=16)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
plt.show()
Explanation: Properties
A Crossspectrum object has the following properties :
freq : Numpy array of mid-bin frequencies that the Fourier transform samples.
power : Numpy array of the cross spectrum (complex numbers).
df : The frequency resolution.
m : The number of cross spectra averaged together. For a Crossspectrum of a single segment, m=1.
n : The number of data points (time bins) in one segment of the light curves.
nphots1 : The total number of photons in the first (interest) light curve.
nphots2 : The total number of photons in the second (reference) light curve.
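These attributes can be inspected directly, for example:
print(cs.df, cs.m, cs.n)
print(cs.nphots1, cs.nphots2)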
We can compute the amplitude of the cross spectrum, and plot it as a function of Fourier frequency. Notice how there's a spike at our signal frequency of 2 Hz!
End of explanation
long_dt = 0.03125 # seconds
long_exposure = 1600. # seconds
long_times = np.arange(0, long_exposure, long_dt) # seconds
# In count rate units here
long_signal_1 = 300 * np.sin(2.*np.pi*long_times/0.5) + 1000 # counts/s
long_signal_2 = 200 * np.sin(2.*np.pi*long_times/0.5 + np.pi/4) + 900 # counts/s
# Multiply by dt to get count units, then add Poisson noise
long_noisy_1 = np.random.poisson(long_signal_1*dt) # counts
long_noisy_2 = np.random.poisson(long_signal_2*dt) # counts
long_lc1 = Lightcurve(long_times, long_noisy_1)
long_lc2 = Lightcurve(long_times, long_noisy_2)
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax.plot(long_lc1.time, long_lc1.counts, lw=2, color='blue')
ax.plot(long_lc1.time, long_lc2.counts, lw=2, color='red')
ax.set_xlim(0,20)
ax.set_xlabel("Time (s)", fontproperties=font_prop)
ax.set_ylabel("Counts (cts)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16)
ax.tick_params(axis='y', labelsize=16)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
plt.show()
Explanation: You'll notice that the cross spectrum is a bit noisy. This is because we're only using one segment of data. Let's try averaging together multiple segments of data.
Averaged cross spectrum example
You could use two long Lightcurves and have AveragedCrossspectrum chop them into specified segments, or give two lists of Lightcurves where each segment of Lightcurve is the same length. We'll show the first way here. Remember to check the Lightcurve tutorial notebook for fancier ways of making light curves.
1. Create two long light curves.
Generate an array of relative timestamps that's 1600 seconds long, and two signals in count rate units, with the same properties as the previous example. We then add Poisson noise and turn them into Lightcurve objects.
End of explanation
avg_cs = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 8.)
Explanation: 2. Pass both light curves to the AveragedCrossspectrum class with a specified segment_size.
If the exposure (length) of the light curve cannot be divided by segment_size with a remainder of zero, the last incomplete segment is thrown out, to avoid signal artefacts. Here we're using 8 second segments.
End of explanation
print(avg_cs.freq[0:5])
print(avg_cs.power[0:5])
print("\nNumber of segments: %d" % avg_cs.m)
Explanation: Note that also the AveragedCrossspectrum object could have been initialized using different input types:
from EventList objects as
bin_time = 0.1
ps = AveragedCrossspectrum.from_events(
events1, events2, dt=bin_time, segment_size=segment_size,
norm="leahy")
(note, again, the necessity of the bin time)
from numpy arrays of times, as
bin_time = 0.1
ps = AveragedCrossspectrum.from_events(
times1, times2, dt=bin_time, segment_size=segment_size,
gti=[[t0, t1], [t2, t3], ...], norm="leahy")
where the light curves, uniformly binned at 0.1 s in this case, are created internally, and the good time intervals (time interval where the instrument was collecting data nominally) are passed by hand. Note that the frequencies of the cross spectrum will be expressed in inverse units as the input time arrays. If the times are expressed in seconds, frequencies will be in Hz; with times in days, frequencies will be in 1/d, and so on. We do not support units (e.g. astropy units) yet, so the user should pay attention to these details.
from iterables of light curves
ps = AveragedCrossspectrum.from_lc_iter(
lc_iterable1, lc_iterable2, dt=bin_time, segment_size=segment_size,
norm="leahy")
where lc_iterableX is any iterable of Lightcurve objects (list, tuple, generator, etc.) and dt is the sampling time of the light curves. Note that this dt is needed because the iterables might be generators, in which case the light curves are lazy-loaded after a bunch of operations using dt have been done.
Again we can print the first five Fourier frequencies and first five cross spectral values, as well as the number of segments.
End of explanation
test_cs = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 40.)
print(test_cs.m)
coh, err = test_cs.coherence()
Explanation: If m is less than 50 and you try to compute the coherence, a warning will pop up letting you know that your number of segments is significantly low, so the error on coherence might not follow the expected (Gaussian) statistical distributions.
End of explanation
avg_cs_amplitude = np.abs(avg_cs.power)
fig, ax1 = plt.subplots(1,1,figsize=(9,6))
ax1.plot(avg_cs.freq, avg_cs_amplitude, lw=2, color='blue')
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Cross spectral amplitude", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=16)
ax1.tick_params(axis='y', labelsize=16)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
plt.show()
Explanation: Properties
An AveragedCrossspectrum object has the following properties, same as Crossspectrum :
freq : Numpy array of mid-bin frequencies that the Fourier transform samples.
power : Numpy array of the averaged cross spectrum (complex numbers).
df : The frequency resolution (in Hz).
m : The number of cross spectra averaged together, equal to the number of whole segments in a light curve.
n : The number of data points (time bins) in one segment of the light curves.
nphots1 : The total number of photons in the first (interest) light curve.
nphots2 : The total number of photons in the second (reference) light curve.
Let's plot the amplitude of the averaged cross spectrum!
End of explanation
avg_cs_leahy = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 8., norm='leahy')
avg_cs_frac = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 8., norm='frac')
avg_cs_abs = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 8., norm='abs')
Explanation: Now we'll show examples of all the things you can do with a Crossspectrum or AveragedCrossspectrum object using built-in stingray methods.
Normalizing the cross spectrum
The three kinds of normalization are:
* leahy: Leahy normalization. Makes the Poisson noise level $= 2$. See Leahy et al. 1983, ApJ, 266, 160L.
* frac: Fractional rms-squared normalization, also known as rms normalization. Makes the Poisson noise level $= 2 / \sqrt{\mathrm{meanrate}_1 \times \mathrm{meanrate}_2}$. See Belloni & Hasinger 1990, A&A, 227, L33, and Miyamoto et al. 1992, ApJ, 391, L21. This is the default.
* abs: Absolute rms-squared normalization, also known as absolute normalization. Makes the Poisson noise level $= 2 \times \sqrt{\mathrm{meanrate}_1 \times \mathrm{meanrate}_2}$. See insert citation.
* none: No normalization applied.
Note that these normalizations and the Poisson noise levels apply to the "cross power", not the cross-spectral amplitude.
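As a rough sanity check (a sketch only; the exact conventions are handled internally by Stingray), the quoted noise levels can be computed from the mean count rates of the two light curves:
mean_rate_1 = np.mean(long_lc1.counts) / long_dt
mean_rate_2 = np.mean(long_lc2.counts) / long_dt
print(2.)                                       # Leahy noise level
print(2. / np.sqrt(mean_rate_1 * mean_rate_2))  # fractional rms-squared noise level
print(2. * np.sqrt(mean_rate_1 * mean_rate_2))  # absolute rms-squared noise level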
End of explanation
fig, [ax1, ax2, ax3] = plt.subplots(3,1,figsize=(6,12))
ax1.plot(avg_cs_leahy.freq, avg_cs_leahy.power, lw=2, color='black')
ax1.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax1.set_ylabel("Leahy cross-power", fontproperties=font_prop)
ax1.set_yscale('log')
ax1.tick_params(axis='x', labelsize=14)
ax1.tick_params(axis='y', labelsize=14)
ax1.tick_params(which='major', width=1.5, length=7)
ax1.tick_params(which='minor', width=1.5, length=4)
ax1.set_title("Leahy norm.", fontproperties=font_prop)
ax2.plot(avg_cs_frac.freq, avg_cs_frac.power, lw=2, color='black')
ax2.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax2.set_ylabel("rms cross-power", fontproperties=font_prop)
ax2.tick_params(axis='x', labelsize=14)
ax2.tick_params(axis='y', labelsize=14)
ax2.set_yscale('log')
ax2.tick_params(which='major', width=1.5, length=7)
ax2.tick_params(which='minor', width=1.5, length=4)
ax2.set_title("Fractional rms-squared norm.", fontproperties=font_prop)
ax3.plot(avg_cs_abs.freq, avg_cs_abs.power, lw=2, color='black')
ax3.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax3.set_ylabel("Absolute cross-power", fontproperties=font_prop)
ax3.tick_params(axis='x', labelsize=14)
ax3.tick_params(axis='y', labelsize=14)
ax3.set_yscale('log')
ax3.tick_params(which='major', width=1.5, length=7)
ax3.tick_params(which='minor', width=1.5, length=4)
ax3.set_title("Absolute rms-squared norm.", fontproperties=font_prop)
for axis in ['top', 'bottom', 'left', 'right']:
ax1.spines[axis].set_linewidth(1.5)
ax2.spines[axis].set_linewidth(1.5)
ax3.spines[axis].set_linewidth(1.5)
plt.tight_layout()
plt.show()
Explanation: Here we plot the three normalized averaged cross spectra.
End of explanation
print("DF before:", avg_cs.df)
# Both of the following ways are allowed syntax:
# lin_rb_cs = Crossspectrum.rebin(avg_cs, 0.25, method='mean')
lin_rb_cs = avg_cs.rebin(0.25, method='mean')
print("DF after:", lin_rb_cs.df)
Explanation: Re-binning a cross spectrum in frequency
Typically, rebinning is done on an averaged, normalized cross spectrum.
1. We can linearly re-bin a cross spectrum
(although this is not done much in practice)
End of explanation
# Both of the following ways are allowed syntax:
# log_rb_cs, log_rb_freq, binning = Crossspectrum.rebin_log(avg_cs, f=0.02)
log_rb_cs = avg_cs.rebin_log(f=0.02)
Explanation: 2. And we can logarithmically/geometrically re-bin a cross spectrum
In this re-binning, each bin size is 1+f times larger than the previous bin size, where f is user-specified and normally in the range 0.01-0.1. The default value is f=0.01.
Logarithmic rebinning only keeps the real part of the cross spectrum.
End of explanation
print(type(lin_rb_cs))
Explanation: Note that like rebin, rebin_log returns a Crossspectrum or AveragedCrossspectrum object (depending on the input object):
End of explanation
long_dt = 0.03125 # seconds
long_exposure = 1600. # seconds
long_times = np.arange(0, long_exposure, long_dt) # seconds
# long_signal_1 = 300 * np.sin(2.*np.pi*long_times/0.5) + 100 * np.sin(2.*np.pi*long_times*5 + np.pi/6) + 1000
# long_signal_2 = 200 * np.sin(2.*np.pi*long_times/0.5 + np.pi/4) + 80 * np.sin(2.*np.pi*long_times*5) + 900
long_signal_1 = (300 * np.sin(2.*np.pi*long_times*3) + 1000) * dt
long_signal_2 = (200 * np.sin(2.*np.pi*long_times*3 + np.pi/3) + 900) * dt
long_lc1 = Lightcurve(long_times, long_signal_1)
long_lc2 = Lightcurve(long_times, long_signal_2)
avg_cs = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 8.)
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax.plot(long_lc1.time, long_lc1.counts, lw=2, color='blue')
ax.plot(long_lc1.time, long_lc2.counts, lw=2, color='red')
ax.set_xlim(0,4)
ax.set_xlabel("Time (s)", fontproperties=font_prop)
ax.set_ylabel("Counts (cts)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16)
ax.tick_params(axis='y', labelsize=16)
plt.show()
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax.plot(avg_cs.freq, avg_cs.power, lw=2, color='blue')
plt.show()
Explanation: Time lags / phase lags
1. Frequency-dependent lags
The lag-frequency spectrum shows the time lag between two light curves (usually non-overlapping broad energy bands) as a function of Fourier frequency.
See Uttley et al. 2014, A&ARev, 22, 72 section 2.2.1.
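For the sinusoids used above (3 Hz with a phase offset of pi/3), the expected lag magnitude is phase / (2 pi frequency), about 0.056 s (the sign depends on the lag convention):
expected_lag = (np.pi/3) / (2*np.pi*3.)  # ~0.056 s
print(expected_lag)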
End of explanation
freq_lags, freq_lags_err = avg_cs.time_lag()
Explanation: The time_lag method returns an np.ndarray with the time lag in seconds per positive Fourier frequency.
End of explanation
fig, ax = plt.subplots(1,1,figsize=(8,5))
ax.hlines(0, avg_cs.freq[0], avg_cs.freq[-1], color='black', linestyle='dashed', lw=2)
ax.errorbar(avg_cs.freq, freq_lags, yerr=freq_lags_err,fmt="o", lw=1, color='blue')
ax.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax.set_ylabel("Time lag (s)", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(1.5)
plt.show()
Explanation: And this is a plot of the lag-frequency spectrum.
End of explanation
long_dt = 0.03125 # seconds
long_exposure = 1600. # seconds
long_times = np.arange(0, long_exposure, long_dt) # seconds
long_signal_1 = 300 * np.sin(2.*np.pi*long_times/0.5) + 1000
long_signal_2 = 200 * np.sin(2.*np.pi*long_times/0.5 + np.pi/4) + 900
long_noisy_1 = np.random.poisson(long_signal_1*dt)
long_noisy_2 = np.random.poisson(long_signal_2*dt)
long_lc1 = Lightcurve(long_times, long_noisy_1)
long_lc2 = Lightcurve(long_times, long_noisy_2)
avg_cs = AveragedCrossspectrum.from_lightcurve(long_lc1, long_lc2, 8.)
Explanation: 2. Energy-dependent lags
The lag vs energy spectrum can be calculated using the LagEnergySpectrum from stingray.varenergy. Refer to the Spectral Timing documentation.
Coherence
Coherence is a Fourier-frequency-dependent measure of the linear correlation between time series measured simultaneously in two energy channels.
See Vaughan and Nowak 1997, ApJ, 474, L43 and Uttley et al. 2014, A&ARev, 22, 72 section 2.1.3.
End of explanation
coh, err_coh = avg_cs.coherence()
Explanation: The coherence method returns two np.ndarrays, of the coherence and uncertainty.
End of explanation
print(len(coh) == len(avg_cs.freq))
Explanation: The coherence and uncertainty have the same length as the positive Fourier frequencies.
End of explanation
fig, ax = plt.subplots(1,1,figsize=(8,5))
# ax.hlines(0, avg_cs.freq[0], avg_cs.freq[-1], color='black', linestyle='dashed', lw=2)
ax.errorbar(avg_cs.freq, coh, yerr=err_coh, lw=2, color='blue')
ax.set_xlabel("Frequency (Hz)", fontproperties=font_prop)
ax.set_ylabel("Coherence", fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=14)
ax.tick_params(axis='y', labelsize=14)
ax.tick_params(which='major', width=1.5, length=7)
ax.tick_params(which='minor', width=1.5, length=4)
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(1.5)
plt.show()
Explanation: And we can plot the coherence vs the frequency.
End of explanation |
10,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import necessary libraries
Step1: K-means clustering
Example adapted from here.
Load dataset
Step2: Define and train model
Step3: Extract the labels and the cluster centers
Step4: Plot the clusters
Step6: Gaussian Mixture Model
Example taken from here.
Define a visualization function
Step7: Load dataset and make training and test splits
Step8: Train and compare different GMMs
Step9: Hierarchical Agglomerative Clustering
Example taken from here.
Load and pre-process dataset
Step10: Visualize the clustering
Step11: Create a 2D embedding of the digits dataset
Step12: Train and visualize the clusters
Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach.
Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
Average linkage minimizes the average of the distances between all observations of pairs of clusters. | Python Code:
import numpy as np
from scipy import ndimage
from time import time
from sklearn import datasets, manifold
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GMM
from sklearn.cross_validation import StratifiedKFold
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
Explanation: Import necessary libraries
End of explanation
iris = datasets.load_iris()
X,y = iris.data[:,:2], iris.target
Explanation: K-means clustering
Example adapted from here.
Load dataset
End of explanation
num_clusters = 8
model = KMeans(n_clusters=num_clusters)
model.fit(X)
Explanation: Define and train model
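Once fitted, the model can also report its inertia and assign new points to clusters, for example:
print(model.inertia_)        # sum of squared distances to the closest cluster centers
print(model.predict(X[:5]))  # cluster assignments for the first five samples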
End of explanation
labels = model.labels_
cluster_centers = model.cluster_centers_
print(cluster_centers)
Explanation: Extract the labels and the cluster centers
End of explanation
plt.scatter(X[:,0], X[:,1],c=labels.astype(np.float))
plt.hold(True)
plt.scatter(cluster_centers[:,0], cluster_centers[:,1], c = np.arange(num_clusters), marker = '^', s = 150)
plt.show()
plt.scatter(X[:,0], X[:,1],c=np.choose(y,[0,2,1]).astype(np.float))
plt.show()
Explanation: Plot the clusters
End of explanation
def make_ellipses(gmm, ax):
Visualize the gaussians in a GMM as ellipses
for n, color in enumerate('rgb'):
v, w = np.linalg.eigh(gmm._get_covars()[n][:2, :2])
u = w[0] / np.linalg.norm(w[0])
angle = np.arctan2(u[1], u[0])
angle = 180 * angle / np.pi # convert to degrees
v *= 9
ell = mpl.patches.Ellipse(gmm.means_[n, :2], v[0], v[1],
180 + angle, color=color)
ell.set_clip_box(ax.bbox)
ell.set_alpha(0.5)
ax.add_artist(ell)
Explanation: Gaussian Mixture Model
Example taken from here.
Define a visualization function
End of explanation
iris = datasets.load_iris()
# Break up the dataset into non-overlapping training (75%) and testing
# (25%) sets.
skf = StratifiedKFold(iris.target, n_folds=4)
# Only take the first fold.
train_index, test_index = next(iter(skf))
X_train = iris.data[train_index]
y_train = iris.target[train_index]
X_test = iris.data[test_index]
y_test = iris.target[test_index]
n_classes = len(np.unique(y_train))
Explanation: Load dataset and make training and test splits
End of explanation
# Try GMMs using different types of covariances.
classifiers = dict((covar_type, GMM(n_components=n_classes,
covariance_type=covar_type, init_params='wc', n_iter=20))
for covar_type in ['spherical', 'diag', 'tied', 'full'])
n_classifiers = len(classifiers)
plt.figure(figsize=(2*3 * n_classifiers / 2, 2*6))
plt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.05,
left=.01, right=.99)
for index, (name, classifier) in enumerate(classifiers.items()):
# Since we have class labels for the training data, we can
# initialize the GMM parameters in a supervised manner.
classifier.means_ = np.array([X_train[y_train == i].mean(axis=0)
for i in xrange(n_classes)])
# Train the other parameters using the EM algorithm.
classifier.fit(X_train)
h = plt.subplot(2, n_classifiers / 2, index + 1)
make_ellipses(classifier, h)
for n, color in enumerate('rgb'):
data = iris.data[iris.target == n]
plt.scatter(data[:, 0], data[:, 1], 0.8, color=color,
label=iris.target_names[n])
# Plot the test data with crosses
for n, color in enumerate('rgb'):
data = X_test[y_test == n]
plt.plot(data[:, 0], data[:, 1], 'x', color=color)
y_train_pred = classifier.predict(X_train)
train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100
plt.text(0.05, 0.9, 'Train accuracy: %.1f' % train_accuracy,
transform=h.transAxes)
y_test_pred = classifier.predict(X_test)
test_accuracy = np.mean(y_test_pred.ravel() == y_test.ravel()) * 100
plt.text(0.05, 0.8, 'Test accuracy: %.1f' % test_accuracy,
transform=h.transAxes)
plt.xticks(())
plt.yticks(())
plt.title(name)
plt.legend(loc='lower right', prop=dict(size=12))
plt.show()
Explanation: Train and compare different GMMs
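Note that in newer scikit-learn releases the GMM class is replaced by sklearn.mixture.GaussianMixture; a minimal sketch of an equivalent fit (ignoring the supervised initialization used above) would be:
from sklearn.mixture import GaussianMixture
gm = GaussianMixture(n_components=n_classes, covariance_type='full', max_iter=20)
gm.fit(X_train)
print(gm.means_.shape)  # (n_classes, n_features)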
End of explanation
digits = datasets.load_digits(n_class=10)
X = digits.data
y = digits.target
n_samples, n_features = X.shape
np.random.seed(0)
def nudge_images(X, y):
# Having a larger dataset shows more clearly the behavior of the
# methods, but we multiply the size of the dataset only by 2, as the
# cost of the hierarchical clustering methods are strongly
# super-linear in n_samples
shift = lambda x: ndimage.shift(x.reshape((8, 8)),
.3 * np.random.normal(size=2),
mode='constant',
).ravel()
X = np.concatenate([X, np.apply_along_axis(shift, 1, X)])
Y = np.concatenate([y, y], axis=0)
return X, Y
X, y = nudge_images(X, y)
Explanation: Hierarchical Agglomerative Clustering
Example taken from here.
Load and pre-process dataset
End of explanation
def plot_clustering(X_red, X, labels, title=None):
x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)
X_red = (X_red - x_min) / (x_max - x_min)
plt.figure(figsize=(2*6, 2*4))
for i in range(X_red.shape[0]):
plt.text(X_red[i, 0], X_red[i, 1], str(y[i]),
color=plt.cm.spectral(labels[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
plt.xticks([])
plt.yticks([])
if title is not None:
plt.title(title, size=17)
plt.axis('off')
plt.tight_layout()
Explanation: Visualize the clustering
End of explanation
print("Computing embedding")
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
print("Done.")
Explanation: Create a 2D embedding of the digits dataset
End of explanation
from sklearn.cluster import AgglomerativeClustering
for linkage in ('ward', 'average', 'complete'):
clustering = AgglomerativeClustering(linkage=linkage, n_clusters=10)
t0 = time()
clustering.fit(X_red)
print("%s : %.2fs" % (linkage, time() - t0))
plot_clustering(X_red, X, clustering.labels_, "%s linkage" % linkage)
plt.show()
Explanation: Train and visualize the clusters
Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach.
Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
Average linkage minimizes the average of the distances between all observations of pairs of clusters.
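To inspect the merge structure itself, one could also build a linkage tree with scipy and draw a dendrogram (an optional sketch on a subsample, not part of the original example):
from scipy.cluster.hierarchy import linkage, dendrogram
Z = linkage(X_red[:200], method='ward')
plt.figure(figsize=(12, 4))
dendrogram(Z, no_labels=True)
plt.show()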
End of explanation |
10,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TensorFlow Distributions
Step2: Basic Univariate Distributions
Let's dive right in and create a normal distribution
Step3: We can draw a sample from it
Step4: We can draw multiple samples
Step5: We can evaluate a log prob
Step6: We can evaluate multiple log probabilities
Step7: We have a wide range of distributions. Let's try a Bernoulli
Step8: Multivariate Distributions
We'll create a multivariate normal with a diagonal covariance
Step9: Comparing this to the univariate normal we created earlier, what's different?
Step10: We see that the univariate normal has an event_shape of (), indicating it's a scalar distribution. The multivariate normal has an event_shape of 2, indicating the basic [event space](https
Step11: Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
Step12: Multiple Distributions
Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single Distribution object
Step13: It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python Distribution object. The three distributions cannot be manipulated individually. Note how the batch_shape is (3,), indicating a batch of three distributions, and the event_shape is (), indicating the individual distributions have a univariate event space.
If we call sample, we get a sample from all three
Step14: If we call prob, (this has the same shape semantics as log_prob; we use prob with these small Bernoulli examples for clarity, although log_prob is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value
Step15: Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a for loop (at least in Eager mode, in TF graph mode you'd need a tf.while loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators.
Using Independent To Aggregate Batches to Events
In the previous section, we created b3, a single Distribution object that represented three coin flips. If we called b3.prob on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.
Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, prob on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.
How do we accomplish this? We use a "higher-order" distribution called Independent, which takes a distribution and yields a new distribution with the batch shape moved to the event shape
Step16: Compare the shape to that of the original b3
Step17: As promised, we see that Independent has moved the batch shape into the event shape
Step18: An alternate way to get the same result would be to compute probabilities using b3 and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing)
Step19: Independent allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary.
Fun facts
Step20: We see batch_shape = (3,), so there are three independent multivariate normals, and event_shape = (2,), so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.
Sampling works
Step21: Since batch_shape = (3,) and event_shape = (2,), we pass a tensor of shape (3, 2) to log_prob
Step22: Broadcasting, aka Why Is This So Confusing?
Abstracting out what we've done so far, every distribution has a batch shape B and an event shape E. Let BE be the concatenation of the event shapes
Step23: Let's turn to the two-dimensional multivariate normal nd (parameters changed for illustrative purposes)
Step24: log_prob "expects" an argument with shape (2,), but it will accept any argument that broadcasts against this shape
Step25: But we can pass in "more" examples, and evaluate all their log_prob's at once
Step26: Perhaps less appealingly, we can broadcast over the event dimensions
Step27: Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.
Now let's look at the three coins example again
Step28: Here, using broadcasting to represent the probability that each coin comes up heads is quite intuitive
Step29: (Compare this to b3.prob([1., 1., 1.]), which we would have used back where b3 was introduced.)
Now suppose we want to know, for each coin, the probability the coin comes up heads and the probability it comes up tails. We could imagine trying | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
try:
tf.compat.v1.enable_eager_execution()
except ValueError:
pass
import matplotlib.pyplot as plt
Explanation: TensorFlow Distributions: A Gentle Introduction
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/TensorFlow_Distributions_Tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out Understanding TensorFlow Distributions Shapes. If you have any questions about the material here, don't hesitate to contact (or join) the TensorFlow Probability mailing list. We're happy to help.
Before we start, we need to import the appropriate libraries. Our overall library is tensorflow_probability. By convention, we generally refer to the distributions library as tfd.
Tensorflow Eager is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.
End of explanation
n = tfd.Normal(loc=0., scale=1.)
n
Explanation: Basic Univariate Distributions
Let's dive right in and create a normal distribution:
End of explanation
n.sample()
Explanation: We can draw a sample from it:
End of explanation
n.sample(3)
Explanation: We can draw multiple samples:
End of explanation
n.log_prob(0.)
Explanation: We can evaluate a log prob:
End of explanation
n.log_prob([0., 2., 4.])
Explanation: We can evaluate multiple log probabilities:
End of explanation
b = tfd.Bernoulli(probs=0.7)
b
b.sample()
b.sample(8)
b.log_prob(1)
b.log_prob([1, 0, 1, 0])
Explanation: We have a wide range of distributions. Let's try a Bernoulli:
End of explanation
nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
Explanation: Multivariate Distributions
We'll create a multivariate normal with a diagonal covariance:
End of explanation
tfd.Normal(loc=0., scale=1.)
Explanation: Comparing this to the univariate normal we created earlier, what's different?
End of explanation
nd.sample()
nd.sample(5)
nd.log_prob([0., 10])
Explanation: We see that the univariate normal has an event_shape of (), indicating it's a scalar distribution. The multivariate normal has an event_shape of 2, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory)) of this distribution is two-dimensional.
Sampling works just as before:
End of explanation
nd = tfd.MultivariateNormalFullCovariance(
loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")
plt.show()
Explanation: Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.
End of explanation
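# A quick empirical check (illustrative, not part of the original tutorial): the
# sample covariance of the 200 draws above should be close to the
# covariance_matrix we passed to MultivariateNormalFullCovariance.
import numpy as np
print(np.cov(data.numpy(), rowvar=False))  # roughly [[1., .7], [.7, 1.]]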
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
Explanation: Multiple Distributions
Our first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single Distribution object:
End of explanation
b3.sample()
b3.sample(6)
Explanation: It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python Distribution object. The three distributions cannot be manipulated individually. Note how the batch_shape is (3,), indicating a batch of three distributions, and the event_shape is (), indicating the individual distributions have a univariate event space.
If we call sample, we get a sample from all three:
End of explanation
b3.prob([1, 1, 0])
Explanation: If we call prob, (this has the same shape semantics as log_prob; we use prob with these small Bernoulli examples for clarity, although log_prob is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:
End of explanation
b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)
b3_joint
Explanation: Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a for loop (at least in Eager mode, in TF graph mode you'd need a tf.while loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators.
Using Independent To Aggregate Batches to Events
In the previous section, we created b3, a single Distribution object that represented three coin flips. If we called b3.prob on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.
Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, prob on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.
How do we accomplish this? We use a "higher-order" distribution called Independent, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:
End of explanation
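# For comparison (an illustrative sketch, not part of the original flow): the
# same three per-coin probabilities computed one distribution at a time with a
# Python loop, versus the single vectorized call on the batched b3.
looped = [tfd.Bernoulli(probs=p).prob(v)
          for p, v in zip([.3, .5, .7], [1., 1., 0.])]
print(looped)                  # three scalar tensors: 0.3, 0.5, 0.3
print(b3.prob([1., 1., 0.]))   # the same values from one batched call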
b3
Explanation: Compare the shape to that of the original b3:
End of explanation
b3_joint.prob([1, 1, 0])
Explanation: As promised, we see that Independent has moved the batch shape into the event shape: b3_joint is a single distribution (batch_shape = ()) over a three-dimensional event space (event_shape = (3,)).
Let's check the semantics:
End of explanation
tf.reduce_prod(b3.prob([1, 1, 0]))
Explanation: An alternate way to get the same result would be to compute probabilities using b3 and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):
End of explanation
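# The log-space version of the same reduction (illustrative): summing the three
# per-coin log probabilities from b3 gives the joint log probability that
# b3_joint reports directly.
print(tf.reduce_sum(b3.log_prob([1, 1, 0])))
print(b3_joint.log_prob([1, 1, 0]))  # same value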
nd_batch = tfd.MultivariateNormalFullCovariance(
loc = [[0., 0.], [1., 1.], [2., 2.]],
covariance_matrix = [[[1., .1], [.1, 1.]],
[[1., .3], [.3, 1.]],
[[1., .5], [.5, 1.]]])
nd_batch
Explanation: Independent allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary.
Fun facts:
b3.sample and b3_joint.sample have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using Independent shows up when computing probabilities, not when sampling.
MultivariateNormalDiag could be trivially implemented using the scalar Normal and Independent distributions (it isn't actually implemented this way, but it could be).
Batches of Multivariate Distributions
Let's create a batch of three full-covariance two-dimensional multivariate normals:
End of explanation
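# A sketch of the second "fun fact" above (the loc/scale values here are
# arbitrary choices): a diagonal-covariance multivariate normal built from
# scalar Normals plus Independent agrees with MultivariateNormalDiag.
loc, scale = [0., 10.], [1., 4.]
mvn_diag = tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale)
mvn_indep = tfd.Independent(tfd.Normal(loc=loc, scale=scale),
                            reinterpreted_batch_ndims=1)
print(mvn_diag.log_prob([1., 8.]), mvn_indep.log_prob([1., 8.]))  # equal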
nd_batch.sample(4)
Explanation: We see batch_shape = (3,), so there are three independent multivariate normals, and event_shape = (2,), so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.
Sampling works:
End of explanation
nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])
Explanation: Since batch_shape = (3,) and event_shape = (2,), we pass a tensor of shape (3, 2) to log_prob:
End of explanation
n = tfd.Normal(loc=0., scale=1.)
n
n.log_prob(0.)
n.log_prob([0.])
n.log_prob([[0., 1.], [-1., 2.]])
Explanation: Broadcasting, aka Why Is This So Confusing?
Abstracting out what we've done so far, every distribution has a batch shape B and an event shape E. Let BE be the concatenation of the batch and event shapes:
For the univariate scalar distributions n and b, BE = ().
For the two-dimensional multivariate normal nd, BE = (2).
For both b3 and b3_joint, BE = (3).
For the batch of multivariate normals nd_batch, BE = (3, 2).
The "evaluation rules" we've been using so far are:
Sample with no argument returns a tensor with shape BE; sampling with a scalar n returns an "n by BE" tensor.
prob and log_prob take a tensor of shape BE and return a result of shape B.
The actual "evaluation rule" for prob and log_prob is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that the argument to log_prob must be broadcastable against BE; any "extra" dimensions are preserved in the output.
Let's explore the implications. For the univariate normal n, BE = (), so log_prob expects a scalar. If we pass log_prob a tensor with non-empty shape, those show up as batch dimensions in the output:
End of explanation
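# Illustrating the shape rules above with nd_batch, which has batch_shape (3,)
# and event_shape (2,), so BE = (3, 2); the printed shapes follow directly from
# those rules (this check is an addition, not part of the original tutorial).
print(nd_batch.sample(5).shape)                      # (5, 3, 2): "n by BE"
print(nd_batch.log_prob(tf.zeros([7, 3, 2])).shape)  # (7, 3): extra dims kept in front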
nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])
nd
Explanation: Let's turn to the two-dimensional multivariate normal nd (parameters changed for illustrative purposes):
End of explanation
nd.log_prob([0., 0.])
Explanation: log_prob "expects" an argument with shape (2,), but it will accept any argument that broadcasts against this shape:
End of explanation
nd.log_prob([[0., 0.],
[1., 1.],
[2., 2.]])
Explanation: But we can pass in "more" examples, and evaluate all their log_prob's at once:
End of explanation
nd.log_prob([0.])
nd.log_prob([[0.], [1.], [2.]])
Explanation: Perhaps less appealingly, we can broadcast over the event dimensions:
End of explanation
b3 = tfd.Bernoulli(probs=[.3, .5, .7])
Explanation: Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.
Now let's look at the three coins example again:
End of explanation
b3.prob([1])
Explanation: Here, using broadcasting to represent the probability that each coin comes up heads is quite intuitive:
End of explanation
b3.prob([[0], [1]])
Explanation: (Compare this to b3.prob([1., 1., 1.]), which we would have used back where b3 was introduced.)
Now suppose we want to know, for each coin, the probability the coin comes up heads and the probability it comes up tails. We could imagine trying:
b3.log_prob([0, 1])
Unfortunately, this produces an error with a long and not-very-readable stack trace. b3 has BE = (3), so we must pass b3.prob something broadcastable against (3,). [0, 1] has shape (2), so it doesn't broadcast and creates an error. Instead, we have to say:
End of explanation |
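# In shape terms (illustrative recap): [[0], [1]] has shape (2, 1), which does
# broadcast against BE = (3,), so we get a (2, 3) table of probabilities, with
# row 0 the per-coin tails probabilities and row 1 the per-coin heads probabilities.
print(b3.prob([[0], [1]]).shape)  # (2, 3)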
10,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ensemble Learning
<!-- new sections -->
<!-- Ensemble learning -->
<!-- - Machine Learning Flach, Ch.11 -->
<!-- - Machine Learning Mohri, pp.135- -->
<!-- - Data Mining Witten, Ch. 8 -->
Step1: With the exception of the random forest, we have so far considered machine
learning models as stand-alone entities. Combinations of models that jointly
produce a classification are known as ensembles. There are two main
methodologies that create ensembles
Step2: The training data and the resulting perceptron separating boundary
are shown in Figure. The circles and crosses are the
sampled training data and the gray separating line is the perceptron's
separating boundary between the two categories. The black squares are those
elements in the training data that the perceptron mis-classified. Because the
perceptron can only produce linear separating boundaries, and the boundary in
this case is non-linear, the perceptron makes mistakes near where the
boundary curves. The next step is to see how bagging can
improve upon this by using multiple perceptrons.
<!-- dom
Step3: <!-- dom | Python Code:
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
from pprint import pprint
import textwrap
import sys, re
def displ(x):
if x is None: return
print ("\n".join(textwrap.wrap(repr(x).replace(' ',''),width=80)))
sys.displayhook=displ
Explanation: Ensemble Learning
<!-- new sections -->
<!-- Ensemble learning -->
<!-- - Machine Learning Flach, Ch.11 -->
<!-- - Machine Learning Mohri, pp.135- -->
<!-- - Data Mining Witten, Ch. 8 -->
End of explanation
from sklearn.linear_model import Perceptron
p=Perceptron()
p
Explanation: With the exception of the random forest, we have so far considered machine
learning models as stand-alone entities. Combinations of models that jointly
produce a classification are known as ensembles. There are two main
methodologies that create ensembles: bagging and boosting.
Bagging
Bagging refers to bootstrap aggregating, where bootstrap here is the same as we
discussed in the section ch:stats:sec:boot. Basically,
we resample the data with replacement and then train a classifier on the newly
sampled data. Then, we combine the outputs of each of the individual
classifiers using a majority-voting scheme (for discrete outputs) or a weighted
average (for continuous outputs). This combination is particularly effective
for models that are easily influenced by a single data element. The resampling
process means that these elements cannot appear in every bootstrapped
training set so that some of the models will not suffer these effects. This
makes the so-computed combination of outputs less volatile. Thus, bagging
helps reduce the collective variance of individual high-variance models.
To get a sense of bagging, let's suppose we have a two-dimensional plane that
is partitioned into two regions with the following boundary: $y=-x+x^2$.
Pairs of $(x_i,y_i)$ points above this boundary are labeled one and points
below are labeled zero. Figure shows the two regions
with the nonlinear separating boundary as the black curved line.
<!-- dom:FIGURE: [fig-machine_learning/ensemble_001.png, width=500 frac=0.75]
Two regions in the plane are separated by a nonlinear boundary. The training
data is sampled from this plane. The objective is to correctly classify the so-
sampled data. <div id="fig:ensemble_001"></div> -->
<!-- begin figure -->
<div id="fig:ensemble_001"></div>
<p>Two regions in the plane are separated by a nonlinear boundary. The training
data is sampled from this plane. The objective is to correctly classify the so-
sampled data.</p>
<img src="fig-machine_learning/ensemble_001.png" width=500>
<!-- end figure -->
The problem is to take samples from each of these regions and
classify them correctly using a perceptron. A perceptron is the simplest
possible linear classifier that finds a line in the plane to separate two
purported categories. Because the separating boundary is nonlinear, there is no
way that the perceptron can completely solve this problem. The following code
sets up the perceptron available in Scikit-learn.
End of explanation
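# Illustrative sketch (the sample size, sampling box, and seed below are
# arbitrary choices, not the text's original data-generation code): draw
# training points from the plane, label them against the y = -x + x**2 boundary
# described above, and fit the perceptron to see how well a linear boundary can do.
import numpy as np
np.random.seed(101)
X = np.random.uniform(-1, 2, size=(300, 2))              # (x, y) sample points
labels = (X[:, 1] > -X[:, 0] + X[:, 0]**2).astype(int)   # 1 above the boundary, 0 below
p.fit(X, labels)
print("perceptron training accuracy:", p.score(X, labels))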
from sklearn.ensemble import BaggingClassifier
bp = BaggingClassifier(Perceptron(),max_samples=0.50,n_estimators=3)
bp
Explanation: The training data and the resulting perceptron separating boundary
are shown in Figure. The circles and crosses are the
sampled training data and the gray separating line is the perceptron's
separating boundary between the two categories. The black squares are those
elements in the training data that the perceptron mis-classified. Because the
perceptron can only produce linear separating boundaries, and the boundary in
this case is non-linear, the perceptron makes mistakes near where the
boundary curves. The next step is to see how bagging can
improve upon this by using multiple perceptrons.
<!-- dom:FIGURE: [fig-machine_learning/ensemble_002.png, width=500 frac=0.75]
The perceptron finds the best linear boundary between the two classes. <div
id="fig:ensemble_002"></div> -->
<!-- begin figure -->
<div id="fig:ensemble_002"></div>
<p>The perceptron finds the best linear boundary between the two classes.</p>
<img src="fig-machine_learning/ensemble_002.png" width=500>
<!-- end figure -->
The following code sets up the bagging classifier in Scikit-learn. Here we
select only three perceptrons. Figure shows each of the
three individual classifiers and the final bagged classifier in the panel on the
bottom right. As before, the black circles indicate misclassifications in the
training data. Joint classifications are determined by majority voting.
End of explanation
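# Illustrative only: fit the bagging ensemble on the synthetic X, labels drawn
# in the sketch above, and build a second ensemble with oob_score=True to get a
# rough out-of-bag estimate of out-of-sample accuracy (discussed further below).
bp.fit(X, labels)
print("bagged training accuracy:", bp.score(X, labels))
bp_oob = BaggingClassifier(Perceptron(), max_samples=0.50,
                           n_estimators=3, oob_score=True).fit(X, labels)
print("out-of-bag accuracy estimate:", bp_oob.oob_score_)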
from sklearn.ensemble import AdaBoostClassifier
clf=AdaBoostClassifier(Perceptron(),n_estimators=3,
algorithm='SAMME',
learning_rate=0.5)
clf
Explanation: <!-- dom:FIGURE: [fig-machine_learning/ensemble_003.png, width=500 frac=0.85]
Each panel with the single gray line is one of the perceptrons used for the
ensemble bagging classifier on the lower right. <div
id="fig:ensemble_003"></div> -->
<!-- begin figure -->
<div id="fig:ensemble_003"></div>
<p>Each panel with the single gray line is one of the perceptrons used for the
ensemble bagging classifier on the lower right.</p>
<img src="fig-machine_learning/ensemble_003.png" width=500>
<!-- end figure -->
The BaggingClassifier can estimate its own out-of-sample error if passed the
oob_score=True flag upon construction. This keeps track of which samples were
used for training and which were not, and then estimates the out-of-sample
error using those samples that were unused in training. The max_samples
keyword argument specifies the number of items from the training set to use for
the base classifier. The smaller the max_samples used in the bagging
classifier, the better the out-of-sample error estimate, but at the cost of
worse in-sample performance. Of course, this depends on the overall number of
samples and the degrees-of-freedom in each individual classifier. The
VC-dimension surfaces again!
Boosting
As we discussed, bagging is particularly effective for individual high-variance
classifiers because the final majority-vote tends to smooth out the individual
classifiers and produce a more stable collaborative solution. On the other
hand, boosting is particularly effective for high-bias classifiers that are
slow to adjust to new data. On the one hand, boosting is similar to bagging in
that it uses a majority-voting (or averaging for numeric prediction) process at
the end; and it also combines individual classifiers of the same type. On the
other hand, boosting is serially iterative, whereas the individual classifiers
in bagging can be trained in parallel. Boosting uses the misclassifications of
prior iterations to influence the training of the next iterative classifier by
weighting those misclassifications more heavily in subsequent steps. This means
that, at every step, boosting focuses more and more on specific
misclassifications up to that point, letting the prior classifications
be carried by earlier iterations.
The primary implementation for boosting in Scikit-learn is the Adaptive
Boosting (AdaBoost) algorithm, which does classification
(AdaBoostClassifier) and regression (AdaBoostRegressor). The first step in
the basic AdaBoost algorithm is to initialize the weights over each of the
training set indices, $D_0(i)=1/n$ where there are $n$ elements in the
training set. Note that this creates a discrete uniform distribution over the
indices, not over the training data $\lbrace (x_i,y_i) \rbrace$ itself. In
other words, if there are repeated elements in the training data, then each
gets its own weight. The next step is to train the base classifier $h_k$ and
record the classification error at the $k^{th}$ iteration, $\epsilon_k$. Two
factors can next be calculated using $\epsilon_k$,
$$
\alpha_k = \frac{1}{2}\log \frac{1-\epsilon_k}{\epsilon_k}
$$
and the normalization factor,
$$
Z_k = 2 \sqrt{ \epsilon_k (1- \epsilon_k) }
$$
For the next step, the weights over the training data are updated as
in the following,
$$
D_{k+1}(i) = \frac{1}{Z_k} D_k(i)\exp{(-\alpha_k y_i h_k(x_i))}
$$
The final classification result is assembled using the $\alpha_k$
factors, $g = \operatorname{sgn}\left(\sum_{k} \alpha_k h_k\right)$.
To re-do the problem above using boosting with perceptrons, we set up the
AdaBoost classifier in the following,
End of explanation |
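# A small numpy sketch of one AdaBoost weight update, following the formulas
# above; y and h_pred are made-up +/-1 labels and base-classifier predictions,
# used only to make the arithmetic concrete.
import numpy as np
y = np.array([1, 1, -1, -1, 1])
h_pred = np.array([1, -1, -1, 1, 1])           # base classifier h_k output
D = np.ones(len(y)) / len(y)                   # D_0(i) = 1/n
eps = D[h_pred != y].sum()                     # weighted training error
alpha = 0.5 * np.log((1 - eps) / eps)
Z = 2 * np.sqrt(eps * (1 - eps))
D = D * np.exp(-alpha * y * h_pred) / Z        # D_{k+1}(i)
print(alpha, D, D.sum())                       # the updated weights still sum to one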
10,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Construct the metadata just based on the headers
Step1: Add the frame types as we've always done
Step2: Find the unique configurations. The unique configurations are found by matching data in the headers between frames. The keywords that are matched are set by the configuration_keys in each spectrograph class.
Step3: Once you have the unique configurations, the next step is to identify each frame with a configuration. The configuration is added to the table in the configuration column. The end result of unique_configurations and set_configurations can be done in a single step.
Step4: Then you identify each frame within a given configuration as being part of a calibration group. Calibration frames can be assigned to multiple calibration groups, but science frames can only be assigned to a single calibration group. I.e., the calibration group of a science frame selects which calibration frames to use for it. At the moment, some frame types can be assigned to calibrations independent of their configuration. These are defined by the global_frames keyword below. The calibration group is currently an integer or list of integers in the calib column of the table.
At the moment, frames are assigned to a calibration group just based on the configuration, but the logic will become more complicated. I.e., impose a time limit on when arcs should be used with given science frames.
Step5: This matches calibration frames to science frames using the existing approach.
Step6: The following is just a check to compare what one would get using the calib group to assign the science ID. In the test cases with a single configuration the two are identical. | Python Code:
fitstbl = PypeItMetaData('keck_lris_red', file_list=file_list, background_index=True)
Explanation: Construct the metadata just based on the headers
End of explanation
_ = fitstbl.get_frame_types(flag_unknown=True)
Explanation: Add the frame types as we've always done
End of explanation
cfgs = fitstbl.unique_configurations(ignore_frames=['bias', 'dark'])
cfgs
Explanation: Find the unique configurations. The unique configurations are found by matching data in the headers between frames. The keywords that are matched are set by the configuration_keys in each spectrograph class.
End of explanation
fitstbl.set_configurations(cfgs)
Explanation: Once you have the unique configurations, the next step is to identify each frame with a configuration. The configuration is added to the table in the configuration column. The end result of unique_configurations and set_configurations can be done in a single step.
End of explanation
fitstbl.set_calibration_groups(global_frames=['bias', 'dark'])
fitstbl[fitstbl.find_calib_group(0)]
Explanation: Then you identify each frame within a given configuration as being part of a calibration group. Calibration frames can be assigned to multiple calibration groups, but science frames can only be assigned to a single calibration group. I.e., the calibration group of a science frame selects which calibration frames to use for it. At the moment, some frame types can be assigned to calibrations independent of their configuration. These are defined by the global_frames keyword below. The calibration group is currently an integer or list of integers in the calib column of the table.
At the moment, frames are assigned to a calibration group just based on the configuration, but the logic will become more complicated. I.e., impose a time limit on when arcs should be used with given science frames.
End of explanation
fitstbl.match_to_science(fitstbl.par['calibrations'], fitstbl.par['rdx']['calwin'], fitstbl.par['fluxcalib'], setup=True)
Explanation: This matches calibration frames to science frames using the existing approach.
End of explanation
fitstbl.calib_to_science()
fitstbl.get_setup(0)
fitstbl.write_sorted('test.sorted')
fitstbl.write_setups('test.setups', ignore=['None'])
fitstbl.write_calib('test.calib')
fitstbl.master_key(0)
Explanation: The following is just a check to compare what one would get using the calib group to assign the science ID. In the test cases with a single configuration the two are identical.
End of explanation |
10,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import required packages
Step1: Download and Prep NASA's Turbofan Engine Degradation Simulation (PHM08 Challenge) Data Set
Step2: Read training data into a DataFrame.
Step3: Create training data with 30 random engines
Step4: Read test data into a DataFrame.
Step5: Connect to SAS Viya CAS Engine
Step6: Load pandas DataFrames into CAS
Step7: Check details of loaded tables.
Step8: Import SVDD action set
Step9: Create SVDD model for training data
Step10: Score SVDD astore against scoring data
Step11: Save SVDD astore for use in SAS Event Stream Processing | Python Code:
import os
import matplotlib.pyplot as plt
import pandas as pd
import swat # SAS Viya Python interface
%matplotlib inline
Explanation: Import required packages
End of explanation
DATA_URL = 'https://ti.arc.nasa.gov/m/project/prognostic-repository/Challenge_Data.zip'
DATA_DIR = '.'
train_tsv = os.path.join(DATA_DIR, 'train.txt')
test_tsv = os.path.join(DATA_DIR, 'test.txt')
if not os.path.isfile(train_tsv) or not os.path.isfile(test_tsv):
import zipfile
from six.moves import urllib
try:
filename, headers = urllib.request.urlretrieve(DATA_URL)
with zipfile.ZipFile(filename, 'r') as data_zip:
data_zip.extract('train.txt', DATA_DIR)
data_zip.extract('test.txt', DATA_DIR)
finally:
urllib.request.urlcleanup()
Explanation: Download and Prep NASA's Turbofan Engine Degradation Simulation (PHM08 Challenge) Data Set
End of explanation
# Create list of x1-x24
x = ['x%s' % i for i in range(1, 25)]
df = pd.read_table(train_tsv, delim_whitespace=True, names=['engine', 'cycle'] + x)
df.head()
Explanation: Read training data into a DataFrame.
End of explanation
train = df[df['engine'].isin([7, 28, 32, 38, 40, 51, 65, 84, 90, 95, 99, 107,
120, 124, 135, 137, 138, 148, 151, 160, 166, 178,
182, 188, 197, 199, 200, 207, 210, 211])]
# Keep first 50 observations per engine to train SVDD
train = train[train['cycle'] <= 50]
train['index'] = train.index
train.tail()
Explanation: Create training data with 30 random engines
End of explanation
df = pd.read_table(test_tsv, delim_whitespace=True, names=['engine', 'cycle'] + x)
# create a scoring data set with 10 random engines from the test data set
df['index'] = df.index
score = df[df['engine'].isin([1, 8, 22, 53, 63, 86, 102, 158, 170, 202])]
score.tail()
Explanation: Read test data into a DataFrame.
End of explanation
s = swat.CAS('localhost', 5570)
Explanation: Connect to SAS Viya CAS Engine
End of explanation
train_tbl = s.upload_frame(train, casout=dict(name='train', replace=True))
score_tbl = s.upload_frame(score, casout=dict(name='score', replace=True))
Explanation: Load pandas DataFrames into CAS
End of explanation
s.tableinfo()
Explanation: Check details of loaded tables.
End of explanation
s.loadactionset('svdd')
Explanation: Import SVDD action set
End of explanation
# Run svdd.svddTrain action set on training data
ysvdd_state = s.CASTable('ysvddstate', replace=True)
state_s = s.CASTable('state_s', replace=True)
train_tbl.svdd.svddtrain(gauss=11,
solver='actset',
inputs=x,
savestate=ysvdd_state,
output=dict(casout=state_s),
id='index')
sv = state_s.to_frame()
sv
Explanation: Create SVDD model for training data
End of explanation
# Load astore action set
s.loadactionset('astore')
# Score resulting SVDD astore (ysvddstate) against the scoring data (score) and output results (svddscored)
svdd_scored = s.CASTable('svddscored', replace=True)
score_tbl.astore.score(rstore=ysvdd_state, out=svdd_scored)
# Create local dataframe of scored data to plot using Matplotlib
output = svdd_scored.to_frame()
output.head()
# Add SVDD scored values to original score DataFrame for plotting purposes
df = score.merge(output, how='left')
df.head()
df = df.loc[df['engine'] < 150]
for index, group in df.groupby('engine'):
group.plot(x='cycle', y='_SVDDDISTANCE_', title=index, label='engine', figsize=(15, 4))
plt.show()
Explanation: Score SVDD astore against scoring data
End of explanation
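# Illustrative follow-up (the cutoff value is an arbitrary assumption, not part
# of the original analysis): flag cycles whose SVDD distance exceeds a fixed
# threshold and report the first flagged cycle per engine.
threshold = 0.2
flagged = df[df['_SVDDDISTANCE_'] > threshold]
print(flagged.groupby('engine')['cycle'].min())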
# Download SVDD astore for use in SAS Event Stream Processing (ESP)
results = s.astore.download(rstore=ysvdd_state)
# Check details of loaded data
s.tableinfo()
Explanation: Save SVDD astore for use in SAS Event Stream Processing
End of explanation |
10,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 5a
Step1: Import necessary libraries.
Step2: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
Step3: Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation. If not, go back to lab 1b_prepare_data_babyweight to create them.
Step4: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
Train on Cloud AI Platform
Training on Cloud AI Platform requires
Step7: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Create trainer module's task.py to hold hyperparameter argparsing code.
The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the parser module. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguments for the following variables:
- nnsize which represents the hidden layer sizes to use for DNN feature columns
- nembeds which represents the embedding size of a cross of n key real-valued parameters
- train_examples which represents the number of examples (in thousands) to run the training job
- eval_steps which represents the positive number of steps for which to evaluate model
Be sure to include a default value for the parsed arguments above and specify the type if necessary.
Step16: In the same way we can write to the file model.py the model that we developed in the previous notebooks.
Create trainer module's model.py to hold Keras model code.
To create our model.py, we'll use the code we wrote for the Wide & Deep model. Look back at your 4c_keras_wide_and_deep_babyweight.ipynb notebook and copy/paste the necessary code from that notebook into its place in the cell below.
Step17: Train locally
After moving the code to a package, make sure it works as a standalone. Note, we incorporated the --train_examples flag so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change it so that we can train on all the data. Even for this subset, this takes about 3 minutes in which you won't see any output ...
Run trainer module package locally.
We can run a very small training job over a single file with a small batch size, 1 epoch, 1 train example, and 1 eval step.
Step18: Training on Cloud AI Platform
Now that we see everything is working locally, it's time to train on the cloud!
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service
Step19: The training job should complete within 10 to 15 minutes. You do not need to wait for this training job to finish before moving forward in the notebook, but will need a trained model to complete our next lab.
Lab Summary
Step20: Build and push container image to repo
Now that we have created our Dockerfile, we need to build and push our container image to our project's container repo. To do this, we'll create a small shell script that we can call from the bash.
Step21: Note
Step22: Kindly ignore the incompatibility errors.
Test container locally
Before we submit our training job to Cloud AI Platform, let's make sure our container that we just built and pushed to our project's container repo works perfectly. We can do that by calling our container in bash and passing the necessary user_args for our task.py's parser.
Step23: Train on Cloud AI Platform
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
Step24: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was
Step25: Repeat training
This time with tuned parameters for batch_size and nembeds. Note that your best results may differ from below. So be sure to fill yours in! | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip3 install cloudml-hypertune
Explanation: LAB 5a: Training Keras model on Cloud AI Platform
Learning Objectives
Set up the environment
Create trainer module's task.py to hold hyperparameter argparsing code
Create trainer module's model.py to hold Keras model code
Run trainer module package locally
Submit training job to Cloud AI Platform
Submit hyperparameter tuning job to Cloud AI Platform
Introduction
After having tested our training pipeline both locally and in the cloud on a subset of the data, we can submit another (much larger) training job to the cloud. It is also a good idea to run a hyperparameter tuning job to make sure we have optimized the hyperparameters of our model.
In this notebook, we'll be training our Keras model at scale using Cloud AI Platform.
In this lab, we will set up the environment, create the trainer module's task.py to hold hyperparameter argparsing code, create the trainer module's model.py to hold Keras model code, run the trainer module package locally, submit a training job to Cloud AI Platform, and submit a hyperparameter tuning job to Cloud AI Platform.
Set up environment variables and load necessary libraries
First we will install the cloudml-hypertune package on our local machine. This is the package which we will use to report hyperparameter tuning metrics to Cloud AI Platform. Installing the package will allow us to test our trainer package locally.
End of explanation
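# Optional local sanity check (not part of the original lab): the hypertune
# package we just installed exposes the metric-reporting call used later in
# model.py; the tag, value, and step below are dummy values.
import hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="rmse", metric_value=1.0, global_step=1)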
import os
Explanation: Import necessary libraries.
End of explanation
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
# TODO: Change these to try this notebook out
PROJECT = "your-project-name-here" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
os.environ["PYTHONVERSION"] = "3.7"
%%bash
gcloud config set project ${PROJECT}
gcloud config set compute/region ${REGION}
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*000000000000.csv
Explanation: Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation
%%bash
mkdir -p babyweight/trainer
touch babyweight/trainer/__init__.py
Explanation: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
Train on Cloud AI Platform
Training on Cloud AI Platform requires:
* Making the code a Python package
* Using gcloud to submit the training code to Cloud AI Platform
Ensure that the Cloud AI Platform API is enabled by going to this link.
Move code into a Python package
A Python package is simply a collection of one or more .py files along with an __init__.py file to identify the containing directory as a package. The __init__.py sometimes contains initialization code but for our purposes an empty file suffices.
The bash command touch creates an empty file in the specified location; the directory babyweight should already exist.
End of explanation
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from trainer import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
parser.add_argument(
"--train_data_path",
help="GCS location of training data",
required=True
)
parser.add_argument(
"--eval_data_path",
help="GCS location of evaluation data",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes for DNN -- provide space-separated layers",
nargs="+",
type=int,
default=[128, 32, 4]
)
parser.add_argument(
"--nembeds",
help="Embedding size of a cross of n key real-valued parameters",
type=int,
default=3
)
parser.add_argument(
"--num_epochs",
help="Number of epochs to train the model.",
type=int,
default=10
)
parser.add_argument(
"--train_examples",
        help="""Number of examples (in thousands) to run the training job over.
        If this is more than actual # of examples available, it cycles through
        them. So specifying 1000 here when you have only 100k examples makes
        this 10 epochs.""",
type=int,
default=5000
)
parser.add_argument(
"--eval_steps",
        help="""Positive number of steps for which to evaluate model. Default
        to None, which means to evaluate until input_fn raises an end-of-input
        exception""",
type=int,
default=None
)
# Parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# Unused args provided by service
arguments.pop("job_dir", None)
arguments.pop("job-dir", None)
# Modify some arguments
arguments["train_examples"] *= 1000
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
arguments["output_dir"] = os.path.join(
arguments["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(arguments)
Explanation: We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Create trainer module's task.py to hold hyperparameter argparsing code.
The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using the parser module. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguments for the following variables:
- nnsize which represents the hidden layer sizes to use for DNN feature columns
- nembeds which represents the embedding size of a cross of n key real-valued parameters
- train_examples which represents the number of examples (in thousands) to run the training job
- eval_steps which represents the positive number of steps for which to evaluate model
Be sure to include a default value for the parsed arguments above and specify the type if necessary.
End of explanation
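# A small, separate illustration (not written to task.py): how argparse handles
# the nnsize flag's nargs="+" and the defaults; the argv list here is made up.
import argparse
demo = argparse.ArgumentParser()
demo.add_argument("--nnsize", nargs="+", type=int, default=[128, 32, 4])
demo.add_argument("--nembeds", type=int, default=3)
print(demo.parse_args(["--nnsize", "64", "32", "8"]))  # nnsize=[64, 32, 8], nembeds=3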
%%writefile babyweight/trainer/model.py
import datetime
import os
import shutil
import numpy as np
import tensorflow as tf
import hypertune
# Determine CSV, label, and key columns
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.
    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode='eval'):
    """Loads dataset using the tf.data API from CSV files.
    Args:
        pattern: str, file pattern to glob into list of files.
        batch_size: int, the number of examples per batch.
        mode: 'train' | 'eval' to determine if training or evaluating.
    Returns:
        `Dataset` object.
    """
print("mode = {}".format(mode))
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == 'train':
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
def create_input_layers():
    """Creates dictionary of input layers for each feature.
    Returns:
        Dictionary of `tf.keras.layers.Input` layers for each feature.
    """
deep_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]
}
wide_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]
}
inputs = {**wide_inputs, **deep_inputs}
return inputs
def categorical_fc(name, values):
    """Helper function to wrap categorical feature by indicator column.
    Args:
        name: str, name of feature.
        values: list, list of strings of categorical values.
    Returns:
        Categorical and indicator column of categorical feature.
    """
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
ind_column = tf.feature_column.indicator_column(
categorical_column=cat_column)
return cat_column, ind_column
def create_feature_columns(nembeds):
    """Creates wide and deep dictionaries of feature columns from inputs.
    Args:
        nembeds: int, number of dimensions to embed categorical column down to.
    Returns:
        Wide and deep dictionaries of feature columns.
    """
deep_fc = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
wide_fc = {}
is_male, wide_fc["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
plurality, wide_fc["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
# Bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["mother_age"],
boundaries=np.arange(15, 45, 1).tolist())
wide_fc["age_buckets"] = tf.feature_column.indicator_column(
categorical_column=age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["gestation_weeks"],
boundaries=np.arange(17, 47, 1).tolist())
wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
categorical_column=gestation_buckets)
# Cross all the wide columns, have to do the crossing before we one-hot
crossed = tf.feature_column.crossed_column(
keys=[age_buckets, gestation_buckets],
hash_bucket_size=1000)
deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
categorical_column=crossed, dimension=nembeds)
return wide_fc, deep_fc
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
    """Creates model architecture and returns outputs.
    Args:
        wide_inputs: Dense tensor used as inputs to wide side of model.
        deep_inputs: Dense tensor used as inputs to deep side of model.
        dnn_hidden_units: List of integers where length is number of hidden
            layers and ith element is the number of neurons at ith layer.
    Returns:
        Dense tensor output from the model.
    """
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(
units=numnodes,
activation="relu",
name="dnn_{}".format(layerno+1))(deep)
deep_out = deep
# Linear model for the wide side
wide_out = tf.keras.layers.Dense(
units=10, activation="relu", name="linear")(wide_inputs)
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(both)
return output
def rmse(y_true, y_pred):
    """Calculates RMSE evaluation metric.
    Args:
        y_true: tensor, true labels.
        y_pred: tensor, predicted labels.
    Returns:
        Tensor with value of RMSE between true and predicted labels.
    """
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
    """Builds wide and deep model using Keras Functional API.
    Returns:
        `tf.keras.models.Model` object.
    """
# Create input layers
inputs = create_input_layers()
# Create feature columns for both wide and deep
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=wide_fc.values(), name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=deep_fc.values(), name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
def train_and_evaluate(args):
model = build_wide_deep_model(args["nnsize"], args["nembeds"])
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
trainds = load_dataset(
args["train_data_path"],
args["batch_size"],
'train')
evalds = load_dataset(
args["eval_data_path"], 1000, 'eval')
if args["eval_steps"]:
evalds = evalds.take(count=args["eval_steps"])
num_batches = args["batch_size"] * args["num_epochs"]
steps_per_epoch = args["train_examples"] // num_batches
checkpoint_path = os.path.join(args["output_dir"], "checkpoints/babyweight")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path, verbose=1, save_weights_only=True)
history = model.fit(
trainds,
validation_data=evalds,
epochs=args["num_epochs"],
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
EXPORT_PATH = os.path.join(
args["output_dir"], datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
hp_metric = history.history['val_rmse'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=args['num_epochs'])
print("Exported trained model to {}".format(EXPORT_PATH))
Explanation: In the same way we can write to the file model.py the model that we developed in the previous notebooks.
Create trainer module's model.py to hold Keras model code.
To create our model.py, we'll use the code we wrote for the Wide & Deep model. Look back at your 4c_keras_wide_and_deep_babyweight.ipynb notebook and copy/paste the necessary code from that notebook into its place in the cell below.
End of explanation
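# Optional sanity check (an addition, assuming the notebook's working directory
# holds the babyweight/ package we just wrote): import the model code and make
# sure the wide-and-deep Keras model builds before training.
import sys
sys.path.insert(0, "babyweight")
from trainer.model import build_wide_deep_model
check_model = build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3)
print(check_model.summary())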
%%bash
OUTDIR=babyweight_trained
rm -rf ${OUTDIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight
python3 -m trainer.task \
--job-dir=./tmp \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--batch_size=10 \
--num_epochs=1 \
--train_examples=1 \
--eval_steps=1
Explanation: Train locally
After moving the code to a package, make sure it works as a standalone. Note, we incorporated the --train_examples flag so that we don't try to train on the entire dataset while we are developing our pipeline. Once we are sure that everything is working on a subset, we can change it so that we can train on all the data. Even for this subset, this takes about 3 minutes in which you won't see any output ...
Run trainer module package locally.
We can run a very small training job over a single file with a small batch size, 1 epoch, 1 train example, and 1 eval step.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
gcloud ai-platform jobs submit training ${JOBID} \
--region=${REGION} \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=${OUTDIR} \
--staging-bucket=gs://${BUCKET} \
--master-machine-type=n1-standard-8 \
--scale-tier=CUSTOM \
--runtime-version=${TFVERSION} \
--python-version=${PYTHONVERSION} \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=10000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
Explanation: Training on Cloud AI Platform
Now that we see everything is working locally, it's time to train on the cloud!
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:
- jobname: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- job-dir: A GCS location to upload the Python package to
- runtime-version: Version of TF to use.
- python-version: Version of Python to use. Currently only Python 3.7 is supported for TF 2.1.
- region: Cloud region to train in. See here for supported AI Platform Training Service regions
Below the -- \ we add in the arguments for our task.py file.
End of explanation
%%writefile babyweight/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /babyweight/trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tensorflow==2.1 && \
pip3 install --upgrade --quiet cloudml-hypertune
ENV PYTHONPATH ${PYTHONPATH}:/babyweight
ENTRYPOINT ["python3", "babyweight/trainer/task.py"]
Explanation: The training job should complete within 10 to 15 minutes. You do not need to wait for this training job to finish before moving forward in the notebook, but will need a trained model to complete our next lab.
Lab Summary:
In this lab, we set up the environment, created the trainer module's task.py to hold hyperparameter argparsing code, created the trainer module's model.py to hold Keras model code, ran the trainer module package locally, submitted a training job to Cloud AI Platform, and submitted a hyperparameter tuning job to Cloud AI Platform.
Extra: Training on Cloud AI Platform using containers
Though we can directly submit TensorFlow 2.1 models using the gcloud ai-platform jobs submit training command, we can also submit containerized models for training. One advantage of using this approach is that we can use frameworks not natively supported by Cloud AI Platform for training and have more control over the environment in which the training loop is running.
The rest of this notebook is dedicated to using the containerized model approach.
Create Dockerfile
We need to create a container with everything we need to be able to run our model. This includes our trainer module package, python3, as well as the libraries we use such as the most up to date TensorFlow 2.0 version.
End of explanation
%%writefile babyweight/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t ${IMAGE_URI} ./
echo "Pushing $IMAGE_URI"
docker push ${IMAGE_URI}
Explanation: Build and push container image to repo
Now that we have created our Dockerfile, we need to build and push our container image to our project's container repo. To do this, we'll create a small shell script that we can call from the bash.
End of explanation
%%bash
cd babyweight
bash push_docker.sh
Explanation: Note: If you get a permissions/stat error when running push_docker.sh from Notebooks, do it from CloudShell:
Open CloudShell on the GCP Console
* git clone https://github.com/GoogleCloudPlatform/training-data-analyst
* cd training-data-analyst/courses/machine_learning/deepdive2/structured/solutions/babyweight
* bash push_docker.sh
This step takes 5-10 minutes to run.
End of explanation
%%bash
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
export IMAGE_URI=gcr.io/${PROJECT_ID}/${IMAGE_REPO_NAME}
echo "Running $IMAGE_URI"
docker run ${IMAGE_URI} \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=gs://${BUCKET}/babyweight/trained_model \
--batch_size=10 \
--num_epochs=10 \
--train_examples=1 \
--eval_steps=1
Explanation: Kindly ignore the incompatibility errors.
Test container locally
Before we submit our training job to Cloud AI Platform, let's make sure our container that we just built and pushed to our project's container repo works perfectly. We can do that by calling our container in bash and passing the necessary user_args for our task.py's parser.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
# gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBID} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
Explanation: Train on Cloud AI Platform
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
End of explanation
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-8 \
--scale-tier=CUSTOM \
--config=hyperparam.yaml \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=5000 \
--eval_steps=100
Explanation: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was:
<pre>
Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
</pre>
The final RMSE was 1.03 pounds.
Hyperparameter tuning
All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml and pass it as --config hyperparam.yaml.
This step will take <b>up to 2 hours</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
Note that this is the same hyperparam.yaml file as above, but included here for convenience.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBNAME}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBNAME} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=20000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
Explanation: Repeat training
This time with tuned parameters for batch_size and nembeds. Note that your best results may differ from below. So be sure to fill yours in!
End of explanation |
10,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using a random forest for demographic model selection
In Schrider and Kern (2017) we give a toy example of demographic model selection via supervised machine learning in Figure Box 1. Following a discussion on twitter, Vince Buffalo had the great idea of our providing a simple example of supervised ML in population genetics using a jupyter notebook; this notebook aims to serve that purpose by showing you exactly how we produced that figure in our paper
Preliminaries
The road map here will be to 1) do some simulation of three demographic models, 2) to train a classifier to distinguish among those models, 3) test that classifier with new simulation data, and 4) to graphically present how well our trained classifier works.
To do this we will use coalescent simulations as implemented in Dick Hudson's well known ms software and for the ML side of things we will use the scikit-learn package. Let's start by installing these dependencies (if you don't have them installed already)
Install, and compile ms
We have put a copy of the ms tarball in this repo, so the following should work upon cloning
Step1: Install scikit-learn
If you use anaconda, you may already have these modules installed, but if not you can install with either of the following
Step2: or if you don't use conda, you can use pip to install scikit-learn with
Step3: Step 1
Step4: Step 2
Step5: That's it! The classifier is trained. This Random Forest classifer used 100 decision trees in its ensemble, a pretty large number considering that we are only using two summary stats to represent our data. Nevertheless it trains on the data very, very quickly.
Confession
Step6: Above we can see which regions of our feature space are assigned to each class
Step7: Looks pretty good. But can we make it better? Well a simple way might be to increase the number of features (i.e. summary statistics) we use as input. Let's give that a whirl using all of the output from Hudson's sample_stats | Python Code:
#untar and compile ms and sample_stats
!tar zxf ms.tar.gz; cd msdir; gcc -o ms ms.c streec.c rand1.c -lm; gcc -o sample_stats sample_stats.c tajd.c -lm
#I get three compiler warnings from ms, but everything should be fine
#now I'll just move the programs into the current working dir
!mv msdir/ms . ; mv msdir/sample_stats .;
Explanation: Using a random forest for demographic model selection
In Schrider and Kern (2017) we give a toy example of demographic model selection via supervised machine learning in Figure Box 1. Following a discussion on twitter, Vince Buffalo had the great idea of our providing a simple example of supervised ML in population genetics using a jupyter notebook; this notebook aims to serve that purpose by showing you exactly how we produced that figure in our paper
Preliminaries
The road map here will be to 1) do some simulation of three demographic models, 2) to train a classifier to distinguish among those models, 3) test that classifier with new simulation data, and 4) to graphically present how well our trained classifier works.
To do this we will use coalescent simulations as implemented in Dick Hudson's well known ms software and for the ML side of things we will use the scikit-learn package. Let's start by installing these dependencies (if you don't have them installed already)
Install, and compile ms
We have put a copy of the ms tarball in this repo, so the following should work upon cloning
End of explanation
!conda install scikit-learn --yes
Explanation: Install scikit-learn
If you use anaconda, you may already have these modules installed, but if not you can install with either of the following
End of explanation
!pip install -U scikit-learn
Explanation: or if you don't use conda, you can use pip to install scikit-learn with
End of explanation
#simulate under the equilibrium model
!./ms 20 2000 -t 100 -r 100 10000 | ./sample_stats > equilibrium.msOut.stats
#simulate under the contraction model
!./ms 20 2000 -t 100 -r 100 10000 -en 0 1 0.5 -en 0.2 1 1 | ./sample_stats > contraction.msOut.stats
#simulate under the growth model
!./ms 20 2000 -t 100 -r 100 10000 -en 0.2 1 0.5 | ./sample_stats > growth.msOut.stats
#now lets suck up the data columns we want for each of these files, and create one big training set; we will use numpy for this
# note that we are only using two columns of the data- these correspond to segSites and Fay & Wu's H
import numpy as np
X1 = np.loadtxt("equilibrium.msOut.stats",usecols=(3,9))
X2 = np.loadtxt("contraction.msOut.stats",usecols=(3,9))
X3 = np.loadtxt("growth.msOut.stats",usecols=(3,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
#the last step in this process will be to shuffle the data, and then split it into a training set and a testing set
#the testing set will NOT be used during training, and will allow us to check how well the classifier is doing
#scikit-learn has a very convenient function for doing this shuffle and split operation
#
# will will keep out 10% of the data for testing
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)
Explanation: Step 1: create a training set and a testing set
We will create a training set using simulations from three different demographic models: equilibrium population size, instantaneous population growth, and instantaneous population contraction. As you'll see this is really just a toy example because we will perform classification based on data from a single locus; in practice this would be ill-advised and you would want to use data from many loci simultaneously.
So let's do some simulation using ms and summarize those simulations using the sample_stats program that Hudson provides. Ultimately we will only use two summary stats for classification, but one could use many more. Each of these simulations should take a few seconds to run.
End of explanation
from sklearn.ensemble import RandomForestClassifier
rfClf = RandomForestClassifier(n_estimators=100,n_jobs=10)
clf = rfClf.fit(X_train, Y_train)
Explanation: Step 2: train our classifier and visualize decision surface
Now that we have a training and testing set ready to go, we can move on to training our classifier. For this example we will use a random forest classifier (Breiman 2001). This is all implemented in scikit-learn and so the code is very brief.
End of explanation
from sklearn.preprocessing import normalize
#These two functions (taken from scikit-learn.org) plot the decision boundaries for a classifier.
def plot_contours(ax, clf, xx, yy, **params):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def make_meshgrid(x, y, h=.05):
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
#Let's do the plotting
import matplotlib.pyplot as plt
fig,ax= plt.subplots(1,1)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1, h=0.2)
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
# plotting only a subset of our data to keep things from getting too cluttered
ax.scatter(X_test[:200, 0], X_test[:200, 1], c=Y_test[:200], cmap=plt.cm.coolwarm, edgecolors='k')
ax.set_xlabel(r"$\theta_{w}$", fontsize=14)
ax.set_ylabel(r"Fay and Wu's $H$", fontsize=14)
ax.set_xticks(())
ax.set_yticks(())
ax.set_title("Classifier decision surface", fontsize=14)
plt.show()
Explanation: That's it! The classifier is trained. This Random Forest classifier used 100 decision trees in its ensemble, a pretty large number considering that we are only using two summary stats to represent our data. Nevertheless it trains on the data very, very quickly.
Confession: the real reason we are using only two summary statistics right here is because it makes it really easy to visualize that classifier's decision surface: which regions of the feature space would be assigned to which class? Let's have a look!
(Note: I have increased the h argument for the call to make_meshgrid below, coarsening the contour plot in the interest of efficiency. Decreasing this will yield a smoother plot, but may take a while and use up a lot more memory. Adjust at your own risk!)
End of explanation
#here's the confusion matrix function
def makeConfusionMatrixHeatmap(data, title, trueClassOrderLs, predictedClassOrderLs, ax):
data = np.array(data)
data = normalize(data, axis=1, norm='l1')
heatmap = ax.pcolor(data, cmap=plt.cm.Blues, vmin=0.0, vmax=1.0)
for i in range(len(predictedClassOrderLs)):
for j in reversed(range(len(trueClassOrderLs))):
val = 100*data[j, i]
if val > 50:
c = '0.9'
else:
c = 'black'
ax.text(i + 0.5, j + 0.5, '%.2f%%' % val, horizontalalignment='center', verticalalignment='center', color=c, fontsize=9)
cbar = plt.colorbar(heatmap, cmap=plt.cm.Blues, ax=ax)
cbar.set_label("Fraction of simulations assigned to class", rotation=270, labelpad=20, fontsize=11)
# put the major ticks at the middle of each cell
ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False)
ax.axis('tight')
ax.set_title(title)
#labels
ax.set_xticklabels(predictedClassOrderLs, minor=False, fontsize=9, rotation=45)
ax.set_yticklabels(reversed(trueClassOrderLs), minor=False, fontsize=9)
ax.set_xlabel("Predicted class")
ax.set_ylabel("True class")
#now the actual work
#first get the predictions
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
classOrderLs=['equil','contraction','growth']
#now do the plotting
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
Explanation: Above we can see which regions of our feature space are assigned to each class: dark blue shaded areas will be classified as Equilibrium, faint blue as Contraction, and red as Growth. Note the non-linear decision surface. Looks pretty cool! And also illustrates how this type of classifier might be useful for discriminating among classes that are difficult to linearly separate. Also plotted are a subset of our test examples, as dots colored according to their true class. Looks like we are doing pretty well but have a few misclassifications. Would be nice to quantify this somehow, which brings us to...
Step 3: benchmark our classifier
The last step of the process is to use our trained classifier to predict which demographic models our test data are drawn from. Recall that the classifier hasn't seen these test data so this should be a fair test of how well the classifier will perform on any new data we throw at it in the future. We will visualize performance using a confusion matrix.
End of explanation
X1 = np.loadtxt("equilibrium.msOut.stats",usecols=(1,3,5,7,9))
X2 = np.loadtxt("contraction.msOut.stats",usecols=(1,3,5,7,9))
X3 = np.loadtxt("growth.msOut.stats",usecols=(1,3,5,7,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)
rfClf = RandomForestClassifier(n_estimators=100,n_jobs=10)
clf = rfClf.fit(X_train, Y_train)
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
Explanation: Looks pretty good. But can we make it better? Well a simple way might be to increase the number of features (i.e. summary statistics) we use as input. Let's give that a whirl using all of the output from Hudson's sample_stats
End of explanation |
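As an optional aside (not part of the original analysis), you could also inspect which summary statistics the forest relies on most via the fitted model's feature_importances_ attribute; the statistic names below are my reading of the sample_stats output columns (1, 3, 5, 7, 9) and should be checked against your own output.
# Optional sketch: relative importance of the five summary statistics used above.
# Assumes clf is the forest fit just above on columns (1,3,5,7,9) of the sample_stats output.
stat_names = ['pi', 'segSites', 'TajimasD', 'thetaH', 'FayWuH']
for name, importance in zip(stat_names, clf.feature_importances_):
    print(name, importance)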
10,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
06 - Data Preparation and Advanced Model Evaluation
by Alejandro Correa Bahnsen
version 0.2, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Handling missing values
scikit-learn models expect that all values are numeric and hold meaning. Thus, missing values are not allowed by scikit-learn.
Step1: One possible strategy is to drop missing values
Step2: Sometimes a better strategy is to impute missing values
Step3: Another strategy would be to build a KNN model just to impute missing values. How would we do that?
If values are missing from a categorical feature, we could treat the missing values as another category. Why might that make sense?
How do we choose between all of these strategies?
Handling categorical features
How do we include a categorical feature in our model?
Ordered categories
Step4: How do we interpret the encoding for Embarked?
Why didn't we just encode Embarked using a single feature (C=0, Q=1, S=2)?
Does it matter which category we choose to define as the baseline?
Why do we only need two dummy variables for Embarked?
Step5: ROC curves and AUC
Step6: Besides allowing you to calculate AUC, seeing the ROC curve can help you to choose a threshold that balances sensitivity and specificity in a way that makes sense for the particular context.
Step7: If you use y_pred_class, it will interpret the zeros and ones as predicted probabilities of 0% and 100%.
Cross-validation
Review of model evaluation procedures
Motivation
Step8: A single train/test split gives an estimate that varies from run to run because of the intrinsic randomness in how the sets are selected
K-fold cross-validation
Split the dataset into K equal partitions (or "folds").
Use fold 1 as the testing set and the union of the other folds as the training set.
Calculate testing accuracy.
Repeat steps 2 and 3 K times, using a different fold as the testing set each time.
Use the average testing accuracy as the estimate of out-of-sample accuracy.
Diagram of 5-fold cross-validation
Step9: Dataset contains 25 observations (numbered 0 through 24)
5-fold cross-validation, thus it runs for 5 iterations
For each iteration, every observation is either in the training set or the testing set, but not both
Every observation is in the testing set exactly once
Step10: Comparing cross-validation to train/test split
Advantages of cross-validation
Step11: Now let's create a realization of this dataset
Step12: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit
Step13: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this
Step14: Now we'll use this to fit a quadratic curve to the data.
Step15: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
Step16: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively
Step17: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset
Step18: Now let's plot the validation curves
Step19: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data
Step20: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
Step21: Let's see what the learning curves look like for a linear model
Step22: This shows a typical learning curve
Step23: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
Step24: For an even more complex model, we still converge, but the convergence only happens for large amounts of training data.
So we see the following
Step25:
Step26: F1Score
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall | Python Code:
import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/titanic.csv.zip', 'r') as z:
f = z.open('titanic.csv')
titanic = pd.read_csv(f, sep=',', index_col=0)
titanic.head()
# check for missing values
titanic.isnull().sum()
Explanation: 06 - Data Preparation and Advanced Model Evaluation
by Alejandro Correa Bahnsen
version 0.2, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Kevin Markham
Handling missing values
scikit-learn models expect that all values are numeric and hold meaning. Thus, missing values are not allowed by scikit-learn.
End of explanation
# drop rows with any missing values
titanic.dropna().shape
# drop rows where Age is missing
titanic[titanic.Age.notnull()].shape
Explanation: One possible strategy is to drop missing values:
End of explanation
# mean Age
titanic.Age.mean()
# median Age
titanic.Age.median()
# most frequent Age
titanic.Age.mode()
# fill missing values for Age with the median age
titanic.Age.fillna(titanic.Age.median(), inplace=True)
Explanation: Sometimes a better strategy is to impute missing values:
End of explanation
titanic.head(10)
# encode Sex_Female feature
titanic['Sex_Female'] = titanic.Sex.map({'male':0, 'female':1})
# create a DataFrame of dummy variables for Embarked
embarked_dummies = pd.get_dummies(titanic.Embarked, prefix='Embarked')
embarked_dummies.drop(embarked_dummies.columns[0], axis=1, inplace=True)
# concatenate the original DataFrame and the dummy DataFrame
titanic = pd.concat([titanic, embarked_dummies], axis=1)
titanic.head(1)
Explanation: Another strategy would be to build a KNN model just to impute missing values. How would we do that?
If values are missing from a categorical feature, we could treat the missing values as another category. Why might that make sense?
How do we choose between all of these strategies?
Handling categorical features
How do we include a categorical feature in our model?
Ordered categories: transform them to sensible numeric values (example: small=1, medium=2, large=3)
Unordered categories: use dummy encoding (0/1)
End of explanation
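To make the KNN-imputation question above concrete, here is a minimal, hypothetical sketch. It is not part of the original notebook, the choice of Pclass and Fare as predictor columns is arbitrary, and note that Age has already been filled with the median in the cells above, so the missing-value mask is empty at this point.
# Hypothetical sketch of KNN-based imputation for Age (illustration only)
from sklearn.neighbors import KNeighborsRegressor
age_known = titanic[titanic.Age.notnull()]
age_missing = titanic[titanic.Age.isnull()]
knn_imputer = KNeighborsRegressor(n_neighbors=5)
knn_imputer.fit(age_known[['Pclass', 'Fare']], age_known.Age)
if len(age_missing) > 0:
    titanic.loc[titanic.Age.isnull(), 'Age'] = knn_imputer.predict(age_missing[['Pclass', 'Fare']])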
# define X and y
feature_cols = ['Pclass', 'Parch', 'Age', 'Sex_Female', 'Embarked_Q', 'Embarked_S']
X = titanic[feature_cols]
y = titanic.Survived
# train/test split
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# train a logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
Explanation: How do we interpret the encoding for Embarked?
Why didn't we just encode Embarked using a single feature (C=0, Q=1, S=2)?
Does it matter which category we choose to define as the baseline?
Why do we only need two dummy variables for Embarked?
End of explanation
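One way to see the answer to the last question (an optional aside, not in the original notebook): generate all three Embarked dummies and note that, for rows where Embarked is known, the columns always sum to 1, so any one of them is redundant.
# Illustrative aside: with all three dummies, any single column is determined by the other two
pd.get_dummies(titanic.Embarked, prefix='Embarked').head()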
# predict probability of survival
y_pred_prob = logreg.predict_proba(X_test)[:, 1]
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (8, 6)
plt.rcParams['font.size'] = 14
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
# calculate AUC
print(metrics.roc_auc_score(y_test, y_pred_prob))
Explanation: ROC curves and AUC
End of explanation
# histogram of predicted probabilities grouped by actual response value
df = pd.DataFrame({'probability':y_pred_prob, 'actual':y_test})
df.hist(column='probability', by='actual', sharex=True, sharey=True)
# ROC curve using y_pred_class - WRONG!
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_class)
plt.plot(fpr, tpr)
# AUC using y_pred_class - WRONG!
print(metrics.roc_auc_score(y_test, y_pred_class))
Explanation: Besides allowing you to calculate AUC, seeing the ROC curve can help you to choose a threshold that balances sensitivity and specificity in a way that makes sense for the particular context.
End of explanation
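As an illustrative follow-up (not in the original notebook), once you pick a threshold from the ROC curve you can binarize the predicted probabilities yourself rather than relying on the default 0.5 cutoff; the 0.3 below is a hypothetical choice.
# Illustrative sketch: apply a hand-picked threshold instead of the default 0.5
custom_threshold = 0.3   # hypothetical value read off the ROC curve above
y_pred_custom = (y_pred_prob > custom_threshold).astype(int)
print(metrics.confusion_matrix(y_test, y_pred_custom))
print(metrics.accuracy_score(y_test, y_pred_custom))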
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn import metrics
# define X and y
feature_cols = ['Pclass', 'Parch', 'Age', 'Sex_Female', 'Embarked_Q', 'Embarked_S']
X = titanic[feature_cols]
y = titanic.Survived
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
print(metrics.accuracy_score(y_test, y_pred_class))
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
print(metrics.accuracy_score(y_test, y_pred_class))
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
print(metrics.accuracy_score(y_test, y_pred_class))
Explanation: If you use y_pred_class, it will interpret the zeros and ones as predicted probabilities of 0% and 100%.
Cross-validation
Review of model evaluation procedures
Motivation: Need a way to choose between machine learning models
Goal is to estimate likely performance of a model on out-of-sample data
Initial idea: Train and test on the same data
But, maximizing training accuracy rewards overly complex models which overfit the training data
Alternative idea: Train/test split
Split the dataset into two pieces, so that the model can be trained and tested on different data
Testing accuracy is a better estimate than training accuracy of out-of-sample performance
But, it provides a high variance estimate since changing which observations happen to be in the testing set can significantly change testing accuracy
End of explanation
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.cross_validation import KFold
kf = KFold(25, n_folds=5, shuffle=False)
# print the contents of each training and testing set
print('{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations'))
for iteration, data in enumerate(kf, start=1):
print('{:^9} {} {:^25}'.format(str(iteration), str(data[0]), str(data[1])))
Explanation: A single train/test split gives an estimate that varies from run to run because of the intrinsic randomness in how the sets are selected
K-fold cross-validation
Split the dataset into K equal partitions (or "folds").
Use fold 1 as the testing set and the union of the other folds as the training set.
Calculate testing accuracy.
Repeat steps 2 and 3 K times, using a different fold as the testing set each time.
Use the average testing accuracy as the estimate of out-of-sample accuracy.
Diagram of 5-fold cross-validation:
End of explanation
# Create k-folds
kf = KFold(X.shape[0], n_folds=10, random_state=0)
results = []
for train_index, test_index in kf:
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
# train a logistic regression model
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
# calculate testing accuracy
results.append(metrics.accuracy_score(y_test, y_pred_class))
pd.Series(results).describe()
from sklearn.cross_validation import cross_val_score
logreg = LogisticRegression(C=1e9)
results = cross_val_score(logreg, X, y, cv=10, scoring='accuracy')
pd.Series(results).describe()
Explanation: Dataset contains 25 observations (numbered 0 through 24)
5-fold cross-validation, thus it runs for 5 iterations
For each iteration, every observation is either in the training set or the testing set, but not both
Every observation is in the testing set exactly once
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
Explanation: Comparing cross-validation to train/test split
Advantages of cross-validation:
More accurate estimate of out-of-sample accuracy
More "efficient" use of data (every observation is used for both training and testing)
Advantages of train/test split:
Runs K times faster than K-fold cross-validation
Simpler to examine the detailed results of the testing process
Cross-validation recommendations
K can be any number, but K=10 is generally recommended
For classification problems, stratified sampling is recommended for creating the folds
Each response class should be represented with equal proportions in each of the K folds
scikit-learn's cross_val_score function does this by default
Improvements to cross-validation
Repeated cross-validation
Repeat cross-validation multiple times (with different random splits of the data) and average the results
More reliable estimate of out-of-sample performance by reducing the variance associated with a single trial of cross-validation
Creating a hold-out set
"Hold out" a portion of the data before beginning the model building process
Locate the best model using cross-validation on the remaining data, and test it using the hold-out set
More reliable estimate of out-of-sample performance since hold-out set is truly out-of-sample
Feature engineering and selection within cross-validation iterations
Normally, feature engineering and selection occurs before cross-validation
Instead, perform all feature engineering and selection within each cross-validation iteration
More reliable estimate of out-of-sample performance since it better mimics the application of the model to out-of-sample data
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We'll create a simple nonlinear function that we'd like to fit
End of explanation
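Before turning to the regression example below, here is a hedged sketch of the repeated cross-validation idea described above; it is not part of the original notebook and simply reuses the Titanic X and y plus the KFold, cross_val_score, LogisticRegression, and pandas imports from earlier.
# Hedged sketch of repeated cross-validation (illustration only)
repeated_scores = []
for seed in range(5):
    shuffled_folds = KFold(X.shape[0], n_folds=10, shuffle=True, random_state=seed)
    scores = cross_val_score(LogisticRegression(C=1e9), X, y, cv=shuffled_folds, scoring='accuracy')
    repeated_scores.append(scores.mean())
pd.Series(repeated_scores).describe()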
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
np.random.seed(1)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
Explanation: Now let's create a realization of this dataset:
End of explanation
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
Explanation: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
Explanation: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this:
End of explanation
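The cell above only pulls in the imports; the convenience routine itself appears later in the validation-curve section as PolynomialRegression. As a hedged preview, a minimal version of that helper might look like this:
# Minimal sketch of the promised convenience routine (an equivalent helper is defined later)
from sklearn.pipeline import make_pipeline

def polynomial_regression(degree=2, **kwargs):
    # expand the inputs to polynomial features, then fit an ordinary linear regression
    return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))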
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
X_test_poly = PolynomialFeatures(degree=2).fit_transform(X_test)
model = LinearRegression()
model.fit(X_poly, y)
y_test = model.predict(X_test_poly)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X_poly), y)));
Explanation: Now we'll use this to fit a quadratic curve to the data.
End of explanation
X_poly = PolynomialFeatures(degree=30).fit_transform(X)
X_test_poly = PolynomialFeatures(degree=30).fit_transform(X_test)
model = LinearRegression()
model.fit(X_poly, y)
y_test = model.predict(X_test_poly)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X_poly), y)))
plt.ylim(-4, 14);
Explanation: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
End of explanation
from IPython.html.widgets import interact
def plot_fit(degree=1, Npts=50):
X, y = make_data(Npts, error=1)
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
X_poly = PolynomialFeatures(degree=degree).fit_transform(X)
X_test_poly = PolynomialFeatures(degree=degree).fit_transform(X_test)
model = LinearRegression()
model.fit(X_poly, y)
y_test = model.predict(X_test_poly)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X_poly), y)))
plt.ylim(-4, 14)
interact(plot_fit, degree=[1, 40], Npts=[2, 100]);
Explanation: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
End of explanation
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.learning_curve import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7,
scoring=rms_error)
Explanation: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset:
End of explanation
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
Explanation: Now let's plot the validation curves:
End of explanation
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
Explanation: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model.
As the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model.
Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.
Here's our best-fit model according to the cross-validation:
End of explanation
from sklearn.learning_curve import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 20)
N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
X, y, train_sizes, cv=5,
scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
Explanation: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
End of explanation
plot_learning_curve(1)
Explanation: Let's see what the learning curves look like for a linear model:
End of explanation
plot_learning_curve(3)
Explanation: This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting.
As you add more data points, the training error will tend to rise and the testing error will tend to fall, with both converging toward the same level (why do you think this is?)
It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
End of explanation
plot_learning_curve(10)
Explanation: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
End of explanation
import pandas as pd
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv'
titanic = pd.read_csv(url, index_col='PassengerId')
# fill missing values for Age with the median age
titanic.Age.fillna(titanic.Age.median(), inplace=True)
# define X and y
feature_cols = ['Pclass', 'Parch', 'Age']
X = titanic[feature_cols]
y = titanic.Survived
# train/test split
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# train a logistic regression model
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e9)
logreg.fit(X_train, y_train)
# make predictions for testing set
y_pred_class = logreg.predict(X_test)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred_class)
Explanation: For an even more complex model, we still converge, but the convergence only happens for large amounts of training data.
So we see the following:
you can cause the lines to converge by adding more points or by simplifying the model.
you can bring the convergence error down only by increasing the complexity of the model.
Thus these curves can give you hints about how you might improve a sub-optimal model. If the curves are already close together, you need more model complexity. If the curves are far apart, you might also improve the model by adding more data.
To make this more concrete, imagine some telescope data in which the results are not robust enough. You must think about whether to spend your valuable telescope time observing more objects to get a larger training set, or more attributes of each object in order to improve the model. The answer to this question has real consequences, and can be addressed using these metrics.
Recall, Precision and F1-Score
Intuitively, precision is the ability of the classifier not to label as positive a sample that is negative, and recall is the ability of the classifier to find all the positive samples.
The F-measure ($F_\beta$ and $F_1$ measures) can be interpreted as a weighted harmonic mean of the precision and recall. An $F_\beta$ measure reaches its best value at 1 and its worst score at 0.
With $\beta = 1$, $F_\beta$ and $F_1$ are equivalent, and the recall and the precision are equally important.
End of explanation
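For reference (an addition, not in the original notebook), with TP, FP, and FN denoting true positives, false positives, and false negatives:
$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}$$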
from sklearn.metrics import precision_score, recall_score, f1_score
print('precision_score ', precision_score(y_test, y_pred_class))
print('recall_score ', recall_score(y_test, y_pred_class))
Explanation:
End of explanation
print('f1_score ', f1_score(y_test, y_pred_class))
Explanation: F1Score
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:
$$F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.$$
End of explanation |
10,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 12
Step1: If you want to create a tuple with a single value, add a comma (,) after the value, but don’t add parenthesis
You can also use the built in function tuple
Step2: Most list operators (e.g., bracket and slice) work on tuples
If you try to modify one of the elements, you get an error
Step3: Tuple assignment
Often, you may want to swap the values of two variables
This is done conventionally using a temporary variable for storage
Step4: Python allows you to do it using a tuple assignment
Step5: On the left is a tuple of varaibles and on the right is a tuple of expressions
Note that the number on each side of the equality sign must be the same
The right side can be any kind of sequence (e.g., a string or list)
Step6: Tuples as return values
A function can only return one value
However, if we make that value a tuple, we can effectively return multiple values
For example, the divmod function takes two (2) arguments and retunrs a tuple of two (2) values, the quotient and remainder
Step7: Note the use of a tuple assignment
Here is how to build a function that returns a tuple
Step8: Note that min and max are built-in functions
Variable-length argument tuples
All the functions we have built and used required a specific number of arguments
You can use tuples to build functions that accept a variable number of arguments
Prepend the argument’s variable name with an * to do this
It is referred to as the gather operator
Step9: The complement is the scatter operator
It allows you to pass a sequence of values as individual arguments to the function
Step10: Lists and tuples
The zip function takes two or more sequences and "zips" them into a list of tuples
Step11: Essentially, it returns an iterator over a list of tuples
If the sequences aren't the same length, the result has the length of the shorter one
Step12: You can also use tuple assignment to get the individual values
Step13: You can combine zip, for and tuple assignment to traverse two (or more) sequences at the same time
Step14: If you need the indices, use the enumerate function
Step15: Dictionaries and tuples
The method items returns a tuple of key-value pairs from a dictionary
Step16: Remember that there is no particular ordering in a dictionary
You can also add a list of tuples to a dictionary using the update method
Using items, tuple assignment, and a for loop is an easy way to traverse the keys and values of a dictionary
Step17: Since tuples are immutable, you can even use them as keys in a dictionary | Python Code:
a_tuple = ( 'a', 'b', 'c', 'd', 'e' )
a_tuple = 'a', 'b', 'c', 'd', 'e'
a_tuple = 'a',
type( a_tuple )
Explanation: Chapter 12: Tuples
Contents
- Tuples are immutable
- Tuple assignment
- Tuples as return values
- Variable-length argument tuples
- Lists and tuples
- Dictionaries and tuples
- Comparing tuples
- Sequences of sequences
- Debugging
- Exercises
This notebook is based on "Think Python, 2Ed" by Allen B. Downey <br>
https://greenteapress.com/wp/think-python-2e/
Tuples are immutable
A tuple is a sequence of values
The values can be any type and are indexed by integers
They are similar to lists, except:
Tuples are immutable
Tuple values usually have different types (unlike lists, which generally hold a single type)
There are multiple ways to create a tuple
End of explanation
a_tuple = tuple()
print( a_tuple )
a_tuple = tuple( 'lupins' )
print( a_tuple )
Explanation: If you want to create a tuple with a single value, you must add a comma (,) after the value; a single value in parentheses without a trailing comma is not a tuple
You can also use the built in function tuple
End of explanation
a_tuple = ( 'a', 'b', 'c', 'd', 'e' )
print( a_tuple[1:3] )
print( a_tuple[:3] )
print( a_tuple[1:] )
# Uncomment to see error
# a_tuple[0] = 'z'
Explanation: Most list operators (e.g., bracket and slice) work on tuples
If you try to modify one of the elements, you get an error
End of explanation
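If you would rather see the error without stopping the cell, an optional sketch (not in the original chapter) is to catch it:
# Optional sketch: catch the TypeError raised by item assignment on a tuple
try:
    a_tuple[0] = 'z'
except TypeError as err:
    print( 'TypeError:', err )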
a = 5
b = 6
# Conventional value swap
temp = a
a = b
b = temp
print( a, b )
Explanation: Tuple assignment
Often, you may want to swap the values of two variables
This is done conventionally using a temporary variable for storage
End of explanation
a, b = b, a
print( a, b )
Explanation: Python allows you to do it using a tuple assignment
End of explanation
addr = 'monty@python.org'
uname, domain = addr.split( '@' )
print( uname )
print( domain )
Explanation: On the left is a tuple of variables and on the right is a tuple of expressions
Note that the number on each side of the equality sign must be the same
The right side can be any kind of sequence (e.g., a string or list)
End of explanation
quotient, remainder = divmod( 7, 3 )
print( quotient )
print( remainder )
Explanation: Tuples as return values
A function can only return one value
However, if we make that value a tuple, we can effectively return multiple values
For example, the divmod function takes two (2) arguments and returns a tuple of two (2) values, the quotient and remainder
End of explanation
def min_max( a_tuple ):
return min( a_tuple ), max( a_tuple )
numbers = ( 13, 7, 55, 42 )
min_num, max_num = min_max( numbers )
print( min_num )
print( max_num )
Explanation: Note the use of a tuple assignment
Here is how to build a function that returns a tuple
End of explanation
def printall( *args ):
print( args )
printall( 1 , 2.0 , '3' )
Explanation: Note that min and max are built-in functions
Variable-length argument tuples
All the functions we have built and used required a specific number of arguments
You can use tuples to build functions that accept a variable number of arguments
Prepend the argument’s variable name with an * to do this
It is referred to as the gather operator
End of explanation
a_tuple = ( 7, 3 )
# divmod( a_tuple ) # Uncomment to see error
divmod( *a_tuple )
Explanation: The complement is the scatter operator
It allows you to pass a sequence of values as individual arguments to the function
End of explanation
a_string = 'abc'
a_list = [ 0, 1, 2 ]
for element in zip( a_string, a_list ):
print( element )
Explanation: Lists and tuples
The zip function takes two or more sequences and "zips" them into a list of tuples
End of explanation
for element in zip( 'Peter', 'Tony' ):
print( element )
Explanation: Essentially, it returns an iterator over a list of tuples
If the sequences aren't the same length, the result has the length of the shorter one
End of explanation
a_list = [ ('a', 0), ('b', 1), ('c', 2) ]
for letter, number in a_list:
print( letter, number )
Explanation: You can also use tuple assignment to get the individual values
End of explanation
def has_match( tuple1, tuple2 ):
result = False
for x, y in zip( tuple1, tuple2 ):
if( x == y ):
result = True
return result
Explanation: You can combine zip, for and tuple assignment to traverse two (or more) sequences at the same time
End of explanation
for index , element in enumerate( 'abc' ):
print( index, element )
Explanation: If you need the indices, use the enumerate function
End of explanation
a_dict = { 'a': 0, 'b':1, 'c':2 }
dict_items = a_dict.items()
print( type( dict_items ) )
print( dict_items )
for element in dict_items:
print( element )
Explanation: Dictionaries and tuples
The method items returns a tuple of key-value pairs from a dictionary
End of explanation
for key, value in a_dict.items():
print( key, value )
Explanation: Remember that there is no particular ordering in a dictionary
You can also add a list of tuples to a dictionary using the update method
Using items, tuple assignment, and a for loop is an easy way to traverse the keys and values of a dictionary
End of explanation
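A small optional sketch (not in the original chapter) of the update call mentioned above:
# update accepts a list of (key, value) tuples
b_dict = dict( [ ('a', 0) ] )
b_dict.update( [ ('b', 1), ('c', 2) ] )
print( b_dict )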
directory = dict()
directory[ 'Smith', 'Bob' ] = '555-1234'
directory[ 'Doe', 'Jane' ] = '555-9786'
for last, first in directory:
print( first, last, directory[last, first] )
Explanation: Since tuples are immutable, you can even use them as keys in a dictionary
End of explanation |
10,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name
Data preparation using SparkSQL on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, YARN, SparkSQL, Kubeflow, pipelines, components
Summary
A Kubeflow Pipeline component to prepare data by submitting a SparkSql job on YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache SparkSql job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
Argument| Description | Optional | Data type| Accepted values| Default |
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Example pipeline that uses the component
Step4: Compile the pipeline
Step5: Submit the pipeline for execution | Python Code:
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
Explanation: Name
Data preparation using SparkSQL on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, YARN, SparkSQL, Kubeflow, pipelines, components
Summary
A Kubeflow Pipeline component to prepare data by submitting a SparkSql job on YARN to Cloud Dataproc.
Details
Intended use
Use the component to run an Apache SparkSql job as one preprocessing step in a Kubeflow Pipeline.
Runtime arguments
Argument| Description | Optional | Data type| Accepted values| Default |
:--- | :---------- | :--- | :------- | :------ | :------
project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No| GCPProjectID | | |
region | The Cloud Dataproc region to handle the request. | No | GCPRegion|
cluster_name | The name of the cluster to run the job. | No | String| | |
queries | The queries to execute the SparkSQL job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
query_file_uri | The HCFS URI of the script that contains the SparkSQL queries.| Yes | GCSPath | | None |
script_variables | Mapping of the query’s variable names to their values (equivalent to the SparkSQL command: SET name="value";).| Yes| Dict | | None |
sparksql_job | The payload of a SparkSqlJob. | Yes | Dict | | None |
job | The payload of a Dataproc job. | Yes | Dict | | None |
wait_interval | The number of seconds to pause between polling the operation. | Yes |Integer | | 30 |
Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this guide.
* Create a new cluster.
* The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
* Grant the Kubeflow user service account the role roles/dataproc.editor on the project.
Detailed Description
This component creates a SparkSQL job from the Dataproc submit job REST API.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
dataproc_submit_sparksql_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/submit_sparksql_job/component.yaml')
help(dataproc_submit_sparksql_job_op)
Explanation: Load the component using KFP SDK
End of explanation
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
DROP TABLE IF EXISTS natality_csv;
CREATE EXTERNAL TABLE natality_csv (
source_year BIGINT, year BIGINT, month BIGINT, day BIGINT, wday BIGINT,
state STRING, is_male BOOLEAN, child_race BIGINT, weight_pounds FLOAT,
plurality BIGINT, apgar_1min BIGINT, apgar_5min BIGINT,
mother_residence_state STRING, mother_race BIGINT, mother_age BIGINT,
gestation_weeks BIGINT, lmp STRING, mother_married BOOLEAN,
mother_birth_state STRING, cigarette_use BOOLEAN, cigarettes_per_day BIGINT,
alcohol_use BOOLEAN, drinks_per_week BIGINT, weight_gain_pounds BIGINT,
born_alive_alive BIGINT, born_alive_dead BIGINT, born_dead BIGINT,
ever_born BIGINT, father_race BIGINT, father_age BIGINT,
record_weight BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'gs://public-datasets/natality/csv';
SELECT * FROM natality_csv LIMIT 10;'''
EXPERIMENT_NAME = 'Dataproc - Submit SparkSQL Job'
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
Setup a Dataproc cluster
Create a new Dataproc cluster (or reuse an existing one) before running the sample code.
Prepare a SparkSQL job
Either put your SparkSQL queries in the queries list, or upload your SparkSQL queries into a file to a Cloud Storage bucket and then enter the Cloud Storage bucket’s path in query_file_uri. In this sample, we will use a hard-coded query in the queries list to select data from a public CSV file from Cloud Storage.
For more details about Spark SQL, see Spark SQL, DataFrames and Datasets Guide
Set sample parameters
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit SparkSQL job pipeline',
description='Dataproc submit SparkSQL job pipeline'
)
def dataproc_submit_sparksql_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
sparksql_job='',
job='',
wait_interval='30'
):
dataproc_submit_sparksql_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
sparksql_job=sparksql_job,
job=job,
wait_interval=wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = dataproc_submit_sparksql_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation |
10,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q 5.1
Step1: (a) The squared reconstruction error vs iteration number.
Step2: (b) Let us say that the number of assignments for a mean is the number of points assigned to that
mean. Plot the number of assignments for each center in descending order.
Step3: (c) Visualize the 16 centers that you learned, and display them in an order that corresponds to
the frequency in which they were assigned (if you use a grid, just describe the ordering).
Step4: Q 5.2
k = 250, MNIST data transformed by first 50 PCA components.
Step5: (a) The squared reconstruction error vs iteration number.
Step6: (b) Let us say that the number of assignments for a mean is the number of points assigned to that
mean. Plot the number of assignments for each center in descending order.
Step7: (c) Visualize 16 of these centers, chosen randomly. Display them in an order that
corresponds to the frequency in which they were assigned.
Step8: 5.2 Classification with K-means
(4 points) For K = 16, what are your training and test 0/1 losses?
Step9: (4 points) For K = 250, what are your training and test 0/1 losses?
Step10: Check out centers that appear to be a mix of digits.
for k = 16, center number 5 appears to be a blend of 5 and 8. What are its true labels? | Python Code:
km_16 = KMeans(k=16, train_X=X_train, train_y=y_train,
pca_obj=pca_training,
max_iter = 500,
test_X=X_test, test_y=y_test,
verbose=False)
km_16.run()
Explanation: Q 5.1:
k = 16, MNIST data transformed by first 50 PCA components.
End of explanation
km_16_reconstruction_error = km_16.plot_squared_reconstruction_error()
km_16_reconstruction_error.savefig('../figures/k_means/k16_reconstruction_error.pdf',
bbox_inches='tight')
km_16.results_df.tail(2)
km_16_reconstruction_error_normalized = \
km_16.plot_squared_reconstruction_error_normalized()
km_16_0_1_loss = km_16.plot_0_1_loss()
km_16_0_1_loss.savefig('../figures/k_means/k16_loss_01.pdf',
bbox_inches='tight')
Explanation: (a) The squared reconstruction error vs iteration number.
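For reference, the quantity plotted here can be sketched independently of the homework's KMeans class; this assumes X holds the (PCA-projected) data, centers the current cluster centers, and assignments each point's cluster index:
import numpy as np
def squared_reconstruction_error(X, centers, assignments):
    # Sum of squared distances from each point to its assigned center.
    return np.sum((X - centers[assignments]) ** 2)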
End of explanation
km_16_assignment_bars = km_16.plot_num_assignments_for_each_center()
km_16_assignment_bars.savefig('../figures/k_means/k16_assignment_bars.pdf',
bbox_inches='tight')
Explanation: (b) Let us say that the number of assignments for a mean is the number of points assigned to that
mean. Plot the number of assignments for each center in descending order.
End of explanation
km_16.visualize_center(km_16.center_coordinates[0])
km_16.visualize_n_centers(16, top=True)
Explanation: (c) Visualize the 16 centers that you learned, and display them in an order that corresponds to
the frequency in which they were assigned (if you use a grid, just describe the ordering).
End of explanation
km_250 = KMeans(k=250, train_X=X_train, train_y=y_train,
pca_obj=pca_training,
max_iter = 500,
test_X=X_test, test_y=y_test,
verbose=False)
km_250.run()
Explanation: Q 5.2
k = 250, MNIST data transformed by first 50 PCA components.
End of explanation
km_250_reconstruction = km_250.plot_squared_reconstruction_error()
km_250_reconstruction.savefig('../figures/k_means/k250_reconstruction_error.pdf',
bbox_inches='tight')
km_250.results_df.tail(1).T
Explanation: (a) The squared reconstruction error vs iteration number.
End of explanation
km_250_assignment_bars = km_250.plot_num_assignments_for_each_center()
km_250_assignment_bars.savefig('../figures/k_means/k250_assignment_bars.pdf',
bbox_inches='tight')
Explanation: (b) Let us say that the number of assignments for a mean is the number of points assigned to that
mean. Plot the number of assignments for each center in descending order.
End of explanation
km_250.visualize_n_centers(16, top=True)
km_250.visualize_n_centers(16, top=False)
# just for fun
km_250_loss_01 = km_250.plot_0_1_loss()
Explanation: (c) Visualize 16 of these centers, chosen randomly. Display them in an order that
corresponds to the frequency in which they were assigned.
End of explanation
import copy
def assess_test_data(self):
test_model = copy.copy(self)
test_model.X = self.test_X
test_model.y = self.test_y
test_model.N, test_model.d = test_model.X.shape
## model characteristics
#test_model.assignments = None # cluster assignment. Does not know about labels.
#test_model.predictions = None # label assignment. Does not know about cluster.
test_model.results_df = None
# todo: rename
test_model.results_df_cluster_assignment_counts = None
test_model.set_point_assignments()
test_model.set_centers_classes()
test_model.set_predicted_labels()
test_model.record_count_of_assignments_to_each_mean()
test_model.record_fit_statistics()
print("test results:")
print(test_model.results_df.T)
self.test_model = test_model
return test_model.results_df.T
km_16_test_results = assess_test_data(km_16)
print(km_16_test_results.to_latex())
Explanation: 5.2 Classification with K-means
(4 points) For K = 16, what are your training and test 0/1 losses?
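The KMeans class above hides how clusters are turned into label predictions. As a rough sketch of the idea (not the class's actual implementation), majority-vote cluster labels and the 0/1 loss could be computed with plain NumPy, assuming assignments and labels are integer arrays:
import numpy as np
def majority_label_per_cluster(assignments, labels, k):
    # For each cluster, take the most common true label among its members
    # (minlength=10 assumes the ten MNIST digit classes).
    return np.array([
        np.bincount(labels[assignments == c], minlength=10).argmax()
        for c in range(k)
    ])
def zero_one_loss(assignments, labels, cluster_labels):
    predictions = cluster_labels[assignments]
    return np.mean(predictions != labels)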
End of explanation
km_250.assess_test_data()
km_250_test_results = assess_test_data(km_250)
print("")
print(km_250_test_results.to_latex())
Explanation: (4 points) For K = 250, what are your training and test 0/1 losses?
End of explanation
km_16.verbose = True
km_16.clusters_by_num_in_cluster().head(6)
km_16.set_centers_classes()
Explanation: Check out centers that appear to be a mix of digits.
for k = 16, center number 5 appears to be a blend of 5 and 8. What are its true labels?
End of explanation |
10,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
We're going to improve the tweet_enricher.py script from Gnip-Analysis-Pipeline. We'll make a simplified version and create variations that improve it in various ways.
To enrich tweets, we will need
Step2: The tweet enricher
Step3: Enrichment classes
Step5: Convenience and simplification
remove JSON-(de)serialization
use only one enrichment class, derived from a base class
Step6: The problem
Step8: commentary
tweets are run sequentially
tweet dictionary is mutated by enrich
yield enriched tweet when ready
problems
there's no reason for subsequent enrichment operations to be blocked by a sleep call (imagine the sleep is a web request).
Threads
Despite what you may have heard about Python's Global Interpreter Lock (GIL), core library (read
Step10: commentary
run each enrich call in a separate thread
memory is shared between threads
execution takes roughly as long as the slowest enrichment
problems
we had to maintain lists of threads and enriched tweets
no limitation on number of threads
store all enriched tweets in memory
Step12: commentary
we limited the number of alive threads
we store no more than max_threads tweets in memory
problems
awkward queue length management
have to manage individual threads
Futures
A Future is an object that represents a deferred computation that may or may not have completed. That computation might be run on a separate thread but the Future itself doesn't care where the operation is run.
Step14: commentary
futures.as_completed yields results as execution finishes
this is better than sequentially join-ing threads, as above, because results are yielded as soon as they become available
problems
we haven't limited the number of concurrent workers
we can't yield any results until we've finished looping through the tweets
we have to maintain a dict of all tweets and futures
Change the enrichment protocol
Currently, classes follow the enrichment protocol by defining an enrich method with the appropriate signature. Nothing is specified regarding the return type or value.
We will now change this protocol such that the enrich method returns the enriched tweet dictionary, rather than relying on the mutability of the tweet dictionary passed to enrich. This allows us
Step16: commentary
as before, results are yielded as soon as they are ready
we keep no explicit record of all tweets
problems
we don't get any results until we've iterated through all tweets, so we still keep an implicit list of all tweets
we have no limitation on the number of concurrent Future objects
Step18: commentary
no explicit list of futures
callback function is run in the main thread
putting the print statement in the callback function allows the output to run asynchronously
problems
we haven't limited the number of queued operations in the executor
Step21: commentary
we can now safely stream tweets into the enrich function, without queueing every tweet in the executor
problems
the buffering of calls to submit is still a bit of a hack, and potentially slow
Step23: commentary
tweets are processed and returned fully asynchronously on a fixed pool of threads
the queue throttles the incoming tweet stream
problems
what about CPU-bound tasks? | Python Code:
import datetime
import json
import sys
import time
DT_FORMAT_STR = "%Y-%m-%dT%H:%M:%S.%f"
def stream_of_tweets(n=10):
# generator function to generate sequential tweets
for i in range(n):
time.sleep(0.01)
tweet = {
'body':'I am tweet #' + str(i),
'postedTime':datetime.datetime.now().strftime(DT_FORMAT_STR)
}
yield json.dumps(tweet)
for tweet in stream_of_tweets(2):
print(tweet)
for tweet in stream_of_tweets():
print(tweet)
Explanation: Introduction
We're going to improve the tweet_enricher.py script from Gnip-Analysis-Pipeline. We'll make a simplified version and create variations that improve it in various ways.
To enrich tweets, we will need:
* a tweet source
* a version of the enricher script (we'll use a function)
* one or more enrichment classes
See the README for an explanation of these components.
A stream of tweets
The enrichment step of the analysis pipeline is designed to work on a potentially infinite stream of tweets. A generator will simulate this nicely.
End of explanation
def enrich_tweets_1(istream,enrichment_class_list):
    '''simplified copy of tweet_enricher.py'''
enrichment_instance_list = [enrichment_class() for enrichment_class in enrichment_class_list]
for tweet_str in istream:
try:
tweet = json.loads(tweet_str)
except ValueError:
continue
for instance in enrichment_instance_list:
instance.enrich(tweet)
sys.stdout.write( json.dumps(tweet) + '\n')
Explanation: The tweet enricher
End of explanation
class TestEnrichment():
value = 42
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments']['TestEnrichment'] = self.value
class TestEnrichment2():
value = 48
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments']['TestEnrichment2'] = self.value
enrich_tweets_1(stream_of_tweets(5),[TestEnrichment,TestEnrichment2])
Explanation: Enrichment classes
End of explanation
DT_FORMAT_STR = "%Y-%m-%dT%H:%M:%S.%f"
def stream_of_tweets(n=10):
# generator function to generate sequential tweets
for i in range(n):
time.sleep(0.01)
tweet = {
'body':'I am tweet #' + str(i),
'postedTime':datetime.datetime.now().strftime(DT_FORMAT_STR)
}
yield tweet # <<-- this is the only change from above
class EnrichmentBase():
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments'][type(self).__name__] = self.enrichment_value(tweet)
class TestEnrichment(EnrichmentBase):
def enrichment_value(self,tweet):
return 42
def enrich_tweets_2(istream,enrichment_class,**kwargs):
    '''simplify `enrich_tweets_1`:
    only one enrichment
    generator function
    leave tweets as dict objects'''
enrichment_instance = enrichment_class()
for tweet in istream:
enrichment_instance.enrich(tweet)
sys.stdout.write( str(tweet) + '\n')
%%time
enrich_tweets_2(
istream=stream_of_tweets(5),
enrichment_class=TestEnrichment
)
Explanation: Convenience and simplification
remove JSON-(de)serialization
use only one enrichment class, derived from a base class
End of explanation
class SlowEnrichment(EnrichmentBase):
def enrichment_value(self,tweet):
# get the tweet number from body
# and sleep accordingly
seconds = int(tweet['body'][-1]) + 1
time.sleep(seconds)
return str(seconds) + ' second nap'
%%time
enrich_tweets_2(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment
)
Explanation: The problem
End of explanation
import threading
def enrich_tweets_3(istream,enrichment_class):
    '''use threads to run `enrich`'''
enrichment_instance = enrichment_class()
# we need to hang onto the threads spawned
threads = []
# ...and the tweets
enriched_tweets = []
for tweet in istream:
# run `enrich` in a new thread
thread = threading.Thread(
target=enrichment_instance.enrich,
args=(tweet,)
)
thread.start() # runs the function in a new thread
threads.append(thread)
enriched_tweets.append(tweet)
sys.stderr.write('submitted all tweets to threads' + '\n')
for thread in threads:
thread.join() # blocks until thread finishes
sys.stderr.write('all threads finished' + '\n')
for enriched_tweet in enriched_tweets:
sys.stdout.write( str(enriched_tweet) + '\n')
%%time
enrich_tweets_3(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment
)
Explanation: commentary
tweets are run sequentially
tweet dictionary is mutated by enrich
yield enriched tweet when ready
problems
there's no reason for subsequent enrichment operations to be blocked by a sleep call (imagine the sleep is a web request).
Threads
Despite what you may have heard about Python's Global Interpreter Lock (GIL), core library (read: those written in C) routines can release the GIL while they are waiting on the operating system. The time.sleep function is a good example of this, and the same extends to things like the requests package.
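A minimal illustration of this point (separate from the enricher code): five one-second sleeps on five threads finish in roughly one second because time.sleep releases the GIL.
import threading
import time
def nap(seconds):
    time.sleep(seconds)  # releases the GIL while waiting on the OS
start = time.time()
threads = [threading.Thread(target=nap, args=(1,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print('elapsed:', round(time.time() - start, 2))  # ~1 second, not 5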
End of explanation
def enrich_tweets_4(istream,enrichment_class,**kwargs):
    '''better use of threads'''
enrichment_instance = enrichment_class()
queue = [] # queue of (thread,tweet) tuples
max_threads = kwargs['max_threads']
for tweet in istream:
# run `enrich` in a new thread
thread = threading.Thread(
target=enrichment_instance.enrich,
args=(tweet,)
)
thread.start()
queue.append((thread,tweet))
# don't accept more tweets until a thread is free
while len(queue) >= max_threads:
# iterate through all threads
# when threads are dead, remove from queue and yield tweet
new_queue = []
for thread,tweet in queue:
if thread.is_alive():
new_queue.append((thread,tweet))
else:
sys.stdout.write( str(tweet) + '\n') # print enriched tweet
queue = new_queue
time.sleep(0.1)
sys.stderr.write('submitted all tweets to threads' + '\n')
# cleanup threads that didn't finish while iterating through tweets
for thread,tweet in queue:
thread.join()
time.sleep(0.01)
sys.stdout.write( str(tweet) + '\n')
%%time
enrich_tweets_4(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment,
max_threads = 1 # play with this number
)
Explanation: commentary
run each enrich call in a separate thread
memory is shared between threads
execution takes roughly as long as the slowest enrichment
problems
we had to maintain lists of threads and enriched tweets
no limitation on number of threads
store all enriched tweets in memory
End of explanation
from concurrent import futures
def enrich_tweets_5(istream,enrichment_class,**kwargs):
    '''use concurrent.futures instead of bare Threads'''
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=kwargs['max_workers']) as executor:
future_to_tweet = {}
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future_to_tweet[future] = tweet
sys.stderr.write('submitted all tweets as futures' + '\n')
for future in futures.as_completed(future_to_tweet):
sys.stdout.write( str(future_to_tweet[future]) + '\n')
%%time
enrich_tweets_5(
istream=stream_of_tweets(5),
enrichment_class=SlowEnrichment,
max_workers = 5
)
Explanation: commentary
we limited the number of alive threads
we store no more than max_threads tweets in memory
problems
awkward queue length management
have to manage individual threads
Futures
A Future is an object that represents a deferred computation that may or may not have completed. That computation might be run on a separate thread but the Future itself doesn't care where the operation is run.
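A tiny, self-contained sketch of that behaviour (illustrative only):
from concurrent import futures
import time
def slow_square(x):
    time.sleep(1)
    return x * x
with futures.ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(slow_square, 3)
    print(future.done())    # almost certainly False: the computation is still pending
    print(future.result())  # blocks until the result is ready -> 9
    print(future.done())    # True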
End of explanation
class NewEnrichmentBase():
def enrich(self,tweet):
if 'enrichments' not in tweet:
tweet['enrichments'] = {}
tweet['enrichments'][type(self).__name__] = self.enrichment_value(tweet)
return tweet # <<-- the only new piece
class NewSlowEnrichment(NewEnrichmentBase):
def enrichment_value(self,tweet):
# get the tweet number from body
# and sleep accordingly
seconds = int(tweet['body'].split('#')[-1]) + 1
if seconds > 9:
seconds = 1
time.sleep(seconds)
return str(seconds) + ' second nap'
from concurrent import futures
def enrich_tweets_6(istream,enrichment_class,**kwargs):
    '''new enrichment protocol'''
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=kwargs['max_workers']) as executor:
futures_list = [] # <<-- this is now just a list of futures
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
futures_list.append(future)
sys.stderr.write('submitted all tweets as futures' + '\n')
for future in futures.as_completed(futures_list):
sys.stdout.write( str(future.result()) + '\n')
%%time
enrich_tweets_6(
istream=stream_of_tweets(5),
enrichment_class=NewSlowEnrichment,
max_workers = 5
)
Explanation: commentary
futures.as_completed yields results as execution finishes
this is better than sequentially join-ing threads, as above, because results are yielded as soon as they become available
problems
we haven't limited the number of concurrent workers
we can't yield any results until we've finished looping through the tweets
we have to maintain a dict of all tweets and futures
Change the enrichment protocol
Currently, classes follow the enrichment protocol by defining an enrich method with the appropriate signature. Nothing is specified regarding the return type or value.
We will now change this protocol such that the enrich method returns the enriched tweet dictionary, rather than relying on the mutability of the tweet dictionary passed to enrich. This allows us:
* to "store" tweets in the Future and retrieve the enriched versions, obviating the need to maintain a record of all observed tweets
* to generalize the submission interface such that we don't rely on the assumption of shared memory between the threads
End of explanation
from concurrent import futures
def enrich_tweets_7(istream,enrichment_class,**kwargs):
def print_the_tweet(future):
sys.stdout.write( str(future.result()) + '\n')
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=kwargs['max_workers']) as executor:
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future.add_done_callback(print_the_tweet)
sys.stderr.write('submitted all tweets as futures' + '\n')
%%time
enrich_tweets_7(
istream=stream_of_tweets(5),
enrichment_class=NewSlowEnrichment,
max_workers = 5
)
Explanation: commentary
as before, results are yielded as soon as they are ready
we keep no explicit record of all tweets
problems
we don't get any results until we've iterated through all tweets, so we still keep an implicit list of all tweets
we have no limitation on the number of concurrent Future objects
End of explanation
from concurrent import futures
def enrich_tweets_8(istream,enrichment_class,**kwargs):
max_workers = kwargs['max_workers']
def print_the_tweet(future):
sys.stdout.write( str(future.result()) + '\n')
enrichment_instance = enrichment_class()
with futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
futures_list = []
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future.add_done_callback(print_the_tweet)
futures_list.append(future)
futures_list[:] = [future for future in futures_list if future.running()]
while len(futures_list) >= max_workers:
futures_list[:] = [future for future in futures_list if future.running()]
time.sleep(0.5)
sys.stderr.write('submitted all tweets as futures' + '\n')
%%time
enrich_tweets_8(
istream=stream_of_tweets(50),
enrichment_class=NewSlowEnrichment,
max_workers = 5
)
Explanation: commentary
no explicit list of futures
callback function is run in the main thread
putting the print statement in the callback function allows the output to run asynchronously
problems
we haven't limited the number of queued operations in the executor
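One way around this, previewed here as a minimal sketch before the next version uses it for real, is a bounded queue.Queue: put() blocks once the queue is full, which throttles the producer.
import queue
import threading
import time
q = queue.Queue(maxsize=2)  # put() blocks once two items are waiting
def consumer():
    while True:
        item = q.get()
        if item is None:  # sentinel to exit
            break
        time.sleep(0.5)   # pretend to do slow work
        q.task_done()
t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    q.put(i)              # blocks whenever the queue is full
q.put(None)
t.join()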
End of explanation
import queue
def enrich_tweets_9(istream,enrichment_class,**kwargs):
    '''use a pool of threads, each running a worker reading from a common queue'''
max_workers = kwargs['max_workers']
queue_size = kwargs['queue_size']
enrichment_instance = enrichment_class()
def worker():
        '''this function runs on new threads
        and reads from a common queue'''
time.sleep(0.5)
while True:
tweet = q.get()
if tweet is None: # this is the signal to exit
break
enriched_tweet = enrichment_instance.enrich(tweet)
sys.stdout.write(str(enriched_tweet) + '\n')
q.task_done()
time.sleep(0.1)
thread_pool = [threading.Thread(target=worker) for _ in range(max_workers)]
[thread.start() for thread in thread_pool]
q = queue.Queue(maxsize=queue_size)
for tweet in istream:
q.put(tweet)
sys.stderr.write('submitted all tweets to threads' + '\n')
# block until queue is empty
q.join()
# kill the threads
for _ in range(len(thread_pool)):
q.put(None)
for thread in thread_pool:
thread.join()
%%time
enrich_tweets_9(
istream=stream_of_tweets(10),
enrichment_class=NewSlowEnrichment,
max_workers = 1,
queue_size=5
)
Explanation: commentary
we can now safely stream tweets into the enrich function, without queueing every tweet in the executor
problems
the buffering of calls to submit is still a bit of a hack, and potentially slow
End of explanation
from random import randrange
import hashlib
class CPUBoundEnrichment(NewEnrichmentBase):
def enrichment_value(self,tweet):
# make a SHA-256 hash of random byte arrays
data = bytearray(randrange(256) for i in range(2**21))
algo = hashlib.new('sha256')
algo.update(data)
return algo.hexdigest()
def enrich_tweets_10(istream,enrichment_class,**kwargs):
    '''use a `ProcessPoolExecutor` to manage processes'''
max_workers=kwargs['max_workers']
executor_name=kwargs['executor_name']
def print_the_tweet(future):
sys.stdout.write( str(future.result()) + '\n')
enrichment_instance = enrichment_class()
with getattr(futures,executor_name)(max_workers=max_workers) as executor: # <- this is the only change from #8
futures_list = []
for tweet in istream:
# run `enrich` in a new thread, via a Future
future = executor.submit(
enrichment_instance.enrich,
tweet
)
future.add_done_callback(print_the_tweet)
futures_list.append(future)
# have to throttle with this hack
futures_list[:] = [future for future in futures_list if future.running()]
while len(futures_list) >= max_workers:
futures_list[:] = [future for future in futures_list if future.running()]
time.sleep(0.5)
sys.stderr.write('submitted all tweets as futures' + '\n')
%%time
enrich_tweets_10(
istream=stream_of_tweets(10),
enrichment_class=CPUBoundEnrichment,
executor_name='ProcessPoolExecutor',
max_workers = 2,
)
Explanation: commentary
tweets are processed and returned fully asynchronously on a fixed pool of threads
the queue throttles the incoming tweet stream
problems
what about CPU-bound tasks?
End of explanation |
10,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neural Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, its main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are
Step1: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
Step2: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
Step3: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
Step4: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail
Step5: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab
Step6: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include
Step7: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
Step8: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things
Step9: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we
Step10: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps. | Python Code:
%matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
Explanation: Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neural Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, its main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are:
While it has a Python interface, much of the low-level computation is implemented in C/C++, making it run much faster than a native Python solution.
Many common aspects of neural networks such as computation of various losses and a variety of modern optimization techniques are implemented as built in methods, reducing their implementation to a single line of code. This also helps in development and testing of various solutions, as you can easily swap in and try various solutions without having to write all the code by hand.
You can get more details about various popular machine learning libraries in this comparison.
To test our basic network, we will use the Boston Housing Dataset, which represents data on 506 houses in Boston across 14 different features. One of the features is the median value of the house in $1000’s. This is a common data set for testing regression performance of machine learning algorithms. All 14 features are continuous values, making them easy to plug directly into a neural network (after normalizing of course!). The common goal is to predict the median house value using the other columns as features.
This lab will conclude with two assignments:
Assignment 1 (at bottom of this notebook) asks you to experiment with various regularization parameters to reduce overfitting and improve the results of the model.
Assignment 2 (in the next notebook) asks you to take our regression problem and convert it to a classification problem.
Let's start by importing some of the libraries we will use for this tutorial:
End of explanation
#load data from scikit-learn library
dataset = load_boston()
#load data as DataFrame
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
#add target data to DataFrame
houses['target'] = dataset.target
#print first 5 entries of data
print houses.head()
Explanation: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
End of explanation
print dataset['DESCR']
Explanation: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
End of explanation
# Create a dataset of correlations between house features
corrmat = houses.corr()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(9, 6))
# Draw the heatmap using seaborn
sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5})
sns.heatmap(corrmat, annot=True, square=True)
f.tight_layout()
Explanation: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
End of explanation
sns.jointplot(houses['target'], houses['RM'], kind='hex')
sns.jointplot(houses['target'], houses['LSTAT'], kind='hex')
Explanation: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail:
End of explanation
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
Explanation: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
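For instance, a rough sketch (not part of the lab) of dropping one feature from each highly correlated pair might look like the following; the 0.9 threshold is an arbitrary choice:
# Flag one column from every pair whose absolute correlation exceeds the threshold.
corr = houses.drop('target', axis=1).corr().abs()
to_drop = set()
for i, col_a in enumerate(corr.columns):
    for col_b in corr.columns[i + 1:]:
        if corr.loc[col_a, col_b] > 0.9 and col_b not in to_drop:
            to_drop.add(col_b)
print('candidate columns to drop:', to_drop)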
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab:
We will first re-split the data into a feature set (X) and a target set (y)
Then we will normalize the feature set so that the values range from 0 to 1
Finally, we will split both data sets into a training and test set.
End of explanation
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 50
num_hidden_1 = 16
num_hidden_2 = 16
learning_rate = 0.0001
training_epochs = 200
dropout_keep_prob = 1.0 # set to no dropout by default
# variable to control the resolution at which the training results are stored
display_step = 1
Explanation: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include:
batch size, which sets how many training samples are used at a time
learning rate which controls how quickly the gradient descent algorithm works
training epochs which sets how many rounds of training occurs
dropout keep probability, a regularization technique which controls how many neurons are 'dropped' randomly during each training step (note in Tensorflow this is specified as the 'keep probability' from 0 to 1, with 0 representing all neurons dropped, and 1 representing all neurons kept). You can read more about dropout here.
End of explanation
def accuracy(predictions, targets):
error = np.absolute(predictions.reshape(-1) - targets)
return np.mean(error)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
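For comparison only, here are a couple of other initialization choices written in the same TF 1.x style as this lab; they are assumptions for illustration and are not used in the model below:
def weight_variable_uniform(shape):
    # Uniformly distributed starting weights instead of a truncated normal.
    initial = tf.random_uniform(shape, minval=-0.1, maxval=0.1)
    return tf.Variable(initial)
def bias_variable_zeros(shape):
    # Zero-initialized biases instead of a small positive constant.
    initial = tf.zeros(shape)
    return tf.Variable(initial)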
End of explanation
'''First we create a variable to store our graph'''
graph = tf.Graph()
'''Next we build our neural network within this graph variable'''
with graph.as_default():
'''Our training data will come in as x feature data and
y target data. We need to create tensorflow placeholders
to capture this data as it comes in'''
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
'''Another placeholder stores the hyperparameter
that controls dropout'''
keep_prob = tf.placeholder(tf.float32)
'''Finally, we convert the test and train feature data sets
to tensorflow constants so we can use them to generate
predictions on both data sets'''
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
'''Next we create the parameter variables for the model.
Each layer of the neural network needs it's own weight
and bias variables which will be tuned during training.
The sizes of the parameter variables are determined by
the number of neurons in each layer.'''
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
'''Next, we define the forward computation of the model.
We do this by defining a function model() which takes in
a set of input data, and performs computations through
the network until it generates the output.'''
def model(data, keep):
# computing first hidden layer from input, using relu activation function
fc1 = tf.nn.sigmoid(tf.matmul(data, W_fc1) + b_fc1)
# adding dropout to first hidden layer
fc1_drop = tf.nn.dropout(fc1, keep)
# computing second hidden layer from first hidden layer, using relu activation function
fc2 = tf.nn.sigmoid(tf.matmul(fc1_drop, W_fc2) + b_fc2)
# adding dropout to second hidden layer
fc2_drop = tf.nn.dropout(fc2, keep)
# computing output layer from second hidden layer
# the output is a single neuron which is directly interpreted as the prediction of the target value
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
# the output is returned from the function
return fc3
'''Next we define a few calls to the model() function which
will return predictions for the current batch input data (x),
as well as the entire test and train feature set'''
prediction = model(x, keep_prob)
test_prediction = model(tf_X_test, 1.0)
train_prediction = model(tf_X_train, 1.0)
'''Finally, we define the loss and optimization functions
which control how the model is trained.
For the loss we will use the basic mean square error (MSE) function,
which tries to minimize the MSE between the predicted values and the
real values (_y) of the input dataset.
For the optimization function we will use basic Gradient Descent (SGD)
which will minimize the loss using the specified learning rate.'''
loss = tf.reduce_mean(tf.square(tf.sub(prediction, _y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
'''We also create a saver variable which will allow us to
save our trained model for later use'''
saver = tf.train.Saver()
Explanation: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things:
describes the architecture of the network, including how many layers it has and how many neurons are in each layer
initializes all the parameters of the network
describes the 'forward' calculation of the network, or how input data is passed through the network layer by layer until it reaches the result
defines the loss function which describes how well the model is performing
specifies the optimization function which dictates how the parameters are tuned in order to minimize the loss
Once this graph is defined, we can work with it by 'executing' it on sets of training data and 'calling' different parts of the graph to get back results. Every time the graph is executed, Tensorflow will only do the minimum calculations necessary to generate the requested results. This makes Tensorflow very efficient, and allows us to structure very complex models while only testing and using certain portions at a time. In programming language theory, this type of programming is called 'lazy evaluation'.
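A minimal illustration of this graph/session split, separate from the housing model and written in the same TF 1.x style used in this lab:
tiny_graph = tf.Graph()
with tiny_graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b               # nothing is computed yet, just recorded in the graph
with tf.Session(graph=tiny_graph) as session:
    print(session.run(c))   # only now is the multiplication evaluated -> 6.0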
End of explanation
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.initialize_all_variables().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = range(num_samples)
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
because we want these as ouputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
'''At the end of each epoch, we will calcule the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
Explanation: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we:
Feed in a new set of training data. Remember that with SGD we only have to feed in a small set of data at a time. The size of each batch of training data is determined by the 'batch_size' hyper-parameter specified above.
Call the optimizer function by asking tensorflow to return the model's 'optimizer' variable. This starts a chain reaction in Tensorflow that executes all the computation necessary to train the model. The optimizer function itself will compute the gradients in the model and modify the weight and bias parameters in a way that minimizes the overall loss. Because it needs this loss to compute the gradients, it will also trigger the loss function, which will in turn trigger the model to compute predictions based on the input data. This sort of chain reaction is at the root of the 'lazy evaluation' model used by Tensorflow.
End of explanation
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
Explanation: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
End of explanation |
10,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Management
In this guide you will learn how to load different data files into DataFrames and how to interact with the CARTO platform to upload DataFrames into tables and download tables or SQL queries into DataFrames.
CARTOframes is built on top of Pandas and GeoPandas. Therefore, it's compatible with all the data formats supported in those projects like GeoJSON, Shapefile, CSV, etc.
There are two main concepts we should know before continuing with the guide
Step1: Read a Shapefile
Shapefile is a complex format, compared to CSV or GeoJSON. To learn more about this format check GeoPandas documentation.
Step2: Read a CSV file
Compute geometry from longitude and latitude
Step3: Compute geometry from WKT/WKB
Step4: Read data from a CARTO table
Note
Step5: Read data from a CARTO SQL Query
Note
Step6: Upload data to CARTO
Note | Python Code:
from geopandas import read_file
gdf = read_file('https://libs.cartocdn.com/cartoframes/samples/starbucks_brooklyn_geocoded.geojson')
gdf.head()
Explanation: Data Management
In this guide you will learn how to load different data files into DataFrames and how to interact with the CARTO platform to upload DataFrames into tables and download tables or SQL queries into DataFrames.
CARTOframes is built on top of Pandas and GeoPandas. Therefore, it's compatible with all the data formats supported in those projects like GeoJSON, Shapefile, CSV, etc.
There are two main concepts we should know before continuing with the guide:
- A DataFrame is a two-dimensional data structure for generic data. It can be thought of as a table with rows and columns. It's composed of Series objects, which are one-dimensional data structures.
- A GeoDataFrame is a DataFrame with an extra geometry column. This geometry column is a GeoSeries object.
Every time we manage geographic data, a GeoDataFrame should be used. In case a DataFrame with an encoded geometry column is used (WKB, WKT, etc.), every method contains a geom_col param to provide the name of that column and decode the geometry internally.
For further learning you can check out the Data Management examples.
Read a GeoJSON file
This is how to load geographic data from a GeoJSON file using GeoPandas. To read pure JSON files check this example.
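As a rough sketch of the pure-JSON case mentioned above (the file path and the 'lng'/'lat' field names are assumptions, not part of the guide's samples):
import json
from pandas import DataFrame
from geopandas import GeoDataFrame, points_from_xy
with open('points.json') as f:    # placeholder path
    records = json.load(f)        # expects a list of {"lng": ..., "lat": ...} objects
df = DataFrame(records)
gdf = GeoDataFrame(df, geometry=points_from_xy(df['lng'], df['lat']))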
End of explanation
from geopandas import read_file
gdf = read_file('https://libs.cartocdn.com/cartoframes/samples/starbucks_brooklyn_geocoded.zip')
gdf.head()
Explanation: Read a Shapefile
Shapefile is a complex format, compared to CSV or GeoJSON. To learn more about this format check GeoPandas documentation.
End of explanation
from pandas import read_csv
from geopandas import GeoDataFrame, points_from_xy
df = read_csv('https://libs.cartocdn.com/cartoframes/samples/sf_incidents.csv')
gdf = GeoDataFrame(df, geometry=points_from_xy(df['longitude'], df['latitude']))
gdf.head()
Explanation: Read a CSV file
Compute geometry from longitude and latitude
End of explanation
from pandas import read_csv
from geopandas import GeoDataFrame
from cartoframes.utils import decode_geometry
df = read_csv('https://libs.cartocdn.com/cartoframes/samples/starbucks_brooklyn_geocoded.csv')
gdf = GeoDataFrame(df, geometry=decode_geometry(df['the_geom']))
gdf.head()
Explanation: Compute geometry from WKT/WKB
End of explanation
from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
from cartoframes import read_carto
gdf = read_carto('starbucks_brooklyn')
gdf.head()
Explanation: Read data from a CARTO table
Note: You'll need your CARTO Account credentials to perform this action.
End of explanation
from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
from cartoframes import read_carto
gdf = read_carto("SELECT * FROM starbucks_brooklyn WHERE revenue > 1200000")
gdf.head()
Explanation: Read data from a CARTO SQL Query
Note: You'll need your CARTO Account credentials to perform this action.
End of explanation
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
from cartoframes import to_carto
to_carto(gdf, 'starbucks_brooklyn_filtered', if_exists='replace')
Explanation: Upload data to CARTO
Note: You'll need your CARTO Account credentials to perform this action.
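As noted earlier in this guide, methods accept a geom_col parameter for encoded geometries. A sketch of uploading a plain DataFrame that way (the target table name is illustrative):
from pandas import read_csv
from cartoframes import to_carto
df = read_csv('https://libs.cartocdn.com/cartoframes/samples/starbucks_brooklyn_geocoded.csv')
to_carto(df, 'starbucks_brooklyn_from_df', geom_col='the_geom', if_exists='replace')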
End of explanation |
10,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compile and deploy the TFX pipeline to Kubeflow Pipelines
This notebook is the second of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.
Use this notebook to compile the TFX pipeline to a Kubeflow Pipelines (KFP) package. This process creates an Argo YAML file in a .tar.gz package, and is accomplished through the following steps
Step1: Set environment variables
Update the following variables to reflect the values for your GCP environment
Step2: Run the Pipeline locally by using the Beam runner
Step3: Build the container image
The pipeline uses a custom container image, which is a derivative of the tensorflow/tfx
Step4: Compile the TFX pipeline using the TFX CLI
Use the TFX CLI to compile the TFX pipeline to the KFP format, which allows the pipeline to be deployed and executed on AI Platform Pipelines. The output is a .tar.gz package containing an Argo definition of your pipeline.
Step5: Deploy the compiled pipeline to KFP
Use the KFP CLI to deploy the pipeline to a hosted instance of KFP on AI Platform Pipelines | Python Code:
%load_ext autoreload
%autoreload 2
!pip install -q -U kfp
Explanation: Compile and deploy the TFX pipeline to Kubeflow Pipelines
This notebook is the second of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.
Use this notebook to compile the TFX pipeline to a Kubeflow Pipelines (KFP) package. This process creates an Argo YAML file in a .tar.gz package, and is accomplished through the following steps:
Build a custom container image that includes the solution modules.
Compile the TFX Pipeline using the TFX command-line interface (CLI).
Deploy the compiled pipeline to KFP.
The pipeline workflow is implemented in the pipeline.py module. The runner.py module reads the configuration settings from the config.py module, defines the runtime parameters of the pipeline, and creates a KFP format that is executable on AI Platform pipelines.
Before starting this notebook, you must run the tfx01_interactive notebook to create the TFX pipeline.
Install required libraries
End of explanation
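Before compiling anything, it can help to confirm the locally installed SDK versions, since the custom container image used later is pinned to TFX 0.25.0. A quick sketch:
# Sanity check: print the installed KFP and TFX versions
import kfp
import tfx
print("KFP SDK version:", kfp.__version__)
print("TFX version:", tfx.__version__)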
import os
os.environ["PROJECT_ID"] = "yourProject" # Set your project.
os.environ["BUCKET"] = "yourBucket" # Set your bucket.
os.environ["GKE_CLUSTER_NAME"] = "yourCluster" # Set your GKE cluster name.
os.environ["GKE_CLUSTER_ZONE"] = "yourClusterZone" # Set your GKE cluster zone.
os.environ["IMAGE_NAME"] = "tfx-ml"
os.environ["TAG"] = "tfx0.25.0"
os.environ[
"ML_IMAGE_URI"
] = f'gcr.io/{os.environ.get("PROJECT_ID")}/{os.environ.get("IMAGE_NAME")}:{os.environ.get("TAG")}'
os.environ["NAMESPACE"] = "kubeflow-pipelines"
os.environ["ARTIFACT_STORE_URI"] = f'gs://{os.environ.get("BUCKET")}/tfx_artifact_store'
os.environ["GCS_STAGING_PATH"] = f'{os.environ.get("ARTIFACT_STORE_URI")}/staging'
os.environ["RUNTIME_VERSION"] = "2.2"
os.environ["PYTHON_VERSION"] = "3.7"
os.environ["BEAM_RUNNER"] = "DirectRunner"
os.environ[
"MODEL_REGISTRY_URI"
] = f'{os.environ.get("ARTIFACT_STORE_URI")}/model_registry'
os.environ["PIPELINE_NAME"] = "tfx_bqml_scann"
from tfx_pipeline import config
for key, value in config.__dict__.items():
if key.isupper():
print(f"{key}: {value}")
Explanation: Set environment variables
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
GKE_CLUSTER_NAME: The name of the Kubernetes Engine cluster used by the AI Platform pipeline. You can find this by looking at the Cluster column of the kubeflow-pipelines pipeline instance on the AI Platform Pipelines page.
GKE_CLUSTER_ZONE: The zone of the Kubernetes Engine cluster used by the AI Platform pipeline. You can find this by looking at the Zone column of the kubeflow-pipelines pipeline instance on the AI Platform Pipelines page.
End of explanation
import logging
import kfp
import ml_metadata as mlmd
import tensorflow as tf
import tfx
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx_pipeline import pipeline as pipeline_module
logging.getLogger().setLevel(logging.INFO)
print("TFX Version:", tfx.__version__)
pipeline_root = f"{config.ARTIFACT_STORE_URI}/{config.PIPELINE_NAME}_beamrunner"
model_regisrty_uri = f"{config.MODEL_REGISTRY_URI}_beamrunner"
local_mlmd_sqllite = "mlmd/mlmd.sqllite"
print(f"Pipeline artifacts root: {pipeline_root}")
print(f"Model registry location: {model_regisrty_uri}")
if tf.io.gfile.exists(pipeline_root):
print("Removing previous artifacts...")
tf.io.gfile.rmtree(pipeline_root)
if tf.io.gfile.exists("mlmd"):
print("Removing local mlmd SQLite...")
tf.io.gfile.rmtree("mlmd")
print("Creating mlmd directory...")
tf.io.gfile.mkdir("mlmd")
metadata_connection_config = metadata_store_pb2.ConnectionConfig()
metadata_connection_config.sqlite.filename_uri = local_mlmd_sqllite
metadata_connection_config.sqlite.connection_mode = 3
print("ML metadata store is ready.")
beam_pipeline_args = [
f"--runner=DirectRunner",
f"--project={config.PROJECT_ID}",
f"--temp_location={config.ARTIFACT_STORE_URI}/beam/tmp",
]
pipeline_module.SCHEMA_DIR = "tfx_pipeline/schema"
pipeline_module.LOOKUP_CREATOR_MODULE = "tfx_pipeline/lookup_creator.py"
pipeline_module.SCANN_INDEXER_MODULE = "tfx_pipeline/scann_indexer.py"
runner = BeamDagRunner()
pipeline = pipeline_module.create_pipeline(
pipeline_name=config.PIPELINE_NAME,
pipeline_root=pipeline_root,
project_id=config.PROJECT_ID,
bq_dataset_name=config.BQ_DATASET_NAME,
min_item_frequency=15,
max_group_size=10,
dimensions=50,
num_leaves=500,
eval_min_recall=0.8,
eval_max_latency=0.001,
ai_platform_training_args=None,
beam_pipeline_args=beam_pipeline_args,
model_regisrty_uri=model_regisrty_uri,
metadata_connection_config=metadata_connection_config,
enable_cache=True,
)
runner.run(pipeline)
Explanation: Run the Pipeline locally by using the Beam runner
End of explanation
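Once the local run finishes, you can peek into the local SQLite ML Metadata store to confirm that the components recorded their artifacts. A minimal sketch, reusing the metadata_connection_config defined above:
# Inspect the artifacts recorded by the local Beam run
from ml_metadata.metadata_store import metadata_store

store = metadata_store.MetadataStore(metadata_connection_config)
for artifact_type in store.get_artifact_types():
    print(artifact_type.name)
print("Total artifacts:", len(store.get_artifacts()))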
!gcloud builds submit --tag $ML_IMAGE_URI tfx_pipeline
Explanation: Build the container image
The pipeline uses a custom container image, which is a derivative of the tensorflow/tfx:0.25.0 image, as a runtime execution environment for the pipeline's components. The container image is defined in a Dockerfile.
The container image installs the required libraries and copies over the modules from the solution's tfx_pipeline directory, where the custom components are implemented. The container image is also used by AI Platform Training for executing the training jobs.
Build the container image using Cloud Build and then store it in Cloud Container Registry:
End of explanation
!rm ${PIPELINE_NAME}.tar.gz
!tfx pipeline compile \
--engine=kubeflow \
--pipeline_path=tfx_pipeline/runner.py
Explanation: Compile the TFX pipeline using the TFX CLI
Use the TFX CLI to compile the TFX pipeline to the KFP format, which allows the pipeline to be deployed and executed on AI Platform Pipelines. The output is a .tar.gz package containing an Argo definition of your pipeline.
End of explanation
%%bash
gcloud container clusters get-credentials ${GKE_CLUSTER_NAME} --zone ${GKE_CLUSTER_ZONE}
export KFP_ENDPOINT=$(kubectl describe configmap inverse-proxy-config -n ${NAMESPACE} | grep "googleusercontent.com")
kfp --namespace=${NAMESPACE} --endpoint=${KFP_ENDPOINT} \
pipeline upload \
--pipeline-name=${PIPELINE_NAME} \
${PIPELINE_NAME}.tar.gz
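Once the package has been uploaded, a run can also be started from Python with the KFP SDK. The sketch below is illustrative only: the host URL, experiment name and job name are assumptions you would replace with your own values.
import os
import kfp

# Hypothetical endpoint: use the inverse-proxy URL resolved by the cell above
client = kfp.Client(host="https://<your-inverse-proxy>.googleusercontent.com",
                    namespace=os.environ["NAMESPACE"])
pipeline_id = client.get_pipeline_id(os.environ["PIPELINE_NAME"])
experiment = client.create_experiment(name="tfx_bqml_scann_runs")   # assumed experiment name
run = client.run_pipeline(experiment.id, job_name="manual-run", pipeline_id=pipeline_id)
print(run.id)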
Explanation: Deploy the compiled pipeline to KFP
Use the KFP CLI to deploy the pipeline to a hosted instance of KFP on AI Platform Pipelines:
End of explanation |
10,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and writing files
Step1: It says that the file is opened, reminds the filename and indicates that it is in mode 'r', which means 'read'
You can call specific functions on an object with the syntax
Step2: Let's talk a bit about the kind of files we are examining here. It is an example of bad design of file format but a good exercise for interpreting in python. It is a set of records that our sensor box made over the course of 10 days. Each line is a record.
This line starts with a timestamp
Step3: Modules
All programming languages have a way to import modules (they come by other names | Python Code:
# We can create a file object and store it inside a variable.
# you can see objects as a different type of data
f=open("awanode-farmlab-2017-08-14.txt")
print(f)
Explanation: Reading and writing files
End of explanation
f=open("awanode-farmlab-2017-08-14.txt")
# The read() function reads the content of a file and returns it in a (very long) string
s=f.read()
# Here we split the content of the file using a special character. The backslash character (\)
# indicates a special character: \n is understood as a single character that means 'newline'
# This will return an array of strings that are the lines of the file
lines=s.split("\n")
s
# Let's peek at ten lines of the array (records 2000 to 2009)
lines[2000:2010]
# Let's examine a random line
l=lines[2401]
l
Explanation: It says that the file is opened, reminds the filename and indicates that it is in mode 'r', which means 'read'
You can call specific functions on an object with the syntax:
object.function()
Actually you already did this with lists and strings, for instance when doing str.split(" ")
End of explanation
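A small aside (not part of the original lesson): the idiomatic way to make sure a file gets closed again is the with statement, which does the same read in two lines:
# The with-statement closes the file automatically, even if an error happens while reading
with open("awanode-farmlab-2017-08-14.txt") as f:
    lines = f.read().split("\n")
print(len(lines), "lines in the file")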
# Let's start by removing the rubbish (special characters, whitespace) at
# the right of the string with the function rstrip()
l = l.rstrip()
l
# This kind of format is bad design because there are many types of
# separators used | ; and =
# Let's replace all of them with ;
l = l.replace("|",";")
l = l.replace("=",";")
l
# Finally, let's split that in an array:
l.split(";")
# Notice that we could have compressed all these `l = l.something()` lines
# into a single line:
l=lines[2401]
l.rstrip().replace("|",";").replace("=",";").split(";")
# We can also feed this array directly into variables using this convenient
# syntax:
a = l.rstrip().replace("|",";").replace("=",";").split(";")
(time,num, id1,val1, id2,val2, id3,val3, id4,val4, id5,val5, e) = a
int(val1)
Explanation: Let's talk a bit about the kind of files we are examining here. It is an example of bad design of file format but a good exercise for interpreting in python. It is a set of records that our sensor box made over the course of 10 days. Each line is a record.
This line starts with a timestamp: 10th May 2001 (the clock was not set up appropriately) at 3:3:29.
Then there is a sequence number: this is the 1098th packet emitted by the node since its last restart.
Then there is a series of id=value elements that indicate the id of a sensor and the value read by the device.
In the end there is a \r special character that is a carriage return, which is another way of indicating the end of a line.
End of explanation
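Putting the pieces together, here is a sketch that applies the same cleaning to every record in the file and skips lines that do not have the expected 13 fields (the int() conversion assumes integer sensor readings, as in the line above):
records = []
for line in lines:
    parts = line.rstrip().replace("|", ";").replace("=", ";").split(";")
    if len(parts) != 13:   # skip empty or malformed lines
        continue
    (time, num, id1, val1, id2, val2, id3, val3, id4, val4, id5, val5, e) = parts
    records.append({"time": time, "seq": int(num),
                    id1: int(val1), id2: int(val2), id3: int(val3),
                    id4: int(val4), id5: int(val5)})
print(len(records), "records parsed")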
# Let's try with a common library
import math
# To call a modules' function, use this syntax:
math.sqrt(9)
# If you are lazy, you can also import the library and give it a shorter name:
import math as m
m.sqrt(4)
# Sometimes you prefer to import a single function or object from your library
# In that case you specify it this way:
from math import sqrt
sqrt(5)
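Any other module from the standard library is imported in exactly the same way. For example, a small sketch with the statistics module (the numbers are made up):
import statistics
readings = [512, 518, 505, 530]   # hypothetical sensor values
print(statistics.mean(readings), statistics.stdev(readings))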
Explanation: Modules
All programming languages have a way to import modules (they come by other names: libraries, DLLs, includes...). These are other programs that add functionalities to the language by giving access to new functions.
In python, to import a library, all you need to do is:
import <library-name>
End of explanation |
10,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XML example and exercise
study examples of accessing nodes in XML tree structure
work on exercise to be completed and submitted
reference
Step1: XML example
for details about tree traversal and iterators, see https
Step2: XML exercise
Using data in 'data/mondial_database.xml', the examples above, and referring to https
Step3: 1. 10 countries with the lowest infant mortality rates
Step4: 2. 10 cities with the largest population
Step5: 3. 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
Step6: 4. name and country of a) longest river, b) largest lake and c) airport at highest elevation | Python Code:
from xml.etree import ElementTree as ET
import pandas as pd
Explanation: XML example and exercise
study examples of accessing nodes in XML tree structure
work on exercise to be completed and submitted
reference: https://docs.python.org/2.7/library/xml.etree.elementtree.html
data source: http://www.dbis.informatik.uni-goettingen.de/Mondial
End of explanation
document_tree = ET.parse( './data/mondial_database_less.xml' )
# print names of all countries
for child in document_tree.getroot():
print(child.find('name').text)
# print names of all countries and their cities
for element in document_tree.iterfind('country'):
    print('* ' + element.find('name').text + ':')
capitals_string = ''
    for subelement in element.iter('city'):  # Element.getiterator() was removed in Python 3.9
capitals_string += subelement.find('name').text + ', '
print(capitals_string[:-2])
Explanation: XML example
for details about tree traversal and iterators, see https://docs.python.org/2.7/library/xml.etree.elementtree.html
End of explanation
document = ET.parse( './data/mondial_database.xml' )
Explanation: XML exercise
Using data in 'data/mondial_database.xml', the examples above, and referring to https://docs.python.org/2.7/library/xml.etree.elementtree.html, find
10 countries with the lowest infant mortality rates
10 cities with the largest population
10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
name and country of a) longest river, b) largest lake and c) airport at highest elevation
End of explanation
names = []
infant_mortalities = []
for element in document.iterfind('country[infant_mortality]'):
names.append(element.find('name').text)
infant_mortalities.append(float(element.find('infant_mortality').text))
results = pd.DataFrame({'name': names, 'infant_mortality':infant_mortalities})
results.sort_values('infant_mortality').head(10)
Explanation: 1. 10 countries with the lowest infant mortality rates
End of explanation
cities = []
populations = []
for element in document.iterfind('country[city]'):
for sub in element.iterfind('city[population]'):
cities.append(sub.find('name').text)
pops = [int(p.text) for p in sub.findall('population')]
populations.append(pops[-1])
results = pd.DataFrame({'city': cities, 'population': populations})
results.sort_values('population', ascending=False).head(10)
Explanation: 2. 10 cities with the largest population
End of explanation
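Note that taking pops[-1] assumes the last population element is the most recent estimate. If the population elements carry a year attribute, you can make that choice explicit; a sketch (assuming the attribute is present):
# Pick the population measurement with the largest 'year' attribute
# (assumption: the attribute exists; missing years default to 0)
latest = []
for element in document.iterfind('country[city]'):
    for sub in element.iterfind('city[population]'):
        newest = max(sub.findall('population'), key=lambda p: int(p.get('year', 0)))
        latest.append((sub.find('name').text, int(newest.text)))
pd.DataFrame(latest, columns=['city', 'population']).sort_values('population', ascending=False).head(10)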
ethnic_groups = [e.text for e in document.findall('.//ethnicgroup')]
ethnic_dict = dict.fromkeys(ethnic_groups, 0)
for element in document.iterfind('country[ethnicgroup]'):
population = [int(p.text) for p in element.findall('population')][-1]
groups = [e.text for e in element.findall('ethnicgroup')]
percentages = [float(perc.get('percentage')) for perc in element.findall('ethnicgroup')]
for i, group in enumerate(groups):
        ethnic_dict[group] += percentages[i] / 100 * population  # percentage is given on a 0-100 scale
round(pd.Series(ethnic_dict)).astype(int).sort_values(ascending=False).head(10)
Explanation: 3. 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
End of explanation
# convert country code
country_dict = {}
for element in document.iterfind('country'):
country_dict[element.get('car_code')] = element.find('name').text
# find longest river
river_name = ''
river_code = ''
river_length = 0.0
for element in document.iterfind('river[length]'):
if float(element.find('length').text) > river_length:
river_length = float(element.find('length').text)
river_name = element.find('name').text
river_code = element.get('country')
countries = ', '.join([country_dict[c] for c in river_code.split(' ')])
print('longest river \n name: {}\n countries: {}'.format(river_name, countries))
# find largest lake
lake_name = ''
lake_code = ''
lake_area = 0.0
for element in document.iterfind('lake[area]'):
if float(element.find('area').text) > lake_area:
lake_area = float(element.find('area').text)
lake_name = element.find('name').text
lake_code = element.get('country')
countries = ', '.join([country_dict[c] for c in lake_code.split(' ')])
print('largest lake\n name: {}\n countries: {}'.format(lake_name, countries))
# find highest airport elevation
airport_name = ''
airport_code = ''
airport_elevation = 0.0
for element in document.iterfind('airport[elevation]'):
if element.find('elevation').text is None:
continue
if float(element.find('elevation').text) > airport_elevation:
airport_elevation = float(element.find('elevation').text)
airport_name = element.find('name').text
airport_code = element.get('country')
countries = ', '.join([country_dict[c] for c in airport_code.split(' ')])
print('highest airport elevation\n name: {}\n countries: {}'.format(airport_name, countries))
Explanation: 4. name and country of a) longest river, b) largest lake and c) airport at highest elevation
End of explanation |
10,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create dataframe
Step2: Make plot | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
Explanation: Title: Back To Back Bar Plot In MatPlotLib
Slug: matplotlib_back_to_back_bar_plot
Summary: Back To Back Bar Plot In MatPlotLib
Date: 2016-05-01 12:00
Category: Python
Tags: Data Visualization
Authors: Chris Albon
Based on: Sebastian Raschka.
Preliminaries
End of explanation
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'pre_score': [4, 24, 31, 2, 3],
'mid_score': [25, 94, 57, 62, 70],
'post_score': [5, 43, 23, 23, 51]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'pre_score', 'mid_score', 'post_score'])
df
Explanation: Create dataframe
End of explanation
# input data, specifically the second and
# third rows, skipping the first column
x1 = df.iloc[1, 1:]  # .ix has been removed from pandas; use .iloc for positional indexing
x2 = df.iloc[2, 1:]
# Create the bar labels
bar_labels = ['Pre Score', 'Mid Score', 'Post Score']
# Create a figure
fig = plt.figure(figsize=(8,6))
# Set the y position
y_pos = np.arange(len(x1))
y_pos = [x for x in y_pos]
plt.yticks(y_pos, bar_labels, fontsize=10)
# Create a horizontal bar in the position y_pos
plt.barh(y_pos,
# using x1 data
x1,
# that is centered
align='center',
# with alpha 0.4
alpha=0.4,
# and color green
color='#263F13')
# Create a horizontal bar in the position y_pos
plt.barh(y_pos,
# using NEGATIVE x2 data
-x2,
# that is centered
align='center',
# with alpha 0.4
alpha=0.4,
# and color green
color='#77A61D')
# annotation and labels
plt.xlabel('Tina\'s Score: Light Green. Molly\'s Score: Dark Green')
t = plt.title('Comparison of Molly and Tina\'s Score')
plt.ylim([-1,len(x1)+0.1])
plt.xlim([-max(x2)-10, max(x1)+10])
plt.grid()
plt.show()
Explanation: Make plot
End of explanation |
10,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chem 30324, Spring 2020, Homework 7
Due March 27, 2020
Variations on the hydrogen atom
Step1: So the normalized 1s wavefunction is $\tilde{R}_{10}(r) = \frac{2}{\sqrt[4]{\pi}} 2^{\frac{3}{4}} e^{-r^2} = (\frac{128}{\pi}) ^ {\frac{1}{4}} e^{-r^2} $.
2. Calculate the expectation value of the energy of your normalized guess. Is it greater or less than the true value?
$\because$ 1s orbital, n=1,l=0
$$\therefore \hat{H} = -\frac{1}{2}\frac{d^2}{dr^2} - \frac{1}{r}\frac{d}{dr}-\frac{1}{r}$$
$$\frac{d(\tilde{R}_{10})}{dr}= -2rCe^{-r^2} $$
$$\frac{d^2(\tilde{R}_{10})}{dr^2}=-2Ce^{-r^2}+4r^2Ce^{-r^2}$$
$$\hat{H}\tilde{R}_{10}(r) = -\frac{1}{2}\frac{d^2(\tilde{R}_{10})}{dr^2} - \frac{1}{r}\frac{d(\tilde{R}_{10})}{dr}-\frac{1}{r}(\tilde{R}_{10}) = Ce^{-r^2}-2r^2Ce^{-r^2} +2Ce^{-r^2} -\frac{Ce^{-r^2}}{r} = \frac{Ce^{-r^2}(-2r^3+3r-1)}{r}$$
The expectation value of the energy
Step2: 3. What does the variational principle say about the expectation value of the energy of your guess as you vary a parameter $\gamma$ in your guess, $R_{10}=e^{-\gamma r^2}$? Suggest a strategy for determining the "best" $\gamma$.
The variational principle says the true wavefunction energy is always a lower bound on the energy of any trial wavefunction.
$$\langle ψ_\text{trial}^λ | \hat{H} | ψ_\text{trial}^λ\rangle =E_\text{trial}^λ \geq E_0$$
We can get the "best" $\gamma$ by optimizing the wavefunction with respect to the variational parameter
Step3: Many-electrons means many troubles
Helium (He) is only one electron larger than hydrogen, but that one more electron makes a big difference in difficulty in setting up and solving the Schrödinger equation.
4. Write down in as much detail as you can the exact Schrödinger equation for the electrons in a He atom.
Schrödinger equation
Step4: 10. Why, qualitatively, do the energies vary as they do?
There is a big energy decrease going across the series because the electrostatic attraction of the nucleus for the electrons increases as the number of protons increases.
11. Compare the energies of the highest-energy (valence) electrons across the series. Determine the wavelength of light necessary to remove each valence electron. What range of the spectrum is this light in? | Python Code:
import sympy as sy
import numpy as np
from sympy import *
r = Symbol('r')
I = integrate(exp(-2*r**2)*r**2,(r,0,+oo))
C = sqrt(1/I)
print(latex(simplify(C)))
Explanation: Chem 30324, Spring 2020, Homework 7
Due March 27, 2020
Variations on the hydrogen atom:
The variational principle guarantees that the expectation value of the energy of a guessed wavefunction is always greater than that of the true lowest energy solution. Here you will apply the variational principle to the H atom. For this problem it is easiest to work in atomic units. In these units, $\hbar$, $a_0$, and $4\pi\epsilon_0$ are all equal to 1 and the unit of energy is the Hartree, equivalent to 27.212 eV. In atomic units the H atom Schrödinger equation is written:
$$\left\{-\frac{1}{2}\frac{d^2}{dr^2} - \frac{1}{r}\frac{d}{dr}-\frac{1}{r}+\frac{l(l+1)}{2r^2} \right\}R(r) = ER(r)$$
1. Suppose in a fit of panic you forget the 1s radial function when asked on an exam. Not wanting to leave the answer blank, you decide to guess something, and liking bell-shaped curves, you guess $R_{10}(r) = e^{-r^2}$. Normalize this guess. Do not forget to include the $r^2$ Jacobian integration factor.
Set $R_{10}(r)=Ce^{-r^2}$, then normalize the guess to get C:
$$\int_{0}^{\infty}C^2e^{-2r^2}r^2dr = 1$$, $$C = \sqrt \frac{1}{\int_{0}^{\infty}e^{-2r^2}r^2dr}$$
End of explanation
E = C**2*integrate((-2*r**4+3*r**2-r)*exp(-2*r**2),(r,0,oo))
print('Expected value is %0.4f Ha.'%E)
# Hydrogen atom energy equation is given in class notes
n=1
E_true = -1/(2*n**2) # unit Ha
print('The true value is %0.4f Ha. So the expected value is greater than the true value.' %E_true)
Explanation: So the normalized 1s wavefunction is $\tilde{R}_{10}(r) = \frac{2}{\sqrt[4]{\pi}} 2^{\frac{3}{4}} e^{-r^2} = (\frac{128}{\pi}) ^ {\frac{1}{4}} e^{-r^2} $.
2. Calculate the expectation value of the energy of your normalized guess. Is it greater or less than the true value?
$\because$ 1s orbital, n=1,l=0
$$\therefore \hat{H} = -\frac{1}{2}\frac{d^2}{dr^2} - \frac{1}{r}\frac{d}{dr}-\frac{1}{r}$$
$$\frac{d(\tilde{R}_{10})}{dr}= -2rCe^{-r^2} $$
$$\frac{d^2(\tilde{R}_{10})}{dr^2}=-2Ce^{-r^2}+4r^2Ce^{-r^2}$$
$$\hat{H}\tilde{R}_{10}(r) = -\frac{1}{2}\frac{d^2(\tilde{R}_{10})}{dr^2} - \frac{1}{r}\frac{d(\tilde{R}_{10})}{dr}-\frac{1}{r}(\tilde{R}_{10}) = Ce^{-r^2}-2r^2Ce^{-r^2} +2Ce^{-r^2} -\frac{Ce^{-r^2}}{r} = \frac{Ce^{-r^2}(-2r^3+3r-1)}{r}$$
The expectation value of the energy:
$$\langle E\rangle = \int_{0}^{\infty}\tilde{R}_{10}(r)\hat{H}\tilde{R}_{10}(r)r^2dr = \int_{0}^{\infty} C^2(-2r^4+3r^2-r)e^{-2r^2} dr$$
End of explanation
gamma = symbols('gamma',positive=True) # We know the gamma has to be positive, or the R10 would be larger when r increase.
r = symbols("r",positive=True)
C = sqrt(1/integrate(exp(-2*gamma*r**2)*r**2,(r,0,oo)))
E = C**2*integrate((-2*gamma**2*r**4+3*gamma*r**2-r)*exp(-2*gamma*r**2),(r,0,oo)) # expectation value of energy as a function of gamma
gamma_best=solve(diff(E,gamma),gamma)
print("Expectation of energy:");print(E)
print("Best value of gamma is: %s, which equals to %f." % (gamma_best, 8/(9*np.pi)))
import math
gamma_best = 8/(9*np.pi)
E_best = 8*math.sqrt(2)*gamma_best**(3/2)*(-1/(4*gamma_best) + 3*math.sqrt(2)*math.sqrt(np.pi)/(32*math.sqrt(gamma_best)))/math.sqrt(np.pi)
print("Energy with the best gamma: %0.3f Ha." % E_best)  # atomic units (Hartree)
import matplotlib.pyplot as plt
gamma = np.linspace(0.001,1.5,10000)
E = []
for x in gamma:
E.append(8*math.sqrt(2)*x**(3/2)*(-1/(4*x) + 3*math.sqrt(2)*math.sqrt(np.pi)/(32*math.sqrt(x)))/math.sqrt(np.pi))
plt.plot(gamma,E)
plt.xlabel("Gamma")
plt.ylabel("Energy (Ha)")
plt.axvline(x=8/(9*np.pi),color='k',linestyle='--')
plt.axhline(y=E_best,color='k',linestyle='--')
plt.annotate('Lowest energy spot', xy=(8/(9*np.pi), E_best), xytext=(0.5,-0.2), arrowprops=dict(facecolor='black'))
plt.show()
Explanation: 3. What does the variational principle say about the expectation value of the energy of your guess as you vary a parameter $\gamma$ in your guess, $R_{10}=e^{-\gamma r^2}$? Suggest a strategy for determining the "best" $\gamma$.
The variational principle says the true wavefunction energy is always a lower bound on the energy of any trial wavefunction.
$$\langle ψ_\text{trial}^λ | \hat{H} | ψ_\text{trial}^λ\rangle =E_\text{trial}^λ \geq E_0$$
We can get the "best" $\gamma$ by optimizing the wavefunction with respect to the variational parameter: $$\frac{\partial\langle E\rangle}{\partial\gamma} = 0$$
3.5 Extra credit: Determine the best value of $\gamma$. Show and carefully justify your work to receive credit.
Normalize $R_{10}$: $\int_{0}^{\infty}r^2C^2e^{-2\gamma r^2}dr = 1$
$$C = \sqrt \frac{1}{\int_{0}^{\infty}e^{-2\gamma r^2}r^2dr}$$
$$\therefore \hat{H} = -\frac{1}{2}\frac{d^2}{dr^2} - \frac{1}{r}\frac{d}{dr}-\frac{1}{r}$$
$$\frac{d(\tilde{R}_{10})}{dr}= -2\gamma rCe^{-\gamma r^2} $$
$$\frac{d^2(\tilde{R}_{10})}{dr^2}=-2\gamma Ce^{-\gamma r^2}+4\gamma ^2 r^2Ce^{-\gamma r^2}$$
$$\hat{H}\tilde{R}_{10}(r) = -\frac{1}{2}\frac{d^2(\tilde{R}_{10})}{dr^2} - \frac{1}{r}\frac{d(\tilde{R}_{10})}{dr}-\frac{1}{r}(\tilde{R}_{10}) = \gamma Ce^{-\gamma r^2}-2\gamma ^2r^2Ce^{- \gamma r^2} +2\gamma Ce^{-\gamma r^2} -\frac{Ce^{-\gamma r^2}}{r} = \frac{Ce^{-\gamma r^2}(-2\gamma ^2r^3+3\gamma r-1)}{r}$$
$\langle E\rangle = \int_{0}^{\infty}\tilde{R}_{10}(r)\hat{H}\tilde{R}_{10}(r)r^2dr = \int_{0}^{\infty} C^2(-2\gamma^2 r^4+3\gamma r^2-r)e^{-2\gamma r^2} dr$
End of explanation
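As an added cross-check (not part of the original solution): substituting $\gamma = \frac{8}{9\pi}$ back into the symbolic energy expression should reproduce the standard Gaussian-trial result $E_\mathrm{min} = -\frac{4}{3\pi} \approx -0.424$ Ha.
from sympy import symbols, integrate, exp, oo, pi as spi, simplify
g, rr = symbols('g r', positive=True)
C2 = 1/integrate(exp(-2*g*rr**2)*rr**2, (rr, 0, oo))          # normalization constant squared
Eg = C2*integrate((-2*g**2*rr**4 + 3*g*rr**2 - rr)*exp(-2*g*rr**2), (rr, 0, oo))
print(simplify(Eg.subs(g, 8/(9*spi))))   # expected: -4/(3*pi)
print(float(Eg.subs(g, 8/(9*spi))))      # approximately -0.4244 Ha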
# From http://www.genstrom.net/public/biology/common/en/em_spectrum.html
print("The 1s energies become increasingly negative with increasing Z. Light must become increasingly energetic to kick out one of them.")
hc = 1239.8 #eV*nm
E = [13.23430*27.212, 20.01336*27.212, 28.13652*27.212, 37.71226*27.212,48.64339*27.212,30.45968*27.212] # eV
lamb = [] #nm
for e in E:
lamb.append(hc/e)
print(lamb,"nm.\nThey correspond to X-rays.")
Explanation: Many-electrons means many troubles
Helium (He) is only one electron larger than hydrogen, but that one more electron makes a big difference in difficulty in setting up and solving the Schrödinger equation.
4. Write down in as much detail as you can the exact Schrödinger equation for the electrons in a He atom.
Schrödinger equation:
$$\hat{H}\Psi(r_1,r_2)=E\Psi(r_1,r_2)$$
$$\hat{H}=\hat{h_1}+\hat{h_2}+\frac{e^2}{4\pi\epsilon_0}\frac{1}{|r_1-r_2|}$$
$$\hat{h_1}=-\frac{\hbar^2}{2m_e}\nabla^2_1-\frac{2e^2}{4\pi\epsilon_0}\frac{1}{r_1}$$
$$\hat{h_2}=-\frac{\hbar^2}{2m_e}\nabla^2_2-\frac{2e^2}{4\pi\epsilon_0}\frac{1}{r_2}$$
5. This equation is conventionally solved within the "independent electron" approximation, by writing an effective one-electron Schrödinger equation with approximate potentials (shown below in atomic units). Briefly, what does it mean to solve this equation "self-consistently"?
$$\left\{-\frac{1}{2}\nabla^2 - \frac{2}{r} + \hat v_\mathrm{Coul}[\psi_i] + \hat v_\mathrm{ex}[\psi_i]+\hat v_\mathrm{corr}[\psi_i] \right\}\psi=\epsilon\psi$$
All the potential terms $\hat{\nu}$ depend on the $\psi$ being solved for. Solving "self-consistently" means we obtain a $\psi$ solution, use it to calculate new potential terms, solve for a new $\psi$, and repeat until the new $\psi$ is the same as the previous one.
6. How many solutions are needed to describe the electrons in a He atom? Provide a possible set of quantum numbers ($n, l, m_l , m _s$) for each electron.
One solution is needed for each occupied orbital; both electrons share the same 1s orbital, so only one solution is needed to describe the electrons in the He atom.
Two possible sets of quantum numbers, one for each electron:
$$n=1, l=0, m_l=0, m_s=+\frac{1}{2}$$
$$n=1, l=0, m_l=0, m_s=-\frac{1}{2}$$
7. The Schrödinger equation has five terms, or operators, on the left. Identify the physical meaning of each term and the sign of the expectation value when it is applied to one of the solutions.
$-\frac{1}{2}\nabla^2$: Kinetic energy. Always positive.
$-\frac{2}{r}$: Due to the attraction between the electron and the nucleus. Negative.
$\hat{\nu}_{coul}$: Classical repulsion between distinguishable electron “clouds”. Positive.
$\hat{\nu}_{ex}$: Accounts for electron indistinguishability (Pauli principle for fermions). Decreases
Coulomb repulsion because electrons of like spin intrinsically avoid one another. Negative.
$\hat{\nu}_{corr}$: Decrease in Coulomb repulsion due to dynamic ability of electrons to avoid
one another; “fixes” orbital approximation. Negative.
Sophisticated computer programs that solve the many-electron Schrödinger equation are now widely available and are a powerful tool for predicting the properties of atoms, molecules, solids, and interfaces. Density functional theory (DFT) is the most common set of approximations for the electron-electron interactions used today. In this problem you'll do a DFT calculation using the Orca program (https://www.its.hku.hk/services/research/hpc/software/orca).
Now, let’s set up your calculation (you may do this with a partner or two if you choose):
Log into the Webmo server https://www.webmo.net/demoserver/cgi-bin/webmo/login.cgi using "guest" as your username and password.
Select New Job - Create New Job.
Use the available tools to draw an atom on the screen.
Use the right arrow at the bottom to proceed to the Computational Engines.
Choose Orca
Select “Molecular Orbitals” for the Calculation type, “PBE” for theory, “def2-SVP” for the basis set, “0” for the charge, an appropriate value for the "Multiplicity", and check “Unrestricted.”
Select the right arrow to run the calculation.
From the job manager window choose the completed calculation to view the results.
For fun, click on the Magnifying Glass icons to see the molecular orbitals in 3-D. You may have to play around with the Display Settings and Preferences to get good views.
8. Perform calculations across the first row of the periodic table (B, C, N, O, F, Ne). Make a table of energies of the occupied orbitals and identify them by their shell ( $n = 1, 2, \ldots$) and subshell (s, p, d, ...).
|B(doublet)|Energy (Hartree)|C(triplet)|Energy (Hartree)|N(quartet)|Energy (Hartree)|O(triplet)|Energy (Hartree)|F(doublet)|Energy (Hartree)|Ne(singlet)|Energy (Hartree)|
|-|-|-|-|-|-|-|-|-|-|-|-|
|1s|-13.23430|1s|-20.01336|1s|-28.13652|1s|-37.71226|1s|-48.64339|1s|-30.45968|
|2s|-0.65884|2s|-0.93550|2s|-1.23866|2s|-1.63718|2s|-2.06968|2s|-1.26438|
|2p|-0.15147|2p|-0.43435|2p|-0.87602|2p|-1.30625|2p|-1.89967|2p|-1.34278|
9. Contrast the energies of the 1s electrons across the series. Determine the wavelength of light necessary to remove each 1s electron. What range of the spectrum is this light in?
End of explanation
hc = 1239.8 #eV*nm
E = [0.15147*27.212, 0.43435*27.212, 0.87602*27.212, 1.30625*27.212,1.89967*27.212,1.34278*27.212] # eV
lamb = [] #nm
for e in E:
lamb.append(hc/e)
print(lamb,"nm.\nThey correspond to ultraviolet (UV) light.")
Explanation: 10. Why, qualitatively, do the energies vary as they do?
There is a big energy decrease going across the series because the electrostatic attraction of the nucleus for the electrons increases as the number of protons increases.
11. Compare the energies of the highest-energy (valence) electrons across the series. Determine the wavelength of light necessary to remove each valence electron. What range of the spectrum is this light in?
End of explanation |
10,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Populate local MDCS instance with student data and metadata
Import MDCS API tool module
Step1: Host and user information
Step2: List of file prefixes for micrograph images and XML metadata
Step3: For each name in the list | Python Code:
import mdcs
Explanation: Populate local MDCS instance with student data and metadata
Import MDCS API tool module
End of explanation
user='admin'
pswd='admin'
host='http://127.0.0.1:8000'
template_name='DiffusionDemo'
Explanation: Host and user information
End of explanation
name_list=[
"GE-DiffusionCouple-IN100-IN718",
"GE-DiffusionCouple-IN718-R95",
"GE-DiffusionCouple-R95-R88",
"GE-DiffusionCouple-R88-IN100"
]
Explanation: List of file prefixes for micrograph images and XML metadata
End of explanation
for name in name_list:
xml_name=name+".xml"
tif_name=name+".tif"
    print("Uploading:", tif_name)
url = mdcs.blob.upload(tif_name,host,user,pswd)
    print("Reading:", xml_name)
with open(xml_name, 'r') as f:
content = f.read()
content = content.replace("http://127.0.0.1:8000/rest/blob?id=REPLACE-ME-BLOB-ID",url)
    print("Uploading:", xml_name)
response = mdcs.curate_as(xml_name,name,host,user,pswd,template_title=template_name,content=content)
    print("Response:", response)
Explanation: For each name in the list:
Upload micrograph
Read XML metadata
Replace generic URL with unique URL for micrograph
Upload XML metadata record
End of explanation |
10,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rational approximations of 𝝿
The fractions 22/7 and 355/113 are good approximations of pi. Let's find more.
Step1: Spoiler alert
Step2: We'll need to go to larger and larger denominators to get more accuracy. Let's define success as finding an approximation that gets more digits right than preceding approximations. 22/7 gets 3 digits correct. 355/113 gets 7, which is more than enough for most practical purposes.
Step3: So, let's make it easy to count how many digits match. | Python Code:
from math import pi
pi
Explanation: Rational approximations of 𝝿
The fractions 22/7 and 355/113 are good approximations of pi. Let's find more.
End of explanation
pi.as_integer_ratio()
f"{884279719003555/281474976710656:0.48f}"
Explanation: Spoiler alert: Who knew that Python floats have this handy method? We won't do better than this.
End of explanation
22/7
355/113
Explanation: We'll need to go to larger and larger denominators to get more accuracy. Let's define success as finding an approximation that gets more digits right than preceding approximations. 22/7 gets 3 digits correct. 355/113 gets 7, which is more than enough for most practical purposes.
End of explanation
def digits_match(a,b):
d = abs(b-a)
if d==0.0:
return len(str(b))-1
i = 0
p=1
while d < p:
i += 1
p /= 10
return i
digits_match(22/7, pi)
digits_match(355/113, pi)
%%time
best_so_far = 1
for den in range(7,26_000_000):
# numerator that is closest to but less than pi
# or maybe the next higher numerator is better?
for i in range(0,2):
num = int(den*pi) + i
pi_approx = num/den
digits = digits_match(pi_approx, pi)
if digits > best_so_far:
best_so_far = digits
frac = f"{num}/{den}"
print(f"{frac:>24} = {pi_approx:<25} {digits:>3} err={abs(pi_approx-pi):0.25f}")
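As a cross-check of the search above, the standard library's fractions module can produce the best rational approximation with a bounded denominator directly; a quick sketch:
from fractions import Fraction
for bound in (10, 150, 500, 20_000, 500_000):
    approx = Fraction(pi).limit_denominator(bound)
    print(f"{str(approx):>15}  err={abs(approx - pi):0.15f}")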
Explanation: So, let's make it easy to count how many digits match.
End of explanation |
10,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Chapter 19 - More about Natural Language Processing Tools (spaCy)
Text data is unstructured. But if you want to extract information from text, then you often need to process that data into a more structured representation. The common idea for all Natural Language Processing (NLP) tools is that they try to structure or transform text in some meaningful way. You have already learned about four basic NLP steps
Step2: Now, let's first load spaCy. We import the spaCy module and load the English tokenizer, tagger, parser, NER and word vectors.
Step3: nlp is now a Python object representing the English NLP pipeline that we can use to process a text.
EXTRA
Step4: 2.2 Using spaCy
Parsing a text with spaCy after loading a language model is as easy as follows
Step5: doc is now a Python object of the class Doc. It is a container for accessing linguistic annotations and a sequence of Token objects.
Doc, Token and Span objects
At this point, there are three important types of objects to remember
Step6: Please note that even though these look like strings, they are not
Step7: These Token objects have many useful methods and attributes, which we can list by using dir(). We haven't really talked about attributes during this course, but while methods are operations or activities performed by that object, attributes are 'static' features of the objects. Methods are called using parantheses (as we have seen with str.upper(), for instance), while attributes are indicated without parantheses. We will see some examples below.
You can find more detailed information about the token methods and attributes in the documentation.
Step8: Let's inspect some of the attributes of the tokens. Can you figure out what they mean? Feel free to try out a few more.
Step9: Notice that some of the attributes end with an underscore. For example, tokens have both lemma and lemma_ attributes. The lemma attribute represents the id of the lemma (integer), while the lemma_ attribute represents the unicode string representation of the lemma. In practice, you will mostly use the lemma_ attribute.
Step10: You can also use spacy.explain to find out more about certain labels
Step11: You can create a Span object from the slice doc[start
Step12: Text, sentences and noun_chunks
If you call the dir() function on a Doc object, you will see that it has a range of methods and attributes. You can read more about them in the documentation. Below, we highlight three of them
Step13: First of all, text simply gives you the whole document as a string
Step14: sents can be used to get all the sentences. Notice that it will create a so-called 'generator'. For now, you don't have to understand exactly what a generator is (if you like, you can read more about them online). Just remember that we can use generators to iterate over an object in a fast and efficient way.
Step15: If you find this difficult to comprehend, you can also simply convert it to a list and then loop over the list. Remember that this is less efficient, though.
Step16: The benefit of converting it to a list is that we can use indices to select certain sentences. For example, in the following we only print some information about the tokens in the second sentence.
Step17: Similarly, noun_chunks can be used to create a generator for all noun chunks in the text.
Step18: Named Entities
Finally, we can also very easily access the Named Entities in a text using ents. As you can see below, it will create a tuple of the entities recognized in the text. Each entity is again a span of tokens, and you can access the type of the entity with the label_ attribute of Span.
Step19: Pretty cool, but what does NORP mean? Again, you can use spacy.explain() to find out
Step20: Next, you will want to define which annotators to use and which output format should be produced (text, json, xml, conll, conllu, serialized). Annotating the document then is very easy. Note that Stanford CoreNLP uses some large models that can take a long time to load. You can read more about it here.
Step21: In the next cells, we will simply show some examples of how to access the linguistic annotations if you use the properties as shown above. If you'd like to continue working with Stanford CoreNLP in the future, you will likely have to experiment a bit more.
Step22: 4. NLTK vs. spaCy vs. CoreNLP
There might be different reasons why you want to use NLTK, spaCy or Stanford CoreNLP. There are differences in efficiency, quality, user friendliness, functionalities, output formats, etc. At this moment, we advise you to go with spaCy because of its ease in use and high quality performance.
Here's an example of both NLTK and spaCy in action.
The example text is a case in point. What goes wrong here?
Try experimenting with the text to see what the differences are.
Step23: Do you want to learn more about the differences between NLTK, spaCy and CoreNLP? Here are some links
Step24: Exercise 1
Step25: Exercise 2
Step26: Exercise 3
Step27: Now create a function called get_vocabulary that takes one positional parameter filenames. It should read in all filenames and return a set called unique_words, that contains all unique words in the files.
Step28: Exercise 4 | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_19_More_about_Natural_Language_Processing_Tools_(spaCy).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
!pip install -U spacy
%%bash
python -m spacy download en_core_web_sm
Explanation: Chapter 19 - More about Natural Language Processing Tools (spaCy)
Text data is unstructured. But if you want to extract information from text, then you often need to process that data into a more structured representation. The common idea for all Natural Language Processing (NLP) tools is that they try to structure or transform text in some meaningful way. You have already learned about four basic NLP steps: sentence splitting, tokenization, POS-tagging and lemmatization. For all of these, we have used the NLTK library, which is widely used in the field of NLP. However, there are some competitors out there that are worthwhile to have a look at. One of them is spaCy, which is fast and accurate and supports multiple languages.
At the end of this chapter, you will be able to:
- work with spaCy
- find some additional NLP tools
1. The NLP pipeline
There are many tools and libraries designed to solve NLP problems. In Chapter 15, we have already seen the NLTK library for tokenization, sentence splitting, part-of-speech tagging and lemmatization. However, there are many more NLP tasks and off-the-shelf tools to perform them. These tasks often depend on each other and are therefore put into a sequence; such a sequence of NLP tasks is called an NLP pipeline. Some of the most common NLP tasks are:
Tokenization: splitting texts into individual words
Sentence splitting: splitting texts into sentences
Part-of-speech (POS) tagging: identifying the parts of speech of words in context (verbs, nouns, adjectives, etc.)
Morphological analysis: separating words into morphemes and identifying their classes (e.g. tense/aspect of verbs)
Stemming: identifying the stems of words in context by removing inflectional/derivational affixes, such as 'troubl' for 'trouble/troubling/troubled'
Lemmatization: identifying the lemmas (dictionary forms) of words in context, such as 'go' for 'go/goes/going/went'
Word Sense Disambiguation (WSD): assigning the correct meaning to words in context
Stop words recognition: identifying commonly used words (such as 'the', 'a(n)', 'in', etc.) in text, possibly to ignore them in other tasks
Named Entity Recognition (NER): identifying people, locations, organizations, etc. in text
Constituency/dependency parsing: analyzing the grammatical structure of a sentence
Semantic Role Labeling (SRL): analyzing the semantic structure of a sentence (who does what to whom, where and when)
Sentiment Analysis: determining whether a text is mostly positive or negative
Word Vectors (or Word Embeddings) and Semantic Similarity: representing the meaning of words as rows of real valued numbers where each point captures a dimension of the word's meaning and where semantically similar words have similar vectors (very popular these days)
You don't always need all these modules. But it's important to know that they are
there, so that you can use them when the need arises.
1.1 How can you use these modules?
Let's be clear about this: you don't always need to use Python for this. There are
some very strong NLP programs out there that don't rely on Python. You can typically
call these programs from the command line. Some examples are:
Treetagger is a POS-tagger
and lemmatizer in one. It provides support for many different languages. If you want to
call Treetagger from Python, use treetaggerwrapper.
Treetagger-python also works, but is much slower.
Stanford's CoreNLP is a very powerful system
that is able to process English, German, Spanish, French, Chinese and Arabic. (Each to
a different extent, though. The pipeline for English is most complete.) There are also
Python wrappers available, such as py-corenlp.
The Maltparser has models for English, Swedish, French, and Spanish.
Having said that, there are many NLP-tools that have been developed for Python:
Natural Language ToolKit (NLTK): Incredibly versatile library with a bit of everything.
The only downside is that it's not the fastest library out there, and it lags behind the
state-of-the-art.
Access to several corpora.
Create a POS-tagger. (Some of these are actually state-of-the-art if you have enough training data.)
Perform corpus analyses.
Interface with WordNet.
Pattern: A module that describes itself as a 'web mining module'. Implements a
tokenizer, tagger, parser, and sentiment analyzer for multiple different languages.
Also provides an API for Google, Twitter, Wikipedia and Bing.
Textblob: Another general NLP library that builds on the NLTK and Pattern.
Gensim: For building vector spaces and topic models.
Corpkit is a module for corpus building and corpus management. Includes an interface to the Stanford CoreNLP parser.
SpaCy: Tokenizer, POS-tagger, parser and named entity recogniser for English, German, Spanish, Portuguese, French, Italian and Dutch (more languages in progress). It can also predict similarity using word embeddings.
2. spaCy
spaCy provides a rather complete NLP pipeline: it takes a raw document and performs tokenization, POS-tagging, stop word recognition, morphological analysis, lemmatization, sentence splitting, dependency parsing and Named Entity Recognition (NER). It also supports similarity prediction, but that is outside of the scope of this notebook. The advantage of spaCy is that it is really fast and has good accuracy. In addition, it currently supports multiple languages, among which: English, German, Spanish, Portuguese, French, Italian and Dutch.
In this notebook, we will show you the basic usage. If you want to learn more, please visit spaCy's website; it has extensive documentation and provides excellent user guides.
2.1 Installing and loading spaCy
To install spaCy, check out the instructions here. On this page, it is explained exactly how to install spaCy for your operating system, package manager and desired language model(s). Simply run the suggested commands in your terminal or cmd. Alternatively, you can probably also just run the following cells in this notebook:
End of explanation
import spacy
nlp = spacy.load('en_core_web_sm') # other languages: de, es, pt, fr, it, nl
Explanation: Now, let's first load spaCy. We import the spaCy module and load the English tokenizer, tagger, parser, NER and word vectors.
End of explanation
#%%bash
#python -m spacy download en_core_web_md
#%%bash
#python -m spacy download en_core_web_lg
# uncomment one of the lines below if you want to load the medium or large model instead of the small one
# nlp = spacy.load('en_core_web_md')
# nlp = spacy.load('en_core_web_lg')
Explanation: nlp is now a Python object representing the English NLP pipeline that we can use to process a text.
EXTRA: Larger models
For English, there are three models ranging from 'small' to 'large':
en_core_web_sm
en_core_web_md
en_core_web_lg
By default, the smallest one is loaded. Larger models should have a better accuracy, but take longer to load. If you like, you can use them instead. You will first need to download them.
End of explanation
doc = nlp("I have an awesome cat. It's sitting on the mat that I bought yesterday.")
Explanation: 2.2 Using spaCy
Parsing a text with spaCy after loading a language model is as easy as follows:
End of explanation
# Iterate over the tokens
for token in doc:
print(token)
print()
# Select one single token by index
first_token = doc[0]
print("First token:", first_token)
Explanation: doc is now a Python object of the class Doc. It is a container for accessing linguistic annotations and a sequence of Token objects.
Doc, Token and Span objects
At this point, there are three important types of objects to remember:
A Doc is a sequence of Token objects.
A Token object represents an individual token — i.e. a word, punctuation symbol, whitespace, etc. It has attributes representing linguistic annotations.
A Span object is a slice from a Doc object and a sequence of Token objects.
Since Doc is a sequence of Token objects, we can iterate over all of the tokens in the text as shown below, or select a single token from the sequence:
End of explanation
for token in doc:
print(token, "\t", type(token))
Explanation: Please note that even though these look like strings, they are not:
End of explanation
dir(first_token)
Explanation: These Token objects have many useful methods and attributes, which we can list by using dir(). We haven't really talked about attributes during this course, but while methods are operations or activities performed by that object, attributes are 'static' features of the objects. Methods are called using parentheses (as we have seen with str.upper(), for instance), while attributes are indicated without parentheses. We will see some examples below.
You can find more detailed information about the token methods and attributes in the documentation.
End of explanation
# Print attributes of tokens
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_)
Explanation: Let's inspect some of the attributes of the tokens. Can you figure out what they mean? Feel free to try out a few more.
End of explanation
for token in doc:
print(token.lemma, token.lemma_)
Explanation: Notice that some of the attributes end with an underscore. For example, tokens have both lemma and lemma_ attributes. The lemma attribute represents the id of the lemma (integer), while the lemma_ attribute represents the unicode string representation of the lemma. In practice, you will mostly use the lemma_ attribute.
End of explanation
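If you ever need to go the other way, from the integer id back to the string, the id is a key into the shared vocabulary; a small sketch:
# Look up the lemma hash in the shared StringStore
for token in doc[:3]:
    print(token.lemma, "->", nlp.vocab.strings[token.lemma])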
# try out some more, such as NN, ADP, PRP, VBD, VBP, VBZ, WDT, aux, nsubj, pobj, dobj, npadvmod
spacy.explain("VBZ")
Explanation: You can also use spacy.explain to find out more about certain labels:
End of explanation
# Create a Span
a_slice = doc[2:5]
print(a_slice, type(a_slice))
# Iterate over Span
for token in a_slice:
print(token.lemma_, token.pos_)
Explanation: You can create a Span object from the slice doc[start : end]. For instance, doc[2:5] produces a span consisting of tokens 2, 3 and 4. Stepped slices (e.g. doc[start : end : step]) are not supported, as Span objects must be contiguous (cannot have gaps). You can use negative indices and open-ended ranges, which have their normal Python semantics.
End of explanation
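The negative indices and open-ended ranges mentioned above behave just like they do for lists; for example:
# Open-ended and negative slices also produce Span objects
print(doc[-3:])       # last three tokens
print(doc[5:])        # everything from token 5 onwards
print(doc[-1], type(doc[-1]))   # a single Token, not a Span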
dir(doc)
Explanation: Text, sentences and noun_chunks
If you call the dir() function on a Doc object, you will see that it has a range of methods and attributes. You can read more about them in the documentation. Below, we highlight three of them: text, sents and noun_chunks.
End of explanation
print(doc.text)
print(type(doc.text))
Explanation: First of all, text simply gives you the whole document as a string:
End of explanation
# Get all the sentences as a generator
print(doc.sents, type(doc.sents))
# We can use the generator to loop over the sentences; each sentence is a span of tokens
for sentence in doc.sents:
print(sentence, type(sentence))
Explanation: sents can be used to get all the sentences. Notice that it will create a so-called 'generator'. For now, you don't have to understand exactly what a generator is (if you like, you can read more about them online). Just remember that we can use generators to iterate over an object in a fast and efficient way.
End of explanation
# You can also store the sentences in a list and then loop over the list
sentences = list(doc.sents)
for sentence in sentences:
print(sentence, type(sentence))
Explanation: If you find this difficult to comprehend, you can also simply convert it to a list and then loop over the list. Remember that this is less efficient, though.
End of explanation
# Print some information about the tokens in the second sentence.
sentences = list(doc.sents)
for token in sentences[1]:
data = '\t'.join([token.orth_,
token.lemma_,
token.pos_,
token.tag_,
str(token.i), # Turn index into string
str(token.idx)]) # Turn index into string
print(data)
Explanation: The benefit of converting it to a list is that we can use indices to select certain sentences. For example, in the following we only print some information about the tokens in the second sentence.
End of explanation
# Get all the noun chunks as a generator
print(doc.noun_chunks, type(doc.noun_chunks))
# You can loop over a generator; each noun chunk is a span of tokens
for chunk in doc.noun_chunks:
print(chunk, type(chunk))
print()
Explanation: Similarly, noun_chunks can be used to create a generator for all noun chunks in the text.
End of explanation
# Here's a slightly longer text, from the Wikipedia page about Harry Potter.
harry_potter = "Harry Potter is a series of fantasy novels written by British author J. K. Rowling.\
The novels chronicle the life of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley,\
all of whom are students at Hogwarts School of Witchcraft and Wizardry.\
The main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal,\
overthrow the wizard governing body known as the Ministry of Magic, and subjugate all wizards and Muggles."
doc = nlp(harry_potter)
print(doc.ents)
print(type(doc.ents))
# Each entity is a span of tokens and is labeled with the type of entity
for entity in doc.ents:
print(entity, "\t", entity.label_, "\t", type(entity))
Explanation: Named Entities
Finally, we can also very easily access the Named Entities in a text using ents. As you can see below, it will create a tuple of the entities recognized in the text. Each entity is again a span of tokens, and you can access the type of the entity with the label_ attribute of Span.
End of explanation
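spaCy also ships with a small visualizer, displaCy, which can highlight these entities inline in a notebook; a minimal sketch:
# Render the named entities with displaCy (works inline in Jupyter/Colab)
from spacy import displacy
displacy.render(doc, style="ent", jupyter=True)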
%%bash
pip install pycorenlp
# https://colab.research.google.com/github/stanfordnlp/stanza/blob/master/demo/Stanza_CoreNLP_Interface.ipynb
# Install stanza; note that the prefix "!" is not needed if you are running in a terminal
!pip install stanza
# Import stanza
import stanza
# Download the Stanford CoreNLP package with Stanza's installation command
# This'll take several minutes, depending on the network speed
corenlp_dir = './corenlp'
stanza.install_corenlp(dir=corenlp_dir)
# Set the CORENLP_HOME environment variable to point to the installation location
import os
os.environ["CORENLP_HOME"] = corenlp_dir
# Examine the CoreNLP installation folder to make sure the installation is successful
!ls $CORENLP_HOME
# Import client module
from stanza.server import CoreNLPClient
# Construct a CoreNLPClient with some basic annotators, a memory allocation of 4GB, and port number 9001
client = CoreNLPClient(
annotators=['tokenize','ssplit', 'pos', 'lemma', 'ner'],
memory='4G',
endpoint='http://localhost:9001',
be_quiet=True)
print(client)
# Start the background server and wait for some time
# Note that in practice this is totally optional, as by default the server will be started when the first annotation is performed
client.start()
import time; time.sleep(10)
# Print background processes and look for java
# You should be able to see a StanfordCoreNLPServer java process running in the background
!ps -o pid,cmd | grep java
# Alternative: the older pycorenlp wrapper. Note that it connects to whatever port the
# CoreNLP server is running on (the stanza client above uses 9001), and it rebinds `nlp`,
# which is why the spaCy model is loaded again further below.
from pycorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('http://localhost:9000')
Explanation: Pretty cool, but what does NORP mean? Again, you can use spacy.explain() to find out:
3. EXTRA: Stanford CoreNLP
Another very popular NLP pipeline is Stanford CoreNLP. You can use the tool from the command line, but there are also some useful Python wrappers that make use of the Stanford CoreNLP API, such as pycorenlp. As you might want to use this in the future, we will provide you with a quick start guide. To use the code below, you will have to do the following:
Download Stanford CoreNLP here.
Install pycorenlp (run pip install pycorenlp in your terminal, or simply run the cell below).
Open a terminal and run the following commands (replace with the correct directory names):
cd LOCATION_OF_CORENLP/stanford-corenlp-full-2018-02-27
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer
This step you will always have to do if you want to use the Stanford CoreNLP API.
End of explanation
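The pycorenlp wrapper imported above is an alternative to Stanza's client: it simply posts text to a running CoreNLP server and returns the parsed response. The sketch below is illustrative only and assumes a server is already listening on localhost:9000 (for example one started with the java command above).
from pycorenlp import StanfordCoreNLP

corenlp = StanfordCoreNLP("http://localhost:9000")  # separate name, so the spaCy `nlp` is not shadowed
output = corenlp.annotate(
    "Harry Potter is a series of fantasy novels.",
    properties={"annotators": "tokenize,ssplit,pos,lemma", "outputFormat": "json"},
)
for sentence in output["sentences"]:
    for token in sentence["tokens"]:
        print(token["word"], token["lemma"], token["pos"])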
# Annotate some text
# text = "Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity."
text = "Harry Potter is a series of fantasy novels written by British author J. K. Rowling.\
The novels chronicle the life of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley,\
all of whom are students at Hogwarts School of Witchcraft and Wizardry.\
The main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal,\
overthrow the wizard governing body known as the Ministry of Magic, and subjugate all wizards and Muggles."
properties= {'annotators': 'tokenize, ssplit, pos, lemma, parse',
'outputFormat': 'json'}
doc = client.annotate(text, properties=properties)
print(type(doc))
Explanation: Next, you will want to define which annotators to use and which output format should be produced (text, json, xml, conll, conllu, serialized). Annotating the document then is very easy. Note that Stanford CoreNLP uses some large models that can take a long time to load. You can read more about it here.
End of explanation
doc.keys()
sentences = doc["sentences"]
first_sentence = sentences[0]
first_sentence.keys()
first_sentence["parse"]
first_sentence["basicDependencies"]
first_sentence["tokens"]
for sent in doc["sentences"]:
for token in sent["tokens"]:
word = token["word"]
lemma = token["lemma"]
pos = token["pos"]
print(word, lemma, pos)
# find out what the entity label 'NORP' means
spacy.explain("NORP")
Explanation: In the next cells, we will simply show some examples of how to access the linguistic annotations if you use the properties as shown above. If you'd like to continue working with Stanford CoreNLP in the future, you will likely have to experiment a bit more.
End of explanation
import nltk
import spacy
nlp = spacy.load('en_core_web_sm')
nltk.download('averaged_perceptron_tagger')
text = "I like cheese very much"
print("NLTK results:")
nltk_tagged = nltk.pos_tag(text.split())
print(nltk_tagged)
print()
print("spaCy results:")
doc = nlp(text)
spacy_tagged = []
for token in doc:
tag_data = (token.orth_, token.tag_,)
spacy_tagged.append(tag_data)
print(spacy_tagged)
Explanation: 4. NLTK vs. spaCy vs. CoreNLP
There might be different reasons why you want to use NLTK, spaCy or Stanford CoreNLP. There are differences in efficiency, quality, user friendliness, functionalities, output formats, etc. At this moment, we advise you to go with spaCy because of its ease in use and high quality performance.
Here's an example of both NLTK and spaCy in action.
The example text is a case in point. What goes wrong here?
Try experimenting with the text to see what the differences are.
End of explanation
import spacy
nlp = spacy.load('en_core_web_sm')
Explanation: Do you want to learn more about the differences between NLTK, spaCy and CoreNLP? Here are some links:
- Facts & Figures (spaCy)
- About speed (CoreNLP vs. spaCy)
- NLTK vs. spaCy: Natural Language Processing in Python
- What are the advantages of Spacy vs NLTK?
- 5 Heroic Python NLP Libraries
5. Some other useful modules for cleaning and preprocessing
Data is often messy, noisy or includes irrelevant information. Therefore, chances are big that you will need to do some cleaning before you can start with your analysis. This is especially true for social media texts, such as tweets, chats, and emails. Typically, these texts are informal and notoriously noisy. Normalising them to be able to process them with NLP tools is a NLP challenge in itself and fully discussing it goes beyond the scope of this course. However, you may find the following modules useful in your project:
tweet-preprocessor: This library makes it easy to clean, parse or tokenize the tweets. It supports cleaning, tokenizing and parsing of URLs, hashtags, reserved words, mentions, emojis and smileys.
emot: Emot is a python library to extract the emojis and emoticons from a text (string). All the emojis and emoticons are taken from a reliable source, i.e. Wikipedia.org.
autocorrect: Spelling corrector (Python 3).
html: Can be used to remove HTML tags.
chardet: Universal encoding detector for Python 2 and 3.
ftfy: Fixes broken unicode strings.
If you are interested in reading more about these topics, these papers discuss preprocessing and normalization:
Assessing the Consequences of Text Preprocessing Decisions (Denny & Spirling 2016). This paper is a bit long, but it provides a nice discussion of common preprocessing steps and their potential effects.
What to do about bad language on the internet (Eisenstein 2013). This is a quick read that we recommend everyone to at least look through.
And here is a nice blog about character encoding.
Exercises
End of explanation
doc = nlp("I have an awesome cat. It's sitting on the mat that I bought yesterday.")
for token in doc:
print(token.pos_, token.tag_)
spacy.explain("PRON")
Explanation: Exercise 1:
What is the difference between token.pos_ and token.tag_? Read the docs to find out.
What do the different labels mean? Use spacy.explain to inspect some of them. You can also refer to this page for a complete overview.
End of explanation
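One possible way to start the comparison (a sketch, reusing the doc and the spacy import from the cell above): token.pos_ holds the coarse Universal POS tag, token.tag_ the fine-grained treebank tag, and spacy.explain describes either one.
for token in doc:
    print(token.text, token.pos_, spacy.explain(token.pos_), "|", token.tag_, spacy.explain(token.tag_))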
filename = "../Data/Charlie/charlie.txt"
# read the file and process with spaCy
# print all information about the tokens
# print all information about the sentences
# print all information about the noun chunks
# print all information about the entities
Explanation: Exercise 2:
Let's practice a bit with processing files. Open the file charlie.txt for reading and use read() to read its content as a string. Then use spaCy to annotate this string and print the information below. Remember: you can use dir() to remind yourself of the attributes.
For each token in the text:
1. Text
2. Lemma
3. POS tag
4. Whether it's a stopword or not
5. Whether it's a punctuation mark or not
For each sentence in the text:
1. The complete text
2. The number of tokens
3. The complete text in lowercase letters
4. The text, lemma and POS of the first word
For each noun chunk in the text:
1. The complete text
2. The number of tokens
3. The complete text in lowercase letters
4. The text, lemma and POS of the first word
For each named entity in the text:
1. The complete text
2. The number of tokens
3. The complete text in lowercase letters
4. The text, lemma and POS of the first word
End of explanation
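A possible sketch for the token and sentence parts of this exercise (the noun chunk and entity parts follow the same pattern); it assumes the file path from the stub above and the nlp pipeline loaded earlier.
with open("../Data/Charlie/charlie.txt", encoding="utf-8") as infile:
    content = infile.read()
doc = nlp(content)

# token-level information
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.is_stop, token.is_punct)

# sentence-level information
for sent in doc.sents:
    first = sent[0]
    print(sent.text, len(sent), sent.text.lower(), first.text, first.lemma_, first.pos_)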
import glob
filenames = glob.glob("../Data/Dreams/*.txt")
print(filenames)
Explanation: Exercise 3:
Remember how we can use the os and glob modules to process multiple files? For example, we can read all .txt files in the Dreams folder like this:
End of explanation
def get_vocabulary(filenames):
# your code here
# test your function here
unique_words = get_vocabulary(filenames)
print(unique_words, len(unique_words))
assert len(unique_words) == 415 # if your code is correct, this should not raise an error
Explanation: Now create a function called get_vocabulary that takes one positional parameter filenames. It should read in all filenames and return a set called unique_words, that contains all unique words in the files.
End of explanation
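One possible sketch of get_vocabulary, using spaCy tokens as "words". The expected count of 415 depends on exactly how a word is defined (tokenization, casing, punctuation), so different choices here may give a different number.
def get_vocabulary(filenames):
    unique_words = set()
    for filename in filenames:
        with open(filename, encoding="utf-8") as infile:
            doc = nlp(infile.read())
        for token in doc:
            unique_words.add(token.text)
    return unique_words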
import glob
filenames = glob.glob("../Data/Dreams/*.txt")
print(filenames)
def get_sentences_with_keyword(filenames, keyword=None):
#your code here
# test your function here
sentences = get_sentences_with_keyword(filenames, keyword="toy")
print(sentences)
assert len(sentences) == 4 # if your code is correct, this should not raise an error
Explanation: Exercise 4:
Create a function called get_sentences_with_keyword that takes one positional parameter filenames and one keyword parameter keyword with default value None. It should read in all filenames and return a list called sentences that contains all sentences (the complete texts) with the keyword.
Hints:
- It's best to check for the lemmas of each token
- Lowercase both your keyword and the lemma
End of explanation |
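A possible sketch following the hints above: lowercase both the keyword and each token's lemma, and collect the full sentence text whenever they match.
def get_sentences_with_keyword(filenames, keyword=None):
    sentences = []
    if keyword is None:
        return sentences
    for filename in filenames:
        with open(filename, encoding="utf-8") as infile:
            doc = nlp(infile.read())
        for sent in doc.sents:
            if any(token.lemma_.lower() == keyword.lower() for token in sent):
                sentences.append(sent.text)
    return sentences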
10,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare data so the format is as compatible with the 2018 data as possible
Step1: Examine response rates per day
Step2: The high spike seen on 1/13/20 aligns with the time when the survey was publicized on Twitter. To consider the potential effects of this, we examine how the response rate varied by various demographic information.
Examine response rates by contribution length, level and interest in next level
Step3: The survey was advertised on Twitter, and two groups had the largest number of disproportionate responses. Those responses came from either contributors working on their membership, or users that have been contributing less than a year. The largest group is users that fall into both categories.
Step4: Univariate histograms
In the following sections, we look at the rest of the demographic variables individually. This allows us to see who responded to the survey.
Step5: 2-Way Cross Tabulations
Before we look at the relation between demographic data and questions of interest, we look at two-way cross tabulations in demographic data.
Step6: Most Important Project
The following plots use offset stacked bar charts, showing the overall rankings of the most important project. They also display the specific distributions of rankings, for each choice.
Step7: Mentoring is the most important project, with very few respondents rating it negatively, followed by contributing to documentation.
Step8: It is reasonable to expect that different roles in Kubernetes may value different projects more highly. The plot above shows that for many issues and role, this is not true. Some items of note are while most groups rate Cleaning up the OWNERS file as their least important, there is a clear trend for Subproject Owners and Reviewers to view this as more important, although a large portion of them still rate this low. Similarly Subproject Owners view mentoring as less important than other groups.
Step9: Similarly to contributor roles, the interest in the next level does not appear to be a major factor in the ranking order. Mentoring is still very important to almost all levels of interest, with a minor exception being Subproject Owners. The group that stands out a bit are those who aren't interested in the next level, who value GitHub Management higher than some other projects.
Step10: Another interesting take away is that the most important projects do not vary widely across the length of contribution. Once again, Mentoring is the most important project across all demographics.
Analysis of Common Blockers
In this section, we use offset stacked bar charts again. They visualize which blockers cause the most issues for contributors.
Step11: The most frequent blocker across all contributors is debugging test failures, followed by finding issues to work on.
Step12: When we look at blockers, by the length of the contributor, we can see that contributors across all lengths have the most issues with debugging test failures. But, finding important issues varies across the groups. Below, we look closer at these two groups.
Step13: When it comes to debugging, it is less of an issue for new contributors, most likely because they are not as focused on contributing code yet. After their first year, it becomes a larger issue, but slowly improves over time.
Step14: Looking at contributors that have trouble finding issues to work on there is a clear trend that the longer you are a Kubernetes contributor, the less of an issue this becomes, suggesting a continued effort is needed to surface good first issues, and make new contributors aware of them.
Step15: When we segment the contributors by level, we again see that debugging test failures is the largest blocker among all groups. Most blockers affect contributor levels in similar patterns. The one slight exception, though, is that Subproject Owners are the only cohort to not struggle with finding the right SIG.
Step16: This in-depth view confirms that debugging test failures is an issue across all contributor levels, but is a larger issue for Subproject Owners and Approvers.
Step17: When we look at the spread of blockers across interest in the next level, we see that those who are interested are the most likely to struggle to find the best issues to work on. In the plot below, this is shown in more detail.
Step18: Because it is expected that the large increase in Twitter users may have affected the results of the survey, we looked at the users who reported using Twitter as their primary source of news, and how they compared to those who didn't.
Step19: Contributors who use Twitter as their primary source of news, about Kubernetes, are less likely to report struggling with debugging test failures. This is primarily because many Twitter users are newer ones.
Step20: Conversely, those who use Twitter do struggle to find Issues to Work on, again because most contributors who primarily use Twitter for their news tend to be new users.
First Place News is Seen
Step21: Most contributors are getting their news primarily from the official dev mailing list.
Step22: Looking across each level of the contributor ladder, most levels display the same patterns, with all groups primarily using the dev mailing list. The second most common source of news is the three Slack channels.
Step23: Looking at news sources by interest in next level, we can see that many people who aren't interested rely on the kubernetes-sig-contribex mailing list at a much higher proportion than the other groups. Those who are interested in the next level, either through mentoring or by themselves, tend to use Twitter more. But, this is likely an artifact of the survey being advertised on Twitter.
When we look at news use by the length of time contributing, we see that, compared to other groups, contributors who have been contributing for less than a year rely less on the dev mailing list, replacing it with Twitter and possibly Slack.
Twitter Users
Because of the large increase in responses after the survey was advertised on Twitter, we pay special attention to what type of users list Twitter as their primary source of news.
Step24: Of users who get their news primarily through Twitter, most are members, or those working on becoming members
Step25: Many contributors who use Twitter as their primary news source have been contributing for less than a year. There is also a large proportion of users who have been contributing for two to three years. It is unclear why this cohort appears to use Twitter in larger numbers than users who have been contributing for one to two years, or why it appears to use Twitter at a level proportionately greater than even new contributors.
k/community Use
Step26: Of the contributors that rely on the k/community GitHub page, there are relatively equal proportions from all contributor length cohorts.
Step27: The distribution of contributors by their levels is an interesting mix, showing that both the highest and lowest levels of the ladder rely on the k/community GitHub. They rely on this more than the middle levels. This may be a way to connect the two communities, especially on issues of Mentoring support.
Step28: The above plot shows the proportion of users in each bucket created by the two-way faceting, and so it can be a bit misleading. For example, 100% of users who have been contributing one to two years and do not know about the existence of the contributor ladder check k/community first. Using the cross-tabulations above, this is only four people. We can see that across all lengths of contributions, both members and those working on membership use the k/community page.
Analysis of Contribution Areas
Step29: As the Kubernetes community moves towards using more repositories to better organize the code, we can see that more contributions are being made in other repositories. Most of these are still under the Kubernetes project. Documentation is the second highest area of contributions.
Step30: When we exclude first-year users, the pattern remains mostly the same, with Documentation being replaced as the second most commonly contributed area by code inside k8s/k8s.
Step31: The contribution areas vary by the user's level on the ladder, with those working on membership and those unaware that there is a ladder focusing more on documentation than the other levels. Unsurprisingly, a large proportion of those who do not know there is a ladder have not yet contributed.
Step32: Looking at contribution areas by length of time contributing, it is clear that the primary area new contributors work on is documentation. For no cohort is the core k8s/k8s repository the largest area of contribution, showing that the ongoing effort to reorganize the code is succeeding.
Step33: Contributors with employer support are more likely to contribute to the main repository, but a healthy portion of those without employer support, or with a complicated support situation, also contribute. The main areas that see less contributions from those without employer support are community development and plugin work.
Step34: Removing the new users, and repeating the analysis done above does, not change the overall distributions much.
Resource Use Analysis
Step35: Among all contributors, Slack and GitHub are the most frequently used resources, while dicuss.kubernetes.io and unofficial channels are almost never used.
Step36: When segmenting out the resource use by contribution length, the pattern stays roughly the same across all cohorts. Google Docs, which is used in more in administrative tasks, increases the longer a contributor is involved in the project.
Step37: The use of resources, across interest in the next level, shows only one major difference between the groups. Contributors not interested in the next level tend to use GitHub discussions, much less than other groups.
Step38: The level of the contributor on the ladder shows a large difference between those that use Google Groups and Mailing Lists, as well as those who use Google Docs, etc. The primary users of Zoom meetings tend to be Subproject Owners.
Step39: The largest group not using Google Groups are those who do not know that there is a contributor ladder. This suggests that advertising the group may lead to more people knowing about the very existence of a contributor ladder. Or, that the existence of the contributor ladder is discussed more on Google Groups, as compared to other channels.
Step40: The use of Google Drive, which is primarily used for administrative tasks, increases the longer a contributor is involved in the project, which is not a surprising outcome.
Step41: There is a slight tendency that the longer the contributor is involved in the project, the less they use YouTube. This is a very weak association, though, and hides the fact that most contributors across all lengths do not use YouTube.
Step42: The one group that does tend to use the YouTube recording, at least a few times a month, are those working on membership. This suggests that the resources available on YouTube are helpful to a subset of the community.
Use of Help Wanted Labels
Step43: A majority of users, across all demographics, make use of the Help Wanted and Good First Issue labels on GitHub.
Step44: The relative proportions of contributors who use the labels does not change with the length of contribution. The one exception being that very few contributors, who have been doing so for 3+ years, don't use the labels.
Step45: The plot above shows that these labels are especially helpful for those who are interested in the next level of the contributor ladder.
Step46: When analyzing the help wanted labels across levels of the contributor ladder, most groups do not have a large majority class, indicating that this is not a variable that predicts the usefulness of the labels.
Interest in Mentoring
Step47: Most contributors feel that they do not have enough experience to mentor others, suggesting that more outreach can be done. This can make all but the newest contributors feel confident that they have something to offer.
Step48: A majority of those who already mentor, as well as those who are interested in mentoring, have employers that support their work on Kubernetes. Those who have a complicated relationship with their employer are the only group to whom the most common response was not having enough time, or support.
Step49: There is no clear pattern between the interest to mentor and interest in the next contributor level. The only exception is that those who want to mentor feel like they don't know enough to do so.
Participation in Meet our Contributors (MoC)
Step50: Across all contributors, most do not know about the existence of Meet our Contributors.
Step51: Among all contributors who are interested in the next level of the ladder, most do still not know about MoC. This suggests a larger outreach would be useful, as most who do watch find it helpful.
Step52: As before, across all cohorts of contributor levels, most do not know about MoC. But, for those who do watch it, they find it helpful. The only levels where more contributors know of it, compared to those that don't, are subproject owners and approvers.
In the next series of plots, we analyze only those contributors who do not know about MoC.
Step53: Across all levels of the contributor ladder, many who are interested in the next level do not know about the existence of MoC.
Step54: The plot above shows that a majority of those unaware, have not been contributors for very long. This is regardless of their level on the contributor ladder.
Step55: The plot above shows that MoC is found useful by those who watch it. This is the case for those who have either attained the highest level on the ladder, or are interested in the next level. This holds true across all levels of the ladder. This suggests that MoC should not only cover information helpful to those trying to become members, but also those who wish to become approvers, reviewers, and subproject owners.
Step56: The majority of those who found MoC useful are contributors who are working towards their membership. This is suggesting that while most of the material might be geared towards them, the previous plot shows the importance of striking a balance between the two.
Ways to Increase Attendance at Thursday Meetings
Step57: The primary reason contributors don't attend Thursday meetings is that they have too many meetings in their personal lives. As this is not something the Kubernetes community can control, we suggest they focus on the second most common suggestion
Step58: Across contributor levels, the dominant reason for their attendance would be "fewer meetings in my personal schedule". What is interesting is that for those who are not yet members, it is less of a dominating reason than other cohorts. These contributors give almost equal weight to many different changes, some of which may be appropriate to the Thursday meeting, but some of which may indicate the need for new types of outreach programming.
Step59: Segmenting the contributors, by their length of contribution, does not reveal any patterns that are widely different than when looking at all the contributors as a whole.
Step60: When looking at the distribution of contributors who would attend the meetings if they were held at a different time, we can see the large impact that location has. The number of contributors located in Oceania and Africa is small, which makes forming significant conclusions for those regions more difficult. There are many contributors from Asia, however, indicating that the timing of the meetings, simply because of the timezones they live in, is a major barrier for a large portion of them.
Reasons for Not Attending Summits
Step61: The largest reason for not attending the summits is that contributors feel they do not have enough funding to attend.
Step62: When we look at the reasons for not attending the summits by the length of time a contributor has been involved with the project, we see that, in addition to lacking funding, longer-tenured contributors tend to be helping at other events co-located with KubeCon even during the summits.
Step63: As above, the higher up the ladder one is, the more likely they are to be helping out at another event. Interestingly, while approvers are higher on the ladder than reviewers, they are less likely to be attending KubeCon, as well as the summits.
Step64: Unsurprisingly, funding is a greater barrier to attendance to those who only work on Kubernetes on their own time, but is still a concern for about a third of those with some support from their employer.
Agreement with Statements
Step65: Overall, the plot above displays the proportions one would hope to see. Many contributors are confident in their ability to understand continuous integration and its error messages well enough to debug their code, while not feeling overburdened by test failures or notifications.
survey_data = prepare_2019.get_df(
"contribex-survey-2019.csv"
)
Explanation: Prepare data so the format is as compatible with the 2018 data as possible
End of explanation
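The prepare_2019 helper is defined outside this notebook, so its contents are not shown here; the notebook also assumes pandas (pd), plotnine (p9), textwrap.wrap and the make_*_chart helpers are imported. Purely as an illustration of the kind of preparation this step describes, a sketch might read the raw CSV export and rename/parse a few columns so they line up with the 2018 analysis; every column name below is hypothetical.
import pandas as pd

def get_df_sketch(path):
    df = pd.read_csv(path)
    # Hypothetical rename so a 2019 question header matches the 2018 column name
    df = df.rename(columns={"How long have you been contributing?": "Contributing_Length"})
    # Hypothetical timestamp column parsed down to a plain date for the per-day counts
    df["date_taken"] = pd.to_datetime(df["Start Date"]).dt.date
    return df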
(
p9.ggplot(survey_data, p9.aes(x="date_taken"))
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(x="Survey Date", y="Number of Responses", title="Responses Per Day")
)
Explanation: Examine response rates per day
End of explanation
response_rates = (
survey_data.groupby(["date_taken", "Contributing_Length", "Level_of_Contributor"])
.count()
.reindex(
pd.MultiIndex.from_product(
[
survey_data[survey_data[y].notnull()][y].unique().tolist()
for y in ["date_taken", "Contributing_Length", "Level_of_Contributor"]
],
names=["date_taken", "Contributing_Length", "Level_of_Contributor"],
),
fill_value=0,
)
.reset_index()
)
response_rates = response_rates.assign(
grp=response_rates.Contributing_Length.str.cat(response_rates.Level_of_Contributor)
)
(
p9.ggplot(response_rates,
p9.aes(x='factor(date_taken)',
y='Respondent_ID',
group='grp',
linetype='Contributing_Length',
color='Level_of_Contributor')) +
p9.geom_line() +
p9.labs(x='Survey Data',
linetype = "Length of Contribution",
color='Contributor Level',
y='Number of Responses') +
p9.theme(axis_text_x = p9.element_text(angle=45, ha='right'))
)
Explanation: The high spike seen on 1/13/20 aligns with the time when the survey was publicized on Twitter. To consider the potential effects of this, we examine how the response rate varied by various demographic information.
Examine response rates by contribution length, level and interest in next level
End of explanation
(
p9.ggplot(survey_data, p9.aes(x="date_taken", fill="factor(Contributing_Length)"))
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(x="Survey Date", y="Number of Responses", title="Responses Per Day", fill='Contributing Length')
)
(
p9.ggplot(
survey_data[survey_data["Level_of_Contributor"].notnull()],
p9.aes(x="date_taken", fill="factor(Level_of_Contributor)"),
)
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(x="Survey Date", y="Number of Responses", title="Responses Per Day", fill='Level of Contributor')
)
(
p9.ggplot(
survey_data[survey_data["Interested_in_next_level"].notnull()],
p9.aes(x="date_taken", fill="factor(Interested_in_next_level)"),
)
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(x="Survey Date", y="Number of Responses", title="Responses Per Day", fill="Interest in Next Level")
)
Explanation: The survey was advertised on Twitter, and two groups had the largest number of disproportionate responses. Those responses came from either contributors working on their membership, or users that have been contributing less than a year. The largest group is users that fall into both categories.
End of explanation
(
p9.ggplot(survey_data, p9.aes(x="Contributing_Length"))
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45))
+ p9.scale_x_discrete(
limits=[
"less than one year",
"one to two years",
"two to three years",
"3+ years",
]
)
+ p9.ggtitle("Number of Contributors by Length of Contribution")
+ p9.xlab("Length of Contribution")
+ p9.ylab("Number of Contributors")
)
(
p9.ggplot(
survey_data[survey_data["Level_of_Contributor"].notnull()],
p9.aes(x="Level_of_Contributor"),
)
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(
title="Number of Contributors by Contributor Level",
x="Contributor Level",
y="Number of Contributors",
)
+ p9.scale_x_discrete(labels=lambda x: ["\n".join(wrap(label, 20)) for label in x])
)
(
p9.ggplot(
survey_data[survey_data["World_Region"].notnull()], p9.aes(x="World_Region")
)
+ p9.geom_bar()
+ p9.labs(
title="Number of Contributors by World Region",
x="World Region",
y="Number of Contributors",
)
)
(
p9.ggplot(
survey_data[survey_data["Interested_in_next_level"].notnull()],
p9.aes(x="Interested_in_next_level"),
)
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(
title="Number of Contributors by Interest in Next Level",
x="Interest in Next Level",
y="Number of Contributors",
)
+ p9.scale_x_discrete(labels=lambda x: ["\n".join(wrap(label, 20)) for label in x])
)
(
p9.ggplot(survey_data, p9.aes(x="Contribute_to_other_OSS"))
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.scale_x_discrete(
limits=["this is my first open source project!", "1 other", "2 or more"]
)
+ p9.ggtitle("Participation in Other Open Source Projects")
+ p9.xlab("Number of other OSS Projects")
+ p9.ylab("Number of Contributors")
)
employer_support = (
p9.ggplot(survey_data, p9.aes(x="Upstream_supported_at_employer"))
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(title="Support by Employer", x="Support Level", y="Count")
)
employer_support
Explanation: Univariate histograms
In the following sections, we look at the rest of the demographic variables individually. This allows us to see who responded to the survey.
End of explanation
pd.crosstab(survey_data.World_Region, survey_data.Level_of_Contributor)
pd.crosstab(survey_data.Contributing_Length, survey_data.Level_of_Contributor).loc[
["less than one year", "one to two years", "two to three years", "three+ years"]
]
pd.crosstab(survey_data.Contributing_Length, survey_data.Contribute_to_other_OSS).loc[
["less than one year", "one to two years", "two to three years", "three+ years"],
["this is my first open source project!", "1 other", "2 or more"],
]
pd.crosstab(
survey_data.Level_of_Contributor, survey_data.Upstream_supported_at_employer
)
pd.crosstab(
survey_data.Interested_in_next_level, survey_data.Upstream_supported_at_employer
)
pd.crosstab(survey_data.Contributing_Length, survey_data.Upstream_supported_at_employer)
pd.crosstab(survey_data.World_Region,
survey_data.Contribute_to_other_OSS)[['this is my first open source project!','1 other','2 or more']]
Explanation: 2-Way Cross Tabulations
Before we look at the relation between demographic data and questions of interest, we look at two-way cross tabulations in demographic data.
End of explanation
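Because the cohorts differ a lot in size, raw counts can be hard to compare directly; pandas can also produce row proportions. A small additional sketch using the same columns as above:
pd.crosstab(
    survey_data.Contributing_Length,
    survey_data.Level_of_Contributor,
    normalize="index",
).round(2)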
(
make_likert_chart(
survey_data,
"Most_Important_Proj:",
["1", "2", "3", "4", "5", "6", "7"],
max_value=7,
sort_x=True,
)
+ p9.labs(
x="Project",
color="Ranking",
fill="Ranking",
y="",
title="Distribution of Ranking of Most Important Projects",
)
)
Explanation: Most Important Project
The following plots use offset stacked bar charts, showing the overall rankings of the most important project. They also display the specific distributions of rankings, for each choice.
End of explanation
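make_likert_chart is a plotting helper defined outside this notebook. To make the idea of an offset (diverging) stacked bar chart concrete, the sketch below builds one directly with plotnine on a small made-up table: ratings at or below the midpoint are counted as negative, so their bars extend to the opposite side of zero.
import pandas as pd
import plotnine as p9

toy = pd.DataFrame({
    "item": ["Mentoring"] * 4 + ["Docs"] * 4,
    "rating": [6, 7, 7, 2, 3, 5, 2, 1],
})
midpoint = 4  # ratings at or below the midpoint count as "low"
summary = (
    toy.assign(side=toy.rating.le(midpoint).map({True: "low", False: "high"}))
    .groupby(["item", "side"])
    .size()
    .rename("n")
    .reset_index()
)
# Negate the "low" counts so those bars extend away from zero in the other direction
summary["signed_n"] = summary["n"].where(summary["side"] == "high", -summary["n"])
(
    p9.ggplot(summary, p9.aes(x="item", y="signed_n", fill="side"))
    + p9.geom_col()
    + p9.geom_hline(yintercept=0)
    + p9.coord_flip()
    + p9.labs(x="", y="Count (low ratings shown as negative)")
)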
(
make_likert_chart(
survey_data,
"Most_Important_Proj:",
["1", "2", "3", "4", "5", "6", "7"],
facet_by=["Level_of_Contributor", "."],
max_value=7,
sort_x=True,
)
+ p9.labs(
x="Project",
y="",
fill="Ranking",
color="Ranking",
title="Rankings of projects in order of importance (1-7) by Contributor Level",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.9, "units": "in"}))
)
Explanation: Mentoring is the most important project, with very few respondents rating it negatively, followed by contributing to documentation.
End of explanation
(
make_likert_chart(
survey_data,
"Most_Important_Proj:",
["1", "2", "3", "4", "5", "6", "7"],
facet_by=["Interested_in_next_level", "."],
max_value=7,
sort_x=True,
)
+ p9.labs(
title="Rankings of projects in order of importance (1-7) by Interest in Next Level",
y="",
x="Project",
color="Ranking",
fill="Ranking",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.9, "units": "in"}))
)
Explanation: It is reasonable to expect that different roles in Kubernetes may value different projects more highly. The plot above shows that for many issues and roles, this is not true. One item of note: while most groups rate cleaning up the OWNERS file as their least important project, there is a clear trend for Subproject Owners and Reviewers to view it as more important, although a large portion of them still rate it low. Similarly, Subproject Owners view mentoring as less important than other groups.
End of explanation
(
make_likert_chart(
survey_data,
"Most_Important_Proj:",
["1", "2", "3", "4", "5", "6", "7"],
facet_by=["Contributing_Length", "."],
max_value=7,
sort_x=True,
)
+ p9.labs(
title="Rankings of projects in order of importance (1-7) by Length of Contribution",
y="",
x="Project",
color="Ranking",
fill="Ranking",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.9, "units": "in"}))
)
Explanation: Similarly to contributor roles, the interest in the next level does not appear to be a major factor in the ranking order. Mentoring is still very important to almost all levels of interest, with a minor exception being Subproject Owners. The group that stands out a bit are those who aren't interested in the next level, who value GitHub Management higher than some other projects.
End of explanation
blocker_ratings = list(
reversed(
[
"A frequent blocker",
"Often a problem",
"Sometimes a problem",
"Rarely a problem",
"Not a problem",
]
)
)
(
make_likert_chart(survey_data, "Blocker:", blocker_ratings)
+ p9.labs(
title="Common Blockers", color="Severity", fill="Severity", x="Blocker", y=""
)
)
Explanation: Another interesting take away is that the most important projects do not vary widely across the length of contribution. Once again, Mentoring is the most important project across all demographics.
Analysis of Common Blockers
In this section, we use offset stacked bar charts again. They visualize which blockers cause the most issues for contributors.
End of explanation
(
make_likert_chart(survey_data,'Blocker:',
blocker_ratings,
['Contributing_Length','.'],
wrap_facets=True) +
p9.labs(x='Blocker',
y='',
fill='Rating',
color='Rating',
title='Common Blockers by Length of Contribution') +
p9.theme(strip_text_y = p9.element_text(margin={'r':.9,'units':'in'}))
)
Explanation: The most frequent blocker across all contributors is debugging test failures, followed by finding issues to work on.
End of explanation
(
make_single_likert_chart(survey_data,
'Blocker:_Debugging_test_failures',
'Contributing_Length',
blocker_ratings) +
p9.labs(x='Contributing Length',
y='',
fill="Rating",
color="Rating",
title='Debugging Test Failures Blocker by Contribution Length') +
p9.scale_x_discrete(limits=['less than one year', 'one to two years', 'two to three years', '3+ years'])
)
Explanation: When we look at blockers, by the length of the contributor, we can see that contributors across all lengths have the most issues with debugging test failures. But, finding important issues varies across the groups. Below, we look closer at these two groups.
End of explanation
(
make_single_likert_chart(survey_data,
'Blocker:_Finding_appropriate_issues_to_work_on',
'Contributing_Length',
blocker_ratings) +
p9.labs(x='Contributing Length',
y='',
fill="Rating",
color="Rating",
title='Finding Issues to Work on Blocker by Length of Contribution') +
p9.scale_x_discrete(limits=['less than one year',
'one to two years',
'two to three years',
'3+ years'])
)
Explanation: When it comes to debugging, it is less of an issue for new contributors, most likely because they are not as focused on contributing code yet. After their first year, it becomes a larger issue, but slowly improves over time.
End of explanation
(
make_likert_chart(survey_data,'Blocker:',
blocker_ratings,
['Level_of_Contributor','.']) +
p9.labs(x='Blocker',
y='',
fill='Rating',
color='Rating',
title='Common Blockers by Contributor Level') +
p9.theme(strip_text_y = p9.element_text(margin={'r':.9,'units':'in'}))
)
Explanation: Looking at contributors that have trouble finding issues to work on there is a clear trend that the longer you are a Kubernetes contributor, the less of an issue this becomes, suggesting a continued effort is needed to surface good first issues, and make new contributors aware of them.
End of explanation
(
make_single_likert_chart(
survey_data,
"Blocker:_Debugging_test_failures",
"Level_of_Contributor",
blocker_ratings,
)
+ p9.labs(
x="Contributor Level",
y="",
fill="Rating",
color="Rating",
title="Debugging Test Failures Blocker by Level of Contributor",
)
)
Explanation: When we segment the contributors by level, we again see that debugging test failures is the largest blocker among all groups. Most blockers affect contributor levels in similar patterns. The one slight exception, though, is that Subproject Owners are the only cohort to not struggle with finding the right SIG.
End of explanation
(
make_likert_chart(
survey_data, "Blocker:", blocker_ratings, ["Interested_in_next_level", "."]
)
+ p9.labs(
x="Blocker",
y="",
fill="Rating",
color="Rating",
title="Common Blockers by Interest in Next Level",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.9, "units": "in"}))
)
Explanation: This in-depth view confirms that debugging test failures is an issue across all contributor levels, but is a larger issue for Subproject Owners and Approvers.
End of explanation
(
make_single_likert_chart(survey_data,
'Blocker:_Finding_appropriate_issues_to_work_on',
'Interested_in_next_level',
blocker_ratings) +
p9.labs(x='Interest in next level',
y='Percent',fill="Rating",
color="Rating",
title='Finding Issues to Work on Blocker by Interest in the Next Level')
)
Explanation: When we look at the spread of blockers across interest in the next level, we see that those who are interested are the most likely to struggle to find the best issues to work on. In the plot below, this is shown in more detail.
End of explanation
survey_data.loc[:, "Check_for_news:_@kubernetesio_twitter"] = survey_data[
"Check_for_news:_@kubernetesio_twitter"
].astype(str)
(
make_single_likert_chart(
survey_data,
"Blocker:_Debugging_test_failures",
"Check_for_news:_@kubernetesio_twitter",
blocker_ratings,
)
+ p9.labs(
x="Twitter Use",
y="",
fill="Rating",
color="Rating",
title="Debugging Test Failures Blocker by Twitter Use",
)
+ p9.scale_x_discrete(labels=["Doesn't Use Twitter", "Uses Twitter"])
)
Explanation: Because it is expected that the large increase in Twitter users may have affected the results of the survey, we looked at the users who reported using Twitter as their primary source of news, and how they compared to those who didn't.
End of explanation
(
make_single_likert_chart(
survey_data,
"Blocker:_Finding_appropriate_issues_to_work_on",
"Check_for_news:_@kubernetesio_twitter",
blocker_ratings,
)
+ p9.labs(
x="Twitter Use",
y="",
fill="Rating",
color="Rating",
title="Finding Issues Blocker by Twitter Use",
)
+ p9.scale_x_discrete(labels=["Doesn't Use Twitter", "Uses Twitter"])
)
Explanation: Contributors who use Twitter as their primary source of news, about Kubernetes, are less likely to report struggling with debugging test failures. This is primarily because many Twitter users are newer ones.
End of explanation
#Convert back to an int after converting to a string for categorical views above
survey_data.loc[:,'Check_for_news:_@kubernetesio_twitter'] = survey_data[
'Check_for_news:_@kubernetesio_twitter'].astype(int)
(
make_bar_chart(survey_data,'Check_for_news:') +
p9.labs(title='Where Contributors See News First',
x='News Source',
y='Count')
)
Explanation: Conversely, those who use Twitter do struggle to find Issues to Work on, again because most contributors who primarily use Twitter for their news tend to be new users.
First Place News is Seen
End of explanation
(
make_bar_chart(
survey_data, "Check_for_news:", ["Level_of_Contributor", "."], proportional=True
)
+ p9.labs(
title="Where Contributors See News First by Contributor Level",
x="News Source",
y="Proportion",
fill="News Source",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.9, "units": "in"}))
)
Explanation: Most contributors are getting their news primarily from the official dev mailing list.
End of explanation
(
make_bar_chart(
survey_data,
"Check_for_news:",
["Interested_in_next_level", "."],
proportional=True,
)
+ p9.labs(
title="Where Contributors See News First by Interest in Next Level",
x="News Source",
y="Proportion",
fill="News Source",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.9, "units": "in"}))
)
Explanation: Looking across each level of the contributor ladder, most levels display the same patterns, with all groups primarily using the dev mailing list. The second most common source of news is the three Slack channels.
End of explanation
(
make_single_bar_chart(
survey_data[survey_data["Level_of_Contributor"].notnull()],
"Check_for_news:_@kubernetesio_twitter",
"Level_of_Contributor",
proportionally=True,
)
+ p9.labs(
title="Proportion of Contributors, by contributor level, who get news through Twitter First",
y="Proportion",
x="Contributor Level",
)
)
Explanation: Looking at news sources by interest in next level, we can see that many people who aren't interested rely on the kubernetes-sig-contribex mailing list at a much higher proportion than the other groups. Those who are interested in the next level, either through mentoring or by themselves, tend to use Twitter more. But, this is likely an artifact of the survey being advertised on Twitter.
When we look at news use by the length of time contributing, we see that, compared to other groups, contributors who have been contributing for less than a year rely less on the dev mailing list, replacing it with Twitter and possibly Slack.
Twitter Users
Because of the large increase in responses after the survey was advertised on Twitter, we pay special attention to what type of users list Twitter as their primary source of news.
End of explanation
(
make_single_bar_chart(
survey_data[survey_data["Level_of_Contributor"].notnull()],
"Check_for_news:_@kubernetesio_twitter",
"Contributing_Length",
proportionally=True,
)
+ p9.labs(
title="Proportion of Contributors, by contributor level, who get news through Twitter First",
y="Proportion",
x="Contributor Level",
)
)
Explanation: Of users who get their news primarily through Twitter, most are members, or those working on becoming members
End of explanation
(
make_single_bar_chart(survey_data[survey_data['Level_of_Contributor'].notnull()],
'Check_for_news:_kubernetes/community_repo_in_GitHub_(Issues_and/or_PRs)',
'Contributing_Length',proportionally=True) +
p9.scale_x_discrete(limits=['less than one year',
'one to two years',
'two to three years',
'3+ years']) +
p9.labs(x='Length of Contribution',
y='Proportion',
title='Proportion of Contributors who Check k/community GitHub first')
)
Explanation: Many contributors who use Twitter as their primary news source have been contributing for less than a year. There is also a large proportion of users who have been contributing for two to three years. It is unclear why this cohort appears to use Twitter in larger numbers than users who have been contributing for one to two years, or why it appears to use Twitter at a level proportionately greater than even new contributors.
k/community Use
End of explanation
(
make_single_bar_chart(
survey_data[survey_data["Level_of_Contributor"].notnull()],
"Check_for_news:_kubernetes/community_repo_in_GitHub_(Issues_and/or_PRs)",
"Level_of_Contributor",
proportionally=True,
)
+ p9.labs(
x="Contributor Level",
y="Proportion",
title="Proportion of Contributors who Check k/community GitHub first",
)
)
Explanation: Of the contributors that rely on the k/community GitHub page, there are relatively equal proportions from all contributor length cohorts.
End of explanation
(
make_single_bar_chart(
survey_data[survey_data["Level_of_Contributor"].notnull()],
"Check_for_news:_kubernetes/community_repo_in_GitHub_(Issues_and/or_PRs)",
"Level_of_Contributor",
proportionally=True,
facet2="Contributing_Length",
)
+ p9.labs(
x="Contributor Level",
y="Proportion",
title="Proportion of Contributors who Check k/community GitHub first",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 1.15, "units": "in"}))
)
Explanation: The distribution of contributors by their levels is an interesting mix, showing that both the highest and lowest levels of the ladder rely on the k/community GitHub. They rely on this more than the middle levels. This may be a way to connect the two communities, especially on issues of Mentoring support.
End of explanation
(
make_bar_chart(survey_data,'Contribute:') +
p9.labs(x='Contribution',y='Count',title="Areas Contributed To")
)
Explanation: The above plot shows the proportion of users in each bucket created by the two-way faceting, and so it can be a bit misleading. For example, 100% of users who have been contributing one to two years and do not know about the existence of the contributor ladder check k/community first. Using the cross-tabulations above, this is only four people. We can see that across all lengths of contributions, both members and those working on membership use the k/community page.
Analysis of Contribution Areas
End of explanation
(
make_bar_chart(survey_data.query("Contributing_Length != 'less than one year'"),'Contribute:') +
p9.labs(x='Contribution',y='Count',title="Areas Contributed To (Less than 1 year excluded)")
)
Explanation: As the Kubernetes community moves towards using more repositories to better organize the code, we can see that more contributions are being made in other repositories. Most of these are still under the Kubernetes project. Documentation is the second highest area of contributions.
End of explanation
(
make_bar_chart(
survey_data,
"Contribute:",
facet_by=["Level_of_Contributor", "."],
proportional=True,
)
+ p9.labs(
x="Contribution", y="Count", title="Areas Contributed To", fill="Contribution"
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.8, "units": "in"}))
)
Explanation: When we exclude first-year users, the pattern remains mostly the same, with Documentation being replaced as the second most commonly contributed area by code inside k8s/k8s.
End of explanation
(
make_bar_chart(
survey_data,
"Contribute:",
facet_by=["Contributing_Length", "."],
proportional=True,
)
+ p9.labs(
x="Contribution", y="Count", title="Areas Contributed To", fill="Contribution"
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.8, "units": "in"}))
)
Explanation: The contribution areas vary by the user's level on the ladder, with those working on membership and those unaware that there is a ladder focusing more on documentation than the other levels. Unsurprisingly, a large proportion of those who do not know there is a ladder have not yet contributed.
End of explanation
(
make_bar_chart(
survey_data,
"Contribute:",
facet_by=["Upstream_supported_at_employer", "."],
proportional=True,
)
+ p9.labs(
title="Contributions Given Employer Support",
x="Contribution",
y="Count",
fill="Contribution",
color="Contribution",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 1.15, "units": "in"}))
)
Explanation: Looking at contribution areas by length of time contributing, it is clear that the primary area new contributors work on is documentation. For no cohort is the core k8s/k8s repository the largest area of contribution, showing that the ongoing effort to reorganize the code is succeeding.
End of explanation
(
make_bar_chart(
survey_data.query("Contributing_Length != 'less than one year'"),
"Contribute:",
facet_by=["Upstream_supported_at_employer", "."],
proportional=True,
)
+ p9.labs(
title="Contributions Given Employer Suppot (Less than 1 year excluded)",
x="",
y="Count",
fill="Contribution",
color="Contribution",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 1.15, "units": "in"}))
)
Explanation: Contributors with employer support are more likely to contribute to the main repository, but a healthy portion of those without employer support, or with a complicated support situation, also contribute. The main areas that see less contributions from those without employer support are community development and plugin work.
End of explanation
use_ratings = [
"Every Day",
"Several Times a Week",
"Several Times a Month",
"Occasionally",
"Never",
]
use_ratings.reverse()
(
make_likert_chart(survey_data, "Use_freq:", use_ratings, max_is_high=True)
+ p9.labs(
x="Resource",
y="",
color="Frequency",
fill="Frequency",
title="Resource Use Frequency",
)
)
Explanation: Removing the new users and repeating the analysis done above does not change the overall distributions much.
Resource Use Analysis
End of explanation
(
make_likert_chart(
survey_data,
"Use_freq:",
use_ratings,
["Contributing_Length", "."],
max_is_high=True,
)
+ p9.labs(
x="Resource",
y="",
color="Frequency",
fill="Frequency",
title="Resource Use Frequency",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.8, "units": "in"}))
)
Explanation: Among all contributors, Slack and GitHub are the most frequently used resources, while discuss.kubernetes.io and unofficial channels are almost never used.
End of explanation
(
make_likert_chart(
survey_data,
"Use_freq:",
use_ratings,
["Interested_in_next_level", "."],
max_is_high=True,
)
+ p9.labs(
x="Resource",
y="",
color="Frequency",
fill="Frequency",
title="Resource Use Frequency",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.95, "units": "in"}))
)
Explanation: When segmenting out the resource use by contribution length, the pattern stays roughly the same across all cohorts. Google Docs, which is used in more in administrative tasks, increases the longer a contributor is involved in the project.
End of explanation
(
make_likert_chart(
survey_data,
"Use_freq:",
use_ratings,
["Level_of_Contributor", "."],
max_is_high=True,
)
+ p9.labs(
x="Resource",
y="",
color="Frequency",
fill="Frequency",
title="Resource Use Frequency",
)
+ p9.theme(strip_text_y=p9.element_text(margin={"r": 0.8, "units": "in"}))
)
Explanation: The use of resources across interest in the next level shows only one major difference between the groups: contributors not interested in the next level tend to use GitHub discussions much less than other groups.
End of explanation
(
make_single_likert_chart(
survey_data,
"Use_freq:_Google_Groups/Mailing_Lists",
"Level_of_Contributor",
use_ratings,
five_is_high=True,
)
+ p9.labs(
title="Use of Google Groups",
x="Level of Contributor",
y="Percent",
fill="Frequency",
color="Frequency",
)
)
Explanation: The level of the contributor on the ladder shows a large difference between those that use Google Groups and Mailing Lists, as well as those who use Google Docs, etc. The primary users of Zoom meetings tend to be Subproject Owners.
End of explanation
(
make_single_likert_chart(
survey_data,
"Use_freq:_Google_Docs/Forms/Sheets,_etc_(meeting_agendas,_etc)",
"Contributing_Length",
use_ratings,
five_is_high=True,
)
+ p9.labs(
title="Use of Google Drive",
x="Length of Contributions",
y="Percent",
fill="Frequency",
color="Frequency",
)
+ p9.scale_x_discrete(
limits=[
"less than one year",
"one to two years",
"two to three years",
"3+ years",
]
)
)
Explanation: The largest group not using Google Groups are those who do not know that there is a contributor ladder. This suggests that advertising the group may lead to more people knowing about the very existence of a contributor ladder. Or, that the existence of the contributor ladder is discussed more on Google Groups, as compared to other channels.
End of explanation
(
make_single_likert_chart(survey_data,
'Use_freq:_YouTube_recordings_(community_meetings,_SIG/WG_meetings,_etc.)',
'Contributing_Length',
use_ratings,
five_is_high=True) +
p9.labs(title='Use of YouTube Recordings',
x='Length of Contributions',
y='Percent',
fill="Frequency",
color='Frequency') +
p9.scale_x_discrete(limits=['less than one year', 'one to two years', 'two to three years', '3+ years']) +
p9.ylim(-0.75,0.75)
)
Explanation: The use of Google Drive, which is primarily used for administrative tasks, increases the longer a contributor is involved in the project, which is not a surprising outcome.
End of explanation
(
make_single_likert_chart(survey_data[survey_data['Interested_in_next_level'].notnull()],
'Use_freq:_YouTube_recordings_(community_meetings,_SIG/WG_meetings,_etc.)',
'Level_of_Contributor',
use_ratings,
five_is_high=True) +
p9.labs(title='Use of YouTube Recordings',
x='Interest in next level',
y='Percent',
fill="Frequency",
color='Frequency') +
p9.ylim(-0.75,0.75)
)
Explanation: There is a slight tendency that the longer the contributor is involved in the project, the less they use YouTube. This is a very weak association, though, and hides the fact that most contributors across all lengths do not use YouTube.
End of explanation
help_wanted = survey_data[
survey_data[
"Do_you_use_the\xa0Help_Wanted_and/or_Good_First_Issue_labels_on_issues_you_file_to_find_contributors"
].isna()
== False
]
help_plot = (
p9.ggplot(
help_wanted,
p9.aes(
x="Do_you_use_the\xa0Help_Wanted_and/or_Good_First_Issue_labels_on_issues_you_file_to_find_contributors",
fill="Do_you_use_the\xa0Help_Wanted_and/or_Good_First_Issue_labels_on_issues_you_file_to_find_contributors",
),
)
+ p9.geom_bar(show_legend=False)
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(
x="Used Label",
title="Use of Help Wanted and/or Good First Issue Labels",
y="Count",
)
)
help_plot
Explanation: The one group that does tend to use the YouTube recording, at least a few times a month, are those working on membership. This suggests that the resources available on YouTube are helpful to a subset of the community.
Use of Help Wanted Labels
End of explanation
(
help_plot
+ p9.facet_grid(["Contributing_Length", "."])
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.2, "units": "in"}
)
)
)
Explanation: A majority of users, across all demographics, make use of the Help Wanted and Good First Issue labels on GitHub.
End of explanation
(
p9.ggplot(
help_wanted[help_wanted["Interested_in_next_level"].notnull()],
p9.aes(
x="Do_you_use_the\xa0Help_Wanted_and/or_Good_First_Issue_labels_on_issues_you_file_to_find_contributors",
fill="Do_you_use_the\xa0Help_Wanted_and/or_Good_First_Issue_labels_on_issues_you_file_to_find_contributors",
),
)
+ p9.geom_bar(show_legend=False)
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(
x="Used Label",
title="Use of Help Wanted and/or Good First Issue Labels",
y="Count",
)
+ p9.facet_grid(
["Interested_in_next_level", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.2, "units": "in"}
)
)
)
Explanation: The relative proportions of contributors who use the labels does not change with the length of contribution. The one exception being that very few contributors, who have been doing so for 3+ years, don't use the labels.
End of explanation
(
help_plot
+ p9.facet_grid(
["Level_of_Contributor", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
)
)
)
Explanation: The plot above shows that these labels are especially helpful for those who are interested in the next level of the contributor ladder.
End of explanation
available_to_mentor = list(survey_data.columns)[-8]
mentoring_interest = survey_data[survey_data[available_to_mentor].isna() == False]
mentoring_plot = (
p9.ggplot(
mentoring_interest, p9.aes(x=available_to_mentor, fill=available_to_mentor)
)
+ p9.geom_bar(show_legend=False)
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(x="Interest", title="Interest in Mentoring GSOC or Outreach", y="Count")
+ p9.scale_x_discrete(
labels=lambda labels_list: [
"\n".join(wrap(label.replace("/", "/ ").strip(), 30))
for label in labels_list
]
)
)
mentoring_plot
Explanation: When the use of the Help Wanted labels is analyzed across levels of the contributor ladder, most groups do not have a large majority class, indicating that contributor level is not a variable that predicts how useful the labels are.
Interest in Mentoring
End of explanation
(
mentoring_plot
+ p9.facet_grid(["Upstream_supported_at_employer", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)))
+ p9.theme(strip_text_y=p9.element_text(angle=0, ha="left"))
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
)
)
)
Explanation: Most contributors feel that they do not have enough experience to mentor others, suggesting that more outreach could be done to help all but the newest contributors feel confident that they have something to offer.
End of explanation
(
mentoring_plot
+ p9.facet_grid(
["Interested_in_next_level", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
)
)
)
Explanation: A majority of those who already mentor, as well as those who are interested in mentoring, have employers that support their work on Kubernetes. Those who have a complicated relationship with their employer are the only group for whom the most common response was not having enough time or support.
End of explanation
moc_participation_name = list(survey_data.columns)[-9]
moc_data = survey_data[survey_data[moc_participation_name].isna() == False]
moc_plot = (
p9.ggplot(moc_data, p9.aes(x=moc_participation_name, fill=moc_participation_name))
+ p9.geom_bar(show_legend=False)
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(title="Watched or Participated in Meet Our Contributors", x="", y="Count")
)
moc_plot
Explanation: There is no clear relationship between interest in mentoring and interest in the next contributor level. The one exception is that those who want to mentor feel that they don't know enough to do so.
Participation in Meet our Contributors (MoC)
End of explanation
(
p9.ggplot(
moc_data[moc_data["Interested_in_next_level"].notnull()],
p9.aes(x=moc_participation_name, fill=moc_participation_name),
)
+ p9.geom_bar(show_legend=False)
+ p9.facet_grid(
["Interested_in_next_level", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.3, "units": "in"}
),
axis_text_x=p9.element_text(angle=45, ha="right"),
)
+ p9.labs(
x="Watched MoC",
title="Interest in next Level of the Contributor Ladder\n compared to MoC Use",
)
)
Explanation: Across all contributors, most do not know about the existence of Meet our Contributors.
End of explanation
(
moc_plot
+ p9.facet_grid(
["Level_of_Contributor", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
)
)
)
Explanation: Among all contributors who are interested in the next level of the ladder, most still do not know about MoC. This suggests that broader outreach would be useful, as most of those who do watch it find it helpful.
End of explanation
(
p9.ggplot(
moc_data[moc_data['Interested_in_next_level'].notnull() &
(moc_data[moc_participation_name] == "no - didn't know this was a thing")],
p9.aes(x='Interested_in_next_level', fill='Interested_in_next_level'))
+ p9.geom_bar(show_legend=False)
+ p9.facet_grid(
['Level_of_Contributor','.'],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20))
)
+ p9.theme(
strip_text_y = p9.element_text(
angle=0,ha='left',margin={"r": 1.3, "units": "in"}
),
axis_text_x = p9.element_text(angle=45,ha='right')
)
+ p9.labs(
x = 'Interested in Next Level',
y = "Count",
title = "Contributors who don't know about MoC")
)
Explanation: As before, across all contributor-level cohorts, most do not know about MoC, but those who do watch it find it helpful. The only levels where more contributors know of it than do not are subproject owners and approvers.
In the next series of plots, we analyze only those contributors who do not know about MoC.
End of explanation
(
p9.ggplot(
moc_data[
(moc_data[moc_participation_name] == "no - didn't know this was a thing")
],
p9.aes(x="Contributing_Length", fill="Contributing_Length"),
)
+ p9.geom_bar(show_legend=False)
+ p9.facet_grid(
["Level_of_Contributor", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
),
axis_text_x=p9.element_text(angle=45, ha="right"),
)
+ p9.labs(
x="Length of Contribution",
y="Count",
title="Contributors who don't know about MoC",
)
)
Explanation: Across all levels of the contributor ladder, many who are interested in the next level do not know about the existence of MoC.
End of explanation
(
p9.ggplot(
moc_data[
moc_data["Interested_in_next_level"].notnull()
& (moc_data[moc_participation_name] == "yes - it was helpful")
],
p9.aes(x="Interested_in_next_level", fill="Interested_in_next_level"),
)
+ p9.geom_bar(show_legend=False)
+ p9.facet_grid(
["Level_of_Contributor", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
),
axis_text_x=p9.element_text(angle=45, ha="right"),
)
+ p9.labs(
x="Interested in Next Level",
y="Count",
title="Contributors who watched or participated in \n MoC and found it helpful",
)
+ p9.ylim(0, 15) # Make the same scale as those who don't find it helpful
)
Explanation: The plot above shows that a majority of those who are unaware of MoC have not been contributors for very long, regardless of their level on the contributor ladder.
End of explanation
(
p9.ggplot(
moc_data[(moc_data[moc_participation_name] == "yes - it was helpful")],
p9.aes(x="Contributing_Length", fill="Contributing_Length"),
)
+ p9.geom_bar(show_legend=False)
+ p9.facet_grid(
["Level_of_Contributor", "."],
labeller=lambda label: "\n".join(wrap(label.replace("/", "/ ").strip(), 20)),
)
+ p9.theme(
strip_text_y=p9.element_text(
angle=0, ha="left", margin={"r": 1.34, "units": "in"}
),
axis_text_x=p9.element_text(angle=45, ha="right"),
)
+ p9.labs(
x="Length of Contribution",
y="Count",
title="Contributors who watched or participated in \n MoC and found it helpful",
)
+ p9.ylim(0, 25) # Make the same scale as those who don't find it helpful
)
Explanation: The plot above shows that MoC is found useful by those who watch it, whether they have attained the highest level on the ladder or are interested in the next level, and this holds true across all levels of the ladder. This suggests that MoC should cover not only information helpful to those trying to become members, but also material for those who wish to become approvers, reviewers, and subproject owners.
End of explanation
(
make_bar_chart(survey_data, "Would_attend_if:")
+ p9.labs(x="Change", y="Count", title="Would attend if")
)
Explanation: The majority of those who found MoC useful are contributors who are working towards their membership. This suggests that while most of the material might be geared towards them, the previous plot shows the importance of striking a balance between content aimed at prospective members and content aimed at more experienced contributors.
Ways to Increase Attendance at Thursday Meetings
End of explanation
(
make_bar_chart(
survey_data,
"Would_attend_if:",
[".", "Level_of_Contributor"],
proportional=True,
)
+ p9.labs(x="Change", y="Count", title="Would attend if", fill="Change")
+ p9.theme(
strip_text_y=p9.element_text(angle=0, ha="left", margin={"r": 1, "units": "in"})
)
)
Explanation: The primary reason contributors don't attend Thursday meetings is that they have too many meetings in their personal lives. As this is not something the Kubernetes community can control, we suggest the community focus on the second most common suggestion: distributing a full agenda prior to the meeting.
End of explanation
(
make_bar_chart(
survey_data, "Would_attend_if:", [".", "Contributing_Length"], proportional=True
)
+ p9.labs(x="Change", y="Count", title="Would attend if", fill='Reason')
+ p9.theme(
strip_text_y=p9.element_text(angle=0, ha="left", margin={"r": 1, "units": "in"})
)
)
Explanation: Across contributor levels, the dominant change that would increase attendance is "fewer meetings in my personal schedule". Interestingly, for those who are not yet members this reason is less dominant than for other cohorts: these contributors give almost equal weight to many different changes, some of which may be appropriate for the Thursday meeting, while others may indicate the need for new types of outreach programming.
End of explanation
(
make_single_bar_chart(survey_data[survey_data['World_Region'].notnull()],
'Would_attend_if:_Different_timeslot_for_the_meeting',
'World_Region',
proportionally=True
) +
p9.labs(x='Change',
y='Count',
title="Would attend if")
)
Explanation: Segmenting the contributors by their length of contribution does not reveal any patterns that are markedly different from those seen when looking at all the contributors as a whole.
End of explanation
unattendance_str = "If_you_haven't_been_able_to_attend_a_previous_summit,_was_there_a_primary_reason_why_(if_multiple,_list_the_leading_reason)"
unattendance_data = survey_data.dropna(subset=[unattendance_str])
reason_for_not_going = (
p9.ggplot(unattendance_data, p9.aes(x=unattendance_str))
+ p9.geom_bar()
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(
title="Reasons for not attending summits",
y="Number of Contributors",
x="Reason",
)
)
reason_for_not_going
Explanation: When looking at the distribution of contributors who would attend the meetings if they were held at a different time, we can see that location has a large impact. The small number of contributors located in Oceania and Africa makes it difficult to draw firm conclusions for those regions. There are, however, many contributors from Asia, indicating that the timing of the meetings, simply because of the timezones they live in, is a major barrier for a large portion of the community.
Reasons for Not Attending Summits
End of explanation
unattendance_contrib = (
unattendance_data.groupby(["Contributing_Length", unattendance_str])
.count()["Respondent_ID"]
.reset_index()
.merge(
unattendance_data.groupby(["Contributing_Length"])
.count()["Respondent_ID"]
.reset_index(),
on="Contributing_Length",
)
)
unattendance_contrib = unattendance_contrib.assign(
percent=unattendance_contrib.Respondent_ID_x / unattendance_contrib.Respondent_ID_y
)
(
p9.ggplot(unattendance_contrib,
p9.aes(x=unattendance_str,y='percent',fill='Contributing_Length')) +
p9.geom_bar(stat='identity',position='dodge') +
p9.theme(axis_text_x = p9.element_text(angle=45,ha='right')) +
p9.labs(title="Reasons for not attending summits",
y = "Proportion of Contributors",
x= 'Reason',
fill="Contributing Length")
)
Explanation: The most common reason for not attending the summits is that contributors feel they do not have enough funding to attend.
End of explanation
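The count-and-merge pattern used above to turn within-group counts into proportions is repeated below for the contributor-level and employer-support groupings. A small helper along the following lines could factor it out (a sketch only; group_proportions is a hypothetical name, not part of the original analysis).
def group_proportions(df, group_col, reason_col):
    # Count respondents per (group, reason) pair and per group, then merge the two.
    counts = df.groupby([group_col, reason_col]).count()['Respondent_ID'].reset_index()
    totals = df.groupby([group_col]).count()['Respondent_ID'].reset_index()
    merged = counts.merge(totals, on=group_col)
    # Respondent_ID_x holds the pair counts, Respondent_ID_y the group totals.
    return merged.assign(percent=merged.Respondent_ID_x / merged.Respondent_ID_y)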
unattendance_level = unattendance_data.groupby(['Level_of_Contributor',unattendance_str]).count()['Respondent_ID'].reset_index().merge(unattendance_data.groupby(['Level_of_Contributor']).count()['Respondent_ID'].reset_index(), on = 'Level_of_Contributor')
unattendance_level = unattendance_level.assign(percent = unattendance_level.Respondent_ID_x/unattendance_level.Respondent_ID_y)
(
p9.ggplot(unattendance_level,
p9.aes(x=unattendance_str,y='percent',fill='Level_of_Contributor')) +
p9.geom_bar(stat='identity',position=p9.position_dodge(preserve='single')) +
p9.theme(axis_text_x = p9.element_text(angle=45,ha='right')) +
p9.labs(title="Reasons for not attending summits",
y = "Number of Contributors",
x= 'Reason',
fill= 'Level of Contributor')
)
Explanation: When we look at the reasons for not attending the summits depending on the length of time a contributor has been involved with the project, we see that, in addition to lacking funding, longer-tenured contributors tend to be helping at other events co-located with KubeCon during the summits.
End of explanation
unattendance_support = (
unattendance_data.groupby(["Upstream_supported_at_employer", unattendance_str])
.count()["Respondent_ID"]
.reset_index()
.merge(
unattendance_data.groupby(["Upstream_supported_at_employer"])
.count()["Respondent_ID"]
.reset_index(),
on="Upstream_supported_at_employer",
)
)
unattendance_support = unattendance_support.assign(
percent=unattendance_support.Respondent_ID_x / unattendance_support.Respondent_ID_y
)
(
p9.ggplot(
unattendance_support,
p9.aes(x=unattendance_str, y="percent", fill="Upstream_supported_at_employer"),
)
+ p9.geom_bar(stat="identity", position=p9.position_dodge(preserve="single"))
+ p9.theme(axis_text_x=p9.element_text(angle=45, ha="right"))
+ p9.labs(
title="Reasons for not attending summits",
y="Number of Contributors",
x="Reason",
fill='Employer Support'
)
)
Explanation: As above, the higher up the ladder one is, the more likely they are to be helping out at another event. Interestingly, while approvers are higher on the ladder than reviewers, they are less likely to be attending KubeCon, as well as the summits.
End of explanation
agree_ratings = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]
(
make_likert_chart(survey_data, "Agree:", agree_ratings, max_is_high=True)
+ p9.labs(x="Statement", y="Number of Responses", fill="Rating", color="Rating")
)
Explanation: Unsurprisingly, funding is a greater barrier to attendance for those who only work on Kubernetes on their own time, but it is still a concern for about a third of those with some support from their employer.
Agreement with Statements
End of explanation
(
make_likert_chart(
survey_data[survey_data["Blocker:_Debugging_test_failures"] > 3],
"Agree:",
agree_ratings,
max_is_high=True,
)
+ p9.labs(x="Statement", y="Number of Responses", fill="Rating", color="Rating")
)
Explanation: Overall, the plot above displays the proportions one would hope to see. Many contributors are confident that they understand continuous integration and the related error messages well enough to debug their code, while not feeling overburdened by test failures or notifications.
End of explanation |
10,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of a run with constraints, and a run without
I have made a little analysis to test our ability to date a tree with node order constraints. I use the following tree
Step1: Then I will compare two MCMC runs
Step2: Let's look at the age difference between two nodes constrained in an older-younger fashion (the first constraint).
Step3: They are all negative, and thus agree with the constraint.
We look at the other constraint.
Step4: Same thing.
Now we look into an unconstrained run.
Step5: The difference is not always negative. | Python Code:
import sys
from ete3 import Tree, TreeStyle, NodeStyle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import scipy
import re
t = Tree("(((a:0.1,b:0.1):0.2, (c:0.2,d:0.2):0.1):0.6, ((e:0.4,f:0.4):0.3, (g:0.5,h:0.5):0.2):0.2);")
ts = TreeStyle()
ts.min_leaf_separation= 0
ts.scale = 88
nstyle = NodeStyle()
nstyle["size"] = 0
for n in t.traverse():
n.set_style(nstyle)
t.render("%%inline", tree_style=ts)
Explanation: Analysis of a run with constraints, and a run without
I have made a little analysis to test our ability to date a tree with node order constraints. I use the following tree:
End of explanation
runConsMet = pd.read_csv("output/testConstraintsMet.log", sep="\t")
runConsMet.describe()
%matplotlib inline
plt.plot(runConsMet['Iteration'], runConsMet['Posterior'], 'b-')
plt.xlabel("Iteration", fontsize=15)
plt.ylabel("Posterior", fontsize=15)
%matplotlib inline
plt.plot(runConsMet['Iteration'], runConsMet['root'], 'b-')
plt.xlabel("Iteration", fontsize=15)
plt.ylabel("Root age", fontsize=15)
def readTreesFromRBOutput (file):
try:
f=open(file, 'r')
except IOError:
print ("Unknown file: "+file)
sys.exit()
line = ""
treeStrings = list()
for l in f:
if "teration" not in l:
m = re.sub('\[&index=\d+\]', "", l)
treeStrings.append(m.split()[4])
trees=list()
for l in treeStrings:
trees.append ( Tree( l ) )
return trees
trees = readTreesFromRBOutput("output/testConstraintsMet.trees")
def getNodeHeights( t ):
node2Height = dict()
id2Height = dict()
for node in t.traverse("postorder"):
if node not in node2Height:
node2Height[node] = 0.0
id2Height[node.name] = 0.0
if node.up:
if node.up.name =='':
leaves = node.up.get_leaves()
name=""
for l in leaves:
name += l.name
node.up.name=name
node2Height[node.up] = node2Height[node] + node.dist
id2Height[str(node.up.name)] = node2Height[node] + node.dist
# print node.name + " : " + str(node2Height[node])
#return node2Height,id2Height
return id2Height
allHeights = list()
for t in trees:
allHeights.append(getNodeHeights(t))
Explanation: Then I will compare two MCMC runs: one with 2 node order constraints, and one without.
The node order constraints are as follows:
c d a b
g h e f
meaning that the MRCA of cd is older than the MRCA of ab (and likewise for the second line), just like in the tree.
First we take a short look at the MCMC run.
End of explanation
diffABMinusCD = list()
for a in allHeights:
diffABMinusCD.append(a["ab"] - a["cd"])
pd.DataFrame( diffABMinusCD ).describe()
Explanation: Let's look at the age difference between two nodes constrained in an older-younger fashion (the first constraint).
End of explanation
diffEFMinusGH = list()
for a in allHeights:
diffEFMinusGH.append(a["ef"] - a["gh"])
pd.DataFrame( diffEFMinusGH ).describe()
Explanation: They are all negative, and thus agree with the constraint.
We look at the other constraint.
End of explanation
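A programmatic version of the check described above (a sketch, not part of the original analysis): both statements should print True if every posterior sample respects its constraint.
print(all(np.array(diffABMinusCD) < 0))   # first constraint: ab younger than cd in every sample
print(all(np.array(diffEFMinusGH) < 0))   # second constraint: ef younger than gh in every sample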
treesNC = readTreesFromRBOutput("output/testNoConstraint.trees")
allHeightsNC = list()
for t in treesNC:
allHeightsNC.append(getNodeHeights(t))
diffABMinusCDNC = list()
for a in allHeightsNC:
diffABMinusCDNC.append(a["ab"] - a["cd"])
pd.DataFrame( diffABMinusCDNC ).describe()
Explanation: The same holds for the second constraint: all of the differences are negative.
Now we look into an unconstrained run.
End of explanation
diffEFMinusGHNC = list()
for a in allHeightsNC:
diffEFMinusGHNC.append(a["ef"] - a["gh"])
pd.DataFrame( diffEFMinusGHNC ).describe()
Explanation: In the unconstrained run the difference is not always negative, so some samples place the ab node older than the cd node, violating the ordering that the constraint would impose.
End of explanation |
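As a quick quantification (a sketch, not part of the original analysis), the fraction of unconstrained samples that happen to respect each ordering can be computed directly from the differences above.
print(np.mean(np.array(diffABMinusCDNC) < 0))  # fraction of samples with age(ab) < age(cd)
print(np.mean(np.array(diffEFMinusGHNC) < 0))  # fraction of samples with age(ef) < age(gh)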
10,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
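As a small illustration of going backwards (a sketch that mirrors what the prediction code does later), the saved mean and standard deviation convert a standardized value back to the original units.
mean, std = scaled_features['cnt']
print(0.5 * std + mean)  # the ride count corresponding to a standardized value of 0.5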
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y-final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the backpropagated error term (delta) for the output
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output,output_error_term)
# TODO: Calculate the backpropagated error term (delta) for the hidden layer
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term*X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features,self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
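As an illustrative aside (not part of the project template), the derivatives used in the backward pass can be checked numerically: the sigmoid satisfies sigma'(x) = sigma(x) * (1 - sigma(x)), while the output activation f(x) = x simply has derivative 1.
sigmoid = lambda x: 1 / (1 + np.exp(-x))
x = 0.7
analytic = sigmoid(x) * (1 - sigmoid(x))
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6
print(analytic, numeric)  # the two estimates agree to several decimal places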
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.5
hidden_nodes =26
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
End of explanation
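A follow-up sketch (it assumes the losses dictionary populated during training above): the iteration with the lowest validation loss is a reasonable guide when choosing the number of iterations.
best_iteration = int(np.argmin(losses['validation']))
print(best_iteration, losses['validation'][best_iteration])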
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
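An optional numeric check (a sketch, not part of the original template): the MSE helper defined earlier can quantify the fit on the held-out test set alongside the plot.
test_loss = MSE(network.run(test_features).T, test_targets['cnt'].values)
print(test_loss)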
10,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mm', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-MM
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
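# Illustrative example only (a fixed, non-adaptive grid would be recorded as):
# DOC.set_value(False)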
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
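# Illustrative example only: for a 1.N property several choices may apply
# (assuming DOC.set_value is called once per selected choice):
# DOC.set_value("Dry deposition")
# DOC.set_value("Wet deposition (impaction scavenging)")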
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
10,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Jupyter Notebook backend demo
This example shows how vispy's low-level gloo interface can be used to display a WebGL canvas in a notebook. By default, vispy will detect that it is being run in a notebook and load the "ipynb_webgl" backend automatically. This does require that the vispy extension is installed.
The below functionality may require manually enabling the vispy extension. See the installation instructions for more details.
Due to the "state machine" nature of WebGL and the VisPy extension, you may need to restart the jupyter kernel followed by refreshing the browser page to clear the browser's state.
Step3: Every cell above was preparing our GL Canvas for operation. Now we will create the Canvas instance and because of the self.show() in our __init__ method our canvas will be shown and its timer started immediately.
Step4: We could also manually make a VispyWidget object and attach our canvas to it.
Step5: When timers are involved we can run the stop and start methods to turn them on/off and see the result in the widget displayed above. | Python Code:
import numpy as np
import vispy
import vispy.gloo as gloo
from vispy import app
from vispy.util.transforms import perspective, translate, rotate
# load the vispy bindings manually for the notebook which enables webGL
# %load_ext vispy
n = 100
a_position = np.random.uniform(-1, 1, (n, 3)).astype(np.float32)
a_id = np.random.randint(0, 30, (n, 1))
a_id = np.sort(a_id, axis=0).astype(np.float32)
VERT_SHADER =
uniform mat4 u_model;
uniform mat4 u_view;
uniform mat4 u_projection;
attribute vec3 a_position;
attribute float a_id;
varying float v_id;
void main (void) {
v_id = a_id;
gl_Position = u_projection * u_view * u_model * vec4(a_position,1.0);
}
FRAG_SHADER =
varying float v_id;
void main()
{
float f = fract(v_id);
// The second useless test is needed on OSX 10.8 (fuck)
if( (f > 0.0001) && (f < .9999) )
discard;
else
gl_FragColor = vec4(0,0,0,1);
}
class Canvas(app.Canvas):
# ---------------------------------
def __init__(self, size=None, show=True):
app.Canvas.__init__(self, keys='interactive', size=size)
self.program = gloo.Program(VERT_SHADER, FRAG_SHADER)
# Set uniform and attribute
self.program['a_id'] = gloo.VertexBuffer(a_id)
self.program['a_position'] = gloo.VertexBuffer(a_position)
self.translate = 5
self.view = translate((0, 0, -self.translate), dtype=np.float32)
self.model = np.eye(4, dtype=np.float32)
gloo.set_viewport(0, 0, self.physical_size[0], self.physical_size[1])
self.projection = perspective(45.0, self.size[0] /
float(self.size[1]), 1.0, 1000.0)
self.program['u_projection'] = self.projection
self.program['u_model'] = self.model
self.program['u_view'] = self.view
self.theta = 0
self.phi = 0
self.context.set_clear_color('white')
self.context.set_state('translucent')
self.timer = app.Timer('auto', connect=self.on_timer, start=True)
if show:
self.show()
# ---------------------------------
def on_key_press(self, event):
if event.text == ' ':
if self.timer.running:
self.timer.stop()
else:
self.timer.start()
# ---------------------------------
def on_timer(self, event):
self.theta += .5
self.phi += .5
self.model = np.dot(rotate(self.theta, (0, 0, 1)),
rotate(self.phi, (0, 1, 0)))
self.program['u_model'] = self.model
self.update()
# ---------------------------------
def on_resize(self, event):
gloo.set_viewport(0, 0, event.physical_size[0], event.physical_size[1])
self.projection = perspective(45.0, event.size[0] /
float(event.size[1]), 1.0, 1000.0)
self.program['u_projection'] = self.projection
# ---------------------------------
def on_mouse_wheel(self, event):
self.translate += event.delta[1]
self.translate = max(2, self.translate)
self.view = translate((0, 0, -self.translate))
self.program['u_view'] = self.view
self.update()
# ---------------------------------
def on_draw(self, event):
self.context.clear()
self.program.draw('line_strip')
Explanation: Jupyter Notebook backend demo
This example shows how vispy's low-level gloo interface can be used to display a WebGL canvas in a notebook. By default, vispy will detect that it is being run in a notebook and load the "ipynb_webgl" backend automatically. This does require that the vispy extension is installed.
The below functionality may require manually enabling the vispy extension. See the installation instructions for more details.
Due to the "state machine" nature of WebGL and the VisPy extension, you may need to restart the jupyter kernel followed by refreshing the browser page to clear the browser's state.
End of explanation
c = Canvas(size=(300, 300))
Explanation: Every cell above was preparing our GL Canvas for operation. Now we will create the Canvas instance and, because of the self.show() call in our __init__ method, our canvas will be shown and its timer started immediately.
End of explanation
# from vispy.app.backends.ipython import VispyWidget
# w = VispyWidget()
# c2 = Canvas(size=(300, 300), show=False)
# w.set_canvas(c2)
# w
Explanation: We could also manually make a VispyWidget object and attach our canvas to it.
End of explanation
c.timer.stop()
c.timer.start()
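# Note: pressing the space bar inside the canvas toggles the same timer via on_key_press above.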
Explanation: When timers are involved we can run the stop and start methods to turn them on/off and see the result in the widget displayed above.
End of explanation |
10,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read data
Step1: Target variable
Step2: Target variable is Survived.
Quality metric
Your score is the percentage of passengers you correctly predict. That means - accuracy.
Model
One variable model
Let's build a very simple model, based on one variable.
That nobody will survived.
Step3: Run & evoluate single variable model
Step4: What do you think about this result?
Let's build more advanced model
Missing values
There're several methods how to manage missing values, let's fill out -1. | Python Code:
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
all_df = train_df.append(test_df)
all_df['is_test'] = all_df.Survived.isnull()
all_df.index = all_df.Survived
del all_df['Survived']
all_df.head()
Explanation: Read data
End of explanation
train_df.describe()
Explanation: Target variable
End of explanation
def select_features(df):
non_obj_feats = df.columns[ df.dtypes != 'object' ]
black_list = ['is_test']
return [feat for feat in non_obj_feats if feat not in black_list ]
def get_X_y(df):
feats = select_features(df)
X = df[feats].values
y = df.index.values.astype(int)
return X, y
def check_quality(model, X, y, n_folds=5, random_state=0, shuffle=False):
skf = StratifiedKFold(y, n_folds=n_folds, random_state=random_state, shuffle=shuffle)
scores = []
for train_index, test_index in skf:
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = accuracy_score(y_test, y_pred)
scores.append(score)
return np.mean(scores), np.std(scores)
def train_and_verify(all_df, model):
X, y = get_X_y( all_df[ all_df.is_test == False ] )
return check_quality(model, X, y)
class SingleVariableModel(BaseEstimator, ClassifierMixin):
def __init__(self, seed=1):
np.random.seed(seed)
def fit(self, X, y):
return self
def predict(self, X):
return [0] * len(X)
def __repr__(self):
return 'SingleVariableModel'
Explanation: Target variable is Survived.
Quality metric
Your score is the percentage of passengers you correctly predict. That means - accuracy.
Model
One variable model
Let's build a very simple model, based on one variable.
It simply predicts that nobody survived.
End of explanation
train_and_verify(all_df, SingleVariableModel())
Explanation: Run & evaluate the single variable model
End of explanation
all_df.fillna(-1, inplace=True)
train_and_verify(all_df, RandomForestClassifier())
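# Alternative imputation sketch (hypothetical, shown for comparison only):
# per-column medians could be used instead of the constant -1 fill, e.g.
# all_df_median = all_df.fillna(all_df.median())
# train_and_verify(all_df_median, RandomForestClassifier())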
Explanation: What do you think about this result?
Let's build a more advanced model
Missing values
There are several ways to handle missing values; here we simply fill them with -1.
End of explanation |
10,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 3
Step1: Applying the Rotation
Let's try out our function, using it to derive the heptagon edges from the P1-P0 edge, and drawing the result, using the original "render" function that does not correct for the skewed coordinate frame
Step2: Clearly our rotation function generates the same heptagon vertices as those we derived manually in Part 2!
Perhaps we should not be surprised, but think about what we've produced here
Step3: Combining Scaling and Rotation
Scaling with heptagon number works the same way, of course, benefiting from exact integer arithmetic. Now that we have all the necessary ideas in hand, we can combine scaling and rotation to produce the "nautilus shell" figure shown above.
We'll start with triangle $\triangle P2P4P6$ from the heptagon, then scale by a factor of $\frac{\rho}{\sigma} = \rho-1$, and apply our rotation.
Step4: Drawing the Nested Heptagrams
For good measure, I'm including the code used to render the other figure shown at the top. Although this could be done recursively, with computed intersections for the nested vertices, I've just done manual computation. | Python Code:
# load the definitions from the previous notebooks
%run DrawingTheHeptagon.py
r = sigma-rho
s = rho-1
t = one-rho # the __sub__ function requires a HeptagonNumber on the left, so "1-rho" won't work
u = rho-1
def rotate(v) :
x, y = v
return ( r*x + t*y, s*x + u*y )
def plusv( v1, v2 ) :
h1, h2 = v1
h3, h4 = v2
return ( h1+h3, h2+h4 )
def scale( s, v ) :
x, y = v
return ( x*s, y*s )
Explanation: Part 3: Heptagon Rotations
In Part 1 and Part 2 we saw how we can represent and render these two figures, using a special kind of number that is custom made for creating figures with the symmetries and proportions of a heptagon. Now, we will explore how we can encode the symmetry in a rotation function, and we'll use that function to generate the vertices for the right-hand figure, the "nautilus shell".
<img src="heptagonSampler.png" width=1100 height=600 />
Deriving the Rotation Function
One of the defining characteristics of the regular heptagon is its order-7 rotational symmetry. A side-effect of our use of a natural coordinate frame for the heptagon vertices is that our vertices no longer lie on a circle, in that coordinate frame. However, it turns out that we can still encode a "rotation" function that maps each vertex to the next. We don't need any Trigonometry to do this; just a simple trick from Linear Algebra.
The way to derive such a mapping function involves setting up a pair of equations that relate "input" vectors to "output" vectors, and solving those equations. It sounds difficult, but with a clever choice of input vectors, it is very simple.
Looking at our heptagon, we can define our mapping in terms of any of the three kinds of diagonals. Let's try it using the longest diagonals, the blue ones. The trick to setting up easy equations is to work with vectors that have either a zero X-component or a zero Y-component. For that reason, we'll start with the P2-P6 diagonal as our first input, and derive the mapping that takes it to the P3-P0 diagonal.
<img src="heptagonRotationMapping.png" width=500 height=400/>
As we saw in Part 2, a 2-dimensional linear transformation can be written down as a pair of equations:
$$ x' = r x + t y $$
$$ y' = s x + u y $$
We wish to derive the values for $r$, $s$, $t$, and $u$. We can do this by plugging in our chosen input vector as $(x,y)$, and our output vector as $(x',y')$:
$$ \sigma = r \sigma^2 + 0 $$
$$ \rho\sigma = s \sigma^2 + 0 $$
Since our input vector had zero as a $y$ coordinate, these two equations are trivial to solve, giving us the values for $r$ and $s$. We can simplify by appling the identities from Part 1:
$$ r = \frac{1}{\sigma} = \sigma-\rho $$
$$ s = \frac{\rho}{\sigma} = \rho-1 $$
To find $u$ and $t$, we can start with the P4-P0 diagonal as our input vector, which will have a zero $x$ coordinate, and map it to P5-P1 as the output:
$$ -\rho\sigma = 0 + t \sigma^2 $$
$$ \rho\sigma = 0 + u \sigma $$
$$ t = -\frac{\rho}{\sigma} = 1-\rho $$
$$ u = \frac{\rho}{\sigma} = \rho-1 $$
If you prefer to think in terms of matrices, here is the transformation matrix we have derived:
$$
\begin{bmatrix}
\sigma-\rho & 1-\rho
\ \rho-1 & \rho-1
\end{bmatrix} =
\begin{bmatrix}
\frac{1}{\sigma} & \frac{-\sigma}{\rho}
\ \frac{\sigma}{\rho} & \frac{\sigma}{\rho}
\end{bmatrix}
$$
Now we can write our rotate function, which takes a vector as a 2-tuple, and produces another 2-tuple. While we're at it, we'll write simple functions for adding vectors and scaling them.
End of explanation
p0p1 = ( sigma, zero )
p0 = ( rho, zero )
def heptagonVerts(v,e):
result = []
vi = v
ei = e
for i in range(7):
result .append( vi )
vi = plusv( vi, ei )
ei = rotate( ei )
return result
rotationHeptagon = heptagonVerts( p0, p0p1 )
%pylab inline
fig = matplotlib.pyplot.figure(figsize=(6,6))
ax = fig.add_subplot(111)
ax.add_patch( drawPolygon( rotationHeptagon,'#dd0000', render ) )
ax.set_xlim(-0.5,5.5)
ax.set_ylim(-0.5,5.5)
Explanation: Applying the Rotation
Let's try out our function, using it to derive the heptagon edges from the P1-P0 edge, and drawing the result, using the original "render" function that does not correct for the skewed coordinate frame:
End of explanation
v = p0p1
for i in range( 22 ):
v = rotate( v )
x,y = v
print( str(x) + ", " + str(y) )
Explanation: Clearly our rotation function generates the same heptagon vertices as those we derived manually in Part 2!
Perhaps we should not be surprised, but think about what we've produced here: a "rotation" function that is not circular! We have modeled a mapping that moves a point roughly a seventh of the way around an elliptical path. Even if you've been through a linear algebra course, or a mechanics course, or any other engineering-oriented introduction to matrices, it is a safe bet that the course focused almost entirely on orthogonal transformations, as are used to model rigid body motions, and so you may be as pleased and surprised that this works as I was. My mathematics professors won't be pleased that I was surprised, of course, since we really haven't done anything remarkable here at all; we've simply used basic linear algebra in a way seldom illustrated.
Another point that bears mentioning here is the benefit we get from using integer-based heptagon number. We can iterate our rotation function as many times as we like, and we'll never see any drift or error in the results, as we eventually would if we used floating point numbers. The function is an exact implementation of cyclic operation of order seven, and we can see this if we just print out the values for few iterations:
End of explanation
origin = ( zero, zero )
def nautilus( n ):
result = []
def shrink(v):
return scale( rho_over_sigma, v )
p1 = ( sigma*sigma, zero )
p2 = ( rho, rho*rho )
for i in range(n):
result .append( origin )
result .append( p1 )
result .append( p2 )
p1 = p2
p2 = rotate( shrink( p2 ) )
return result
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.add_patch( drawPolygon( nautilus( 22 ), '#0044aa', skewRender ) )
ax.set_xlim(-3,6)
ax.set_ylim(-2,4)
ax.axis('off')
Explanation: Combining Scaling and Rotation
Scaling with heptagon number works the same way, of course, benefiting from exact integer arithmetic. Now that we have all the necessary ideas in hand, we can combine scaling and rotation to produce the "nautilus shell" figure shown above.
We'll start with triangle $\triangle P2P4P6$ from the heptagon, then scale by a factor of $\frac{\rho}{\sigma} = \rho-1$, and apply our rotation.
End of explanation
def ngonPath( edge, n, skip=1, start=origin ):
result = []
vi = plusv( start, edge )
for i in range(n):
result .append( vi )
for j in range(skip):
edge = rotate( edge )
vi = plusv( vi, edge )
return result
one_edge = ( sigma, zero )
heptagon_path = ngonPath( one_edge, 7 )
rho_edge = plusv( one_edge, rotate( one_edge ) )
heptagram_rho_path = ngonPath( rho_edge, 7, 2 )
sigma_edge = ( zero, sigma+rho+1 )
heptagram_sigma_path = ngonPath( sigma_edge, 7, 4 )
inner_origin = ( sigma-rho, rho )
small_edge = ( rho_inv, zero )
small_rho_edge = plusv( small_edge, rotate( small_edge ) )
small_sigma_edge = ( zero, HeptagonNumber(-1,0,1) )
heptagon_path_shrunk = ngonPath( small_edge, 7, 1, inner_origin )
heptagram_rho_path_shrunk = ngonPath( small_rho_edge, 7, 2, inner_origin )
heptagram_sigma_path_shrunk = ngonPath( small_sigma_edge, 7, 4, inner_origin )
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.add_patch( drawPolygon( heptagon_path,'#dd0000', skewRender ) )
ax.add_patch( drawPolygon( heptagram_rho_path,'#990099', skewRender ) )
ax.add_patch( drawPolygon( heptagram_sigma_path,'#0000dd', skewRender ) )
ax.add_patch( drawPolygon( heptagon_path_shrunk,'#dd0000', skewRender ) )
ax.add_patch( drawPolygon( heptagram_rho_path_shrunk,'#990099', skewRender ) )
ax.add_patch( drawPolygon( heptagram_sigma_path_shrunk,'#0000dd', skewRender ) )
ax.set_xlim(-2,4)
ax.set_ylim(-0.5,5.5)
ax.axis('off')
Explanation: Drawing the Nested Heptagrams
For good measure, I'm including the code used to render the other figure shown at the top. Although this could be done recursively, with computed intersections for the nested vertices, I've just done manual computation.
End of explanation |
10,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step13: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard
Step14: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step15: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step16: Train a model
training.create-python-pre-built-container
Create and run custom training job
To train a custom model, you perform two steps
Step17: Example output
Step18: general.import-model
Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step19: Example output
Step20: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form
Step21: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
Step22: Example output
Step23: Example Output
Step24: Example Output
Step25: Example output
Step26: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step27: Example output
Step28: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI: Vertex AI Migration: Custom Scikit-Learn model with pre-built training container
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ10%20Vertex%20SDK%20Custom%20Scikit-Learn%20with%20pre-built%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ10%20Vertex%20SDK%20Custom%20Scikit-Learn%20with%20pre-built%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Dataset
The dataset used for this tutorial is the UCI Machine Learning US Census Data (1990) dataset.The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
The dataset predicts whether a persons income will be above $50K USD.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
TRAIN_VERSION = "scikit-learn-cpu.0-23"
DEPLOY_VERSION = "sklearn-cpu.0-23"
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: US Census Data (1990) tabular binary classification\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single Instance Training for Census Income
from sklearn.ensemble import RandomForestClassifier
import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
import datetime
import pandas as pd
from google.cloud import storage
import numpy as np
import argparse
import os
import sys
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ai-platform/sklearn/census_data/adult.data')
# Download the data
blob.download_to_filename('adult.data')
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./adult.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a lists of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
if col in CATEGORICAL_COLUMNS:
# Create a scores array to get the individual categorical column.
# Example:
# data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
# 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
# scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#
# Returns: [['State-gov']]
# Build the scores array.
scores = [0] * len(COLUMNS[:-1])
# This column is the categorical column we want to extract.
scores[i] = 1
skb = SelectKBest(k=1)
skb.scores_ = scores
# Convert the categorical column to a numerical value
lbn = LabelBinarizer()
r = skb.transform(train_features)
lbn.fit(r)
# Create the pipeline to extract the categorical feature
categorical_pipelines.append(
('categorical-{}'.format(i), Pipeline([
('SKB-{}'.format(i), skb),
('LBN-{}'.format(i), lbn)])))
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# Split path into bucket and subdirectory
bucket = args.model_dir.split('/')[2]
subdirs = args.model_dir.split('/')[3:]
subdir = subdirs[0]
subdirs.pop(0)
for comp in subdirs:
subdir = os.path.join(subdir, comp)
# Write model to a local file
joblib.dump(pipeline, 'model.joblib')
# Upload the model to GCS
bucket = storage.Client().bucket(bucket)
blob = bucket.blob(subdir + '/model.joblib')
blob.upload_from_filename('model.joblib')
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note: when it is referred to in a worker pool specification, the directory slash is replaced with a dot (trainer.task) and the .py file suffix is dropped.
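As a small illustration (a hypothetical helper, not part of the tutorial itself), the path-to-module mapping works like this:
def module_path(script_path, package_root="custom"):
    # Drop the package root, drop the ".py" suffix, and turn "/" into "."
    relative = script_path[len(package_root) + 1:]
    return relative[:-len(".py")].replace("/", ".")
print(module_path("custom/trainer/task.py"))   # -> trainer.task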
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_census.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="census_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Train a model
Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
job.run(
replica_count=1, machine_type=TRAIN_COMPUTE, base_output_dir=MODEL_DIR, sync=True
)
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
Explanation: Example output:
<google.cloud.aiplatform.training_jobs.CustomTrainingJob object at 0x7feab1346710>
Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
model = aip.Model.upload(
display_name="census_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact_uri: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
INSTANCES = [
[
25,
"Private",
226802,
"11th",
7,
"Never-married",
"Machine-op-inspct",
"Own-child",
"Black",
"Male",
0,
0,
40,
"United-States",
],
[
38,
"Private",
89814,
"HS-grad",
9,
"Married-civ-spouse",
"Farming-fishing",
"Husband",
"White",
"Male",
0,
0,
50,
"United-States",
],
]
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/759209241365/locations/us-central1/models/925164267982815232/operations/3458372263047331840
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/759209241365/locations/us-central1/models/925164267982815232
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/759209241365/locations/us-central1/models/925164267982815232')
Make batch predictions
Make test items
You will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
for i in INSTANCES:
f.write(json.dumps(i) + "\n")
! gsutil cat $gcs_input_uri
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form:
[ [ content_1], [content_2] ]
content: The feature values of the test item as a list.
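For reference, each line written to test.jsonl is simply one instance list serialized with json.dumps, so for the two test items above the file contains:
# [25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"]
# [38, "Private", 89814, "HS-grad", 9, "Married-civ-spouse", "Farming-fishing", "Husband", "White", "Male", 0, 0, 50, "United-States"]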
End of explanation
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="census_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
instances_format: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
predictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
machine_type: The type of machine to use for the batch prediction job.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
batch_predict_job.wait()
Explanation: Example output:
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
Explanation: Example Output:
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
instance: The prediction request.
prediction: The prediction response.
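As a sketch (the values mirror the example output shown below), one parsed line looks like this:
# {"instance": [25, "Private", 226802, ..., "United-States"], "prediction": false}
example_line = {"instance": INSTANCES[0], "prediction": False}
print(example_line["prediction"])   # False means the model predicts income <= 50K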
End of explanation
DEPLOYED_NAME = "census-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Example Output:
{'instance': [25, 'Private', 226802, '11th', 7, 'Never-married', 'Machine-op-inspct', 'Own-child', 'Black', 'Male', 0, 0, 40, 'United-States'], 'prediction': False}
Make online predictions
Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percents must add up to 100; a concrete sketch follows this list.
machine_type: The type of machine to use for serving predictions.
min_replica_count: The minimum number of compute instances to provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
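To make traffic_split concrete, here is a purely illustrative value for an endpoint that already hosts a second model; the existing model id below is a placeholder, not a real resource:
# Illustration only -- "0" is the model being deployed, the other key is an existing model's id.
TRAFFIC_SPLIT_TWO_MODELS = {"0": 80, "1111111111111111111": 20}   # percents must sum to 100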
End of explanation
INSTANCE = [
25,
"Private",
226802,
"11th",
7,
"Never-married",
"Machine-op-inspct",
"Own-child",
"Black",
"Male",
0,
0,
40,
"United-States",
]
Explanation: Example output:
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
instances_list = [INSTANCE]
prediction = endpoint.predict(instances_list)
print(prediction)
Explanation: Make the prediction
Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the predict() call is a Prediction object with the following fields:
predictions: The prediction for each instance -- for this model, the predicted class (True if income >50K).
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
explanations: Per-prediction explanations; None in this example.
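A minimal sketch of reading those fields off the response object (the field names match the example output shown below):
predicted_class = prediction.predictions[0]       # e.g. False -> predicted income <= 50K
serving_model_id = prediction.deployed_model_id   # which deployed model handled the request
print(predicted_class, serving_model_id)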
End of explanation
endpoint.undeploy_all()
Explanation: Example output:
Prediction(predictions=[False], deployed_model_id='7220545636163125248', explanations=None)
Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
10,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Case study.
Copyright 2017 Allen Downey
License
Step1: Unrolling
Let's simulate a kitten unrolling toilet paper. As reference material, see this video.
The interactions of the kitten and the paper roll are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. Also, we will neglect the friction between the roll and the axle.
This figure shows the paper roll with $r$, $F$, and $\tau$. As a vector quantity, the direction of $\tau$ is into the page, but we only care about its magnitude for now.
We'll start by loading the units we need.
Step2: And a few more parameters in the Params object.
Step4: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
Step5: Testing make_system
Step7: Here's how we compute I as a function of r
Step8: When r is Rmin, I is small.
Step9: As r increases, so does I.
Step11: Exercises
Write a slope function we can use to simulate this system. Here are some suggestions and hints
Step12: Test slope_func with the initial conditions.
Step13: Run the simulation.
Step14: And look at the results.
Step15: Check the results to see if they seem plausible
Step16: Plot omega
Step17: Plot y | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Case study.
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
radian = UNITS.radian
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
Explanation: Unrolling
Let's simulate a kitten unrolling toilet paper. As reference material, see this video.
The interactions of the kitten and the paper roll are complex. To keep things simple, let's assume that the kitten pulls down on the free end of the roll with constant force. Also, we will neglect the friction between the roll and the axle.
This figure shows the paper roll with $r$, $F$, and $\tau$. As a vector quantity, the direction of $\tau$ is into the page, but we only care about its magnitude for now.
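As a quick back-of-the-envelope check (using the parameter values chosen in the next cell), the torque magnitude is just the roll radius times the pulling force:
# tau = r * F at the full roll radius, with the tension from the Params object below
r_full = 0.055     # m (Rmax)
F_pull = 2e-4      # N (tension)
r_full * F_pull    # about 1.1e-5 N*m -- a tiny torque, which suits a kitten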
We'll start by loading the units we need.
End of explanation
params = Params(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
t_end = 120 * s)
Explanation: And a few more parameters in the Params object.
End of explanation
def make_system(params):
    """Make a system object.

    params: Params with Rmin, Rmax, Mcore, Mroll,
            L, tension, and t_end

    returns: System with init, k, rho_h, Rmin, Rmax,
             Mcore, Mroll, ts
    """
L, Rmax, Rmin = params.L, params.Rmax, params.Rmin
Mroll = params.Mroll
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
return System(params, init=init, area=area, rho_h=rho_h, k=k)
Explanation: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
End of explanation
system = make_system(params)
system.init
Explanation: Testing make_system
End of explanation
def moment_of_inertia(r, system):
    """Moment of inertia for a roll of toilet paper.

    r: current radius of roll in meters
    system: System object with Mcore, rho, Rmin, Rmax

    returns: moment of inertia in kg m**2
    """
Mcore, Rmin, rho_h = system.Mcore, system.Rmin, system.rho_h
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
Explanation: Here's how we compute I as a function of r:
End of explanation
moment_of_inertia(system.Rmin, system)
Explanation: When r is Rmin, I is small.
End of explanation
moment_of_inertia(system.Rmax, system)
Explanation: As r increases, so does I.
End of explanation
# Solution
def slope_func(state, t, system):
    """Computes the derivatives of the state variables.

    state: State object with theta, omega, y
    t: time
    system: System object with Rmin, k, Mcore, rho_h, tension

    returns: sequence of derivatives
    """
theta, omega, y = state
k, Rmin, tension = system.k, system.Rmin, system.tension
r = sqrt(2*k*y + Rmin**2)
I = moment_of_inertia(r, system)
tau = r * tension
alpha = tau / I
dydt = -r * omega
return omega, alpha, dydt
Explanation: Exercises
Write a slope function we can use to simulate this system. Here are some suggestions and hints:
r is no longer part of the State object. Instead, we compute r at each time step, based on the current value of y, using
$y = \frac{1}{2k} (r^2 - R_{min}^2)$, which solves to give $r = \sqrt{2 k y + R_{min}^2}$
Angular velocity, omega, is no longer constant. Instead, we compute torque, tau, and angular acceleration, alpha, at each time step.
I changed the definition of theta so positive values correspond to clockwise rotation, so dydt = -r * omega; that is, positive values of omega yield decreasing values of y, the amount of paper still on the roll.
Your slope function should return omega, alpha, and dydt, which are the derivatives of theta, omega, and y, respectively.
Because r changes over time, we have to compute moment of inertia, I, at each time step.
That last point might be more of a problem than I have made it seem. In the same way that $F = m a$ only applies when $m$ is constant, $\tau = I \alpha$ only applies when $I$ is constant. When $I$ varies, we usually have to use a more general version of Newton's law. However, I believe that in this example, mass and moment of inertia vary together in a way that makes the simple approach work out. Not all of my colleagues are convinced.
End of explanation
# Solution
slope_func(system.init, 0*s, system)
Explanation: Test slope_func with the initial conditions.
End of explanation
# Solution
results, details = run_ode_solver(system, slope_func)
details
Explanation: Run the simulation.
End of explanation
results.tail()
Explanation: And look at the results.
End of explanation
def plot_theta(results):
plot(results.theta, color='C0', label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
plot_theta(results)
Explanation: Check the results to see if they seem plausible:
The final value of theta should be about 220 radians.
The final value of omega should be near 4 radians/second, which is less than one revolution per second (about 0.64 rev/s), so that seems plausible.
The final value of y should be about 35 meters of paper left on the roll, which means the kitten pulls off 12 meters in two minutes. That doesn't seem impossible, although it is based on a level of consistency and focus that is unlikely in a kitten.
Angular velocity, omega, should increase almost linearly at first, as constant force yields almost constant torque. Then, as the radius decreases, the lever arm decreases, yielding lower torque, but moment of inertia decreases even more, yielding higher angular acceleration.
Plot theta
End of explanation
def plot_omega(results):
plot(results.omega, color='C2', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
plot_omega(results)
Explanation: Plot omega
End of explanation
def plot_y(results):
plot(results.y, color='C1', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
plot_y(results)
Explanation: Plot y
End of explanation |
10,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Managing kubernetes objects using common resource operations with the python client
Some of these operations include;
create_xxxx
Step1: Load config from default location.
Step2: Create API endpoint instance as well as API resource instances (body and specification).
Step3: Fill required object fields (apiVersion, kind, metadata and spec).
Step4: Create Deployment using create_xxxx command for Deployments.
Step5: Use list_xxxx command for Deployment, to list Deployments.
Step6: Use read_xxxx command for Deployment, to display the detailed state of the created Deployment resource.
Step7: Use patch_xxxx command for Deployment, to make specific update to the Deployment.
Step8: Use replace_xxxx command for Deployment, to update Deployment with a completely new version of the object.
Step9: Use delete_xxxx command for Deployment, to delete created Deployment. | Python Code:
from kubernetes import client, config
Explanation: Managing kubernetes objects using common resource operations with the python client
Some of these operations include:
create_xxxx : create a resource object. Ex create_namespaced_pod and create_namespaced_deployment, for creation of pods and deployments respectively. This performs operations similar to kubectl create.
read_xxxx : read the specified resource object. Ex read_namespaced_pod and read_namespaced_deployment, to read pods and deployments respectively. This performs operations similar to kubectl describe.
list_xxxx : retrieve all resource objects of a specific type. Ex list_namespaced_pod and list_namespaced_deployment, to list pods and deployments respectively. This performs operations similar to kubectl get.
patch_xxxx : apply a change to a specific field. Ex patch_namespaced_pod and patch_namespaced_deployment, to update pods and deployments respectively. This performs operations similar to kubectl patch, kubectl label, kubectl annotate etc.
replace_xxxx : replacing a resource object will update the resource by replacing the existing spec with the provided one. Ex replace_namespaced_pod and replace_namespaced_deployment, to update pods and deployments respectively, by creating new replacements of the entire object. This performs operations similar to kubectl rolling-update, kubectl apply and kubectl replace.
delete_xxxx : delete a resource. This performs operations similar to kubectl delete.
For Further information see the Documentation for API Endpoints section in https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md
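As an extra, self-contained illustration of the same pattern applied to Pods (this snippet is not part of the Deployment walkthrough below), list_namespaced_pod mirrors kubectl get pods -n default:
from kubernetes import client, config
config.load_kube_config()                  # assumes a local kubeconfig is available
core_v1 = client.CoreV1Api()
for pod in core_v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.namespace, pod.metadata.name)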
End of explanation
config.load_kube_config()
Explanation: Load config from default location.
End of explanation
api_instance = client.AppsV1Api()
dep = client.V1Deployment()
spec = client.V1DeploymentSpec()
Explanation: Create API endpoint instance as well as API resource instances (body and specification).
End of explanation
name = "my-busybox"
dep.metadata = client.V1ObjectMeta(name=name)
spec.template = client.V1PodTemplateSpec()
spec.template.metadata = client.V1ObjectMeta(name="busybox")
spec.template.metadata.labels = {"app":"busybox"}
spec.template.spec = client.V1PodSpec()
dep.spec = spec
container = client.V1Container()
container.image = "busybox:1.26.1"
container.args = ["sleep", "3600"]
container.name = name
spec.template.spec.containers = [container]
Explanation: Fill required object fields (apiVersion, kind, metadata and spec).
End of explanation
api_instance.create_namespaced_deployment(namespace="default",body=dep)
Explanation: Create Deployment using create_xxxx command for Deployments.
End of explanation
deps = api_instance.list_namespaced_deployment(namespace="default")
for item in deps.items:
print("%s %s" % (item.metadata.namespace, item.metadata.name))
Explanation: Use list_xxxx command for Deployment, to list Deployments.
End of explanation
api_instance.read_namespaced_deployment(namespace="default",name=name)
Explanation: Use read_xxxx command for Deployment, to display the detailed state of the created Deployment resource.
End of explanation
dep.metadata.labels = {"key": "value"}
api_instance.patch_namespaced_deployment(name=name, namespace="default", body=dep)
Explanation: Use patch_xxxx command for Deployment, to make specific update to the Deployment.
End of explanation
dep.spec.template.spec.containers[0].image = "busybox:1.26.2"
api_instance.replace_namespaced_deployment(name=name, namespace="default", body=dep)
Explanation: Use replace_xxxx command for Deployment, to update Deployment with a completely new version of the object.
End of explanation
api_instance.delete_namespaced_deployment(name=name, namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5))
Explanation: Use delete_xxxx command for Deployment, to delete created Deployment.
End of explanation |
10,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPyOpt
Step1: In this example we will optimize the 2D Six-Hump Camel function (available in GPyOpt). We will assume that exact evaluations of the function are observed. The explicit form of the function is
Step2: Imagine that we were optimizing the function in the intervals $(-1,1)\times (-1.5,1.5)$. As usual, we can define these box constraints as
Step3: This will be a standard case of optimizing the function in a hypercube. However in this case we are going to study how to solve optimization problems with arbitrary constraints. In particular, we consider the problem of finding the minimum of the function in the region defined by
$$-x_2 - .5 + |x_1| -\sqrt{1-x_1^2} \leq 0 $$
$$ x_2 + .5 - |x_1| -\sqrt{1-x_1^2} \leq 0 $$
We can define these constraints as
Step4: And create the feasible region of the problem by writing
Step5: Now, let's have a look at what we have. Let's make a plot of the feasible region and the function with the original box-constraints. Note that the function .indicator_constraints(X) takes value 1 if we are in the feasible region and 0 otherwise.
Step6: The Six-Hump Camel function has two global minima. However, with the constraints that we are using, only one of the two is a valid one. We can see this by overlapping the two previous plots.
Step7: We will use the modular interface to solve this problem. We start by generating a random initial design of 10 points to start the optimization. We just need to do
Step8: Importantly, the points are always generated within the feasible region as we can check here
Step9: Now, we choose the rest of the objects that we need to run the optimization. We will use a Gaussian Process with parameters fitted using MLE and the Expected improvement. We use the default BFGS optimizer of the acquisition. Evaluations of the function are done sequentially.
Step10: Next, we create the BO object to run the optimization.
Step11: We first run the optimization for 5 steps and check how the results look.
Step12: See how the optimization is only done within the feasible region; outside it, the value of the acquisition is zero, so no evaluation is selected in that region. We run 20 more iterations to see the acquisition and convergence.
%pylab inline
import GPyOpt
import GPy
import numpy as np
Explanation: GPyOpt: Bayesian Optimization with fixed constraints
Written by Javier Gonzalez, University of Sheffield.
Reference Manual index
Last updated Friday, 11 March 2016.
In this notebook we will learn how to solve optimization problems with fixed constraints. We will focus on problems where the goal is to find
$$ x_{M} = \arg \min_{x \in {\mathcal X}} f(x) \,\, \mbox{subject to}, $$
$$c_1(x)\leq 0 $$
$$ \dots $$
$$c_m(x)\leq 0 $$
where $f: {\mathcal X} \to R$ is an L-Lipschitz continuous function defined on a compact subset ${\mathcal X} \subseteq R^d$ and $c_1,\dots,c_m$ are a series of known constraints that determine the feasible region of the problem. We will see the syntax that we need to use to solve these problems with Bayesian Optimization using GPyOpt. First we start by loading GPyOpt and GPy.
End of explanation
func = GPyOpt.objective_examples.experiments2d.sixhumpcamel()
Explanation: In this example we will optimize the 2D Six-Hump Camel function (available in GPyOpt). We will assume that exact evaluations of the function are observed. The explicit form of the function is:
$$f(x_1,x_2) = 4x_1^2 - 2.1x_1^4 + x_1^6/3 + x_1x_2 - 4x_2^2 + 4x_2^4$$
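As a quick sanity check (written here only for illustration; the notebook itself relies on the built-in objective), the formula can be evaluated directly and compared against func.f:
import numpy as np
def six_hump_camel(x1, x2):
    return 4*x1**2 - 2.1*x1**4 + x1**6/3 + x1*x2 - 4*x2**2 + 4*x2**4
six_hump_camel(0.0898, -0.7126)   # roughly -1.0316, the known global minimum value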
End of explanation
space =[{'name': 'var_1', 'type': 'continuous', 'domain': (-1,1)},
{'name': 'var_2', 'type': 'continuous', 'domain': (-1.5,1.5)}]
Explanation: Imagine that we were optimizing the function in the intervals $(-1,1)\times (-1.5,1.5)$. As usual, we can define these box constraints as:
End of explanation
constraints = [{'name': 'constr_1', 'constraint': '-x[:,1] -.5 + abs(x[:,0]) - np.sqrt(1-x[:,0]**2)'},
{'name': 'constr_2', 'constraint': 'x[:,1] +.5 - abs(x[:,0]) - np.sqrt(1-x[:,0]**2)'}]
Explanation: This will be a standard case of optimizing the function in a hypercube. However, in this case we are going to study how to solve optimization problems with arbitrary constraints. In particular, we consider the problem of finding the minimum of the function in the region defined by
$$-x_2 - .5 + |x_1| -\sqrt{1-x_1^2} \leq 0 $$
$$ x_2 + .5 - |x_1| -\sqrt{1-x_1^2} \leq 0 $$
We can define these constraints as
End of explanation
feasible_region = GPyOpt.Design_space(space = space, constraints = constraints)
Explanation: And create the feasible region of the problem by writing:
End of explanation
## Grid of points to make the plots
grid = 400
bounds = feasible_region.get_continuous_bounds()
X1 = np.linspace(bounds[0][0], bounds[0][1], grid)
X2 = np.linspace(bounds[1][0], bounds[1][1], grid)
x1, x2 = np.meshgrid(X1, X2)
X = np.hstack((x1.reshape(grid*grid,1),x2.reshape(grid*grid,1)))
## Check the points in the feasible region.
masked_ind = feasible_region.indicator_constraints(X).reshape(grid,grid)
masked_ind = np.ma.masked_where(masked_ind > 0.5, masked_ind)
masked_ind[1,1]=1
## Make the plots
plt.figure(figsize=(14,6))
# Feasible region
plt.subplot(121)
plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=1,origin ='lower')
plt.text(-0.25,0,'FEASIBLE',size=20)
plt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white')
plt.subplot(122)
plt.plot()
plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100, alpha=1,origin ='lower')
plt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum')
plt.legend()
plt.title('Six-Hump Camel function',size=20)
Explanation: Now, let's have a look at what we have. Let's make a plot of the feasible region and the function with the original box-constraints. Note that the function .indicator_constraints(X) takes value 1 if we are in the feasible region and 0 otherwise.
End of explanation
plt.figure(figsize=(6.5,6))
OB = plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100,alpha=1)
IN = plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=.5,origin ='lower')
plt.text(-0.25,0,'FEASIBLE',size=20,color='white')
plt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white')
plt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum')
plt.title('Six-Hump Camel with restrictions',size=20)
plt.legend()
Explanation: The Six-Hump Camel function has two global minima. However, with the constraints that we are using, only one of the two is a valid one. We can see this by overlapping the two previous plots.
End of explanation
# --- CHOOSE the intial design
from numpy.random import seed # fixed seed
seed(123456)
initial_design = GPyOpt.experiment_design.initial_design('random', feasible_region, 10)
Explanation: We will use the modular interface to solve this problem. We start by generating a random initial design of 10 points to start the optimization. We just need to do:
End of explanation
plt.figure(figsize=(6.5,6))
OB = plt.contourf(X1, X2, func.f(X).reshape(grid,grid),100,alpha=1)
IN = plt.contourf(X1, X2, masked_ind ,100, cmap= plt.cm.bone, alpha=.5,origin ='lower')
plt.text(-0.25,0,'FEASIBLE',size=20,color='white')
plt.text(-0.3,1.1,'INFEASIBLE',size=20,color='white')
plt.plot(np.array(func.min)[:,0], np.array(func.min)[:,1], 'r.', markersize=20, label=u'Minimum')
plt.title('Six-Hump Camel with restrictions',size=20)
plt.plot(initial_design[:,0],initial_design[:,1],'yx',label = 'Design')
plt.legend()
Explanation: Importantly, the points are always generated within the feasible region as we can check here:
End of explanation
# --- CHOOSE the objective
objective = GPyOpt.core.task.SingleObjective(func.f)
# --- CHOOSE the model type
model = GPyOpt.models.GPModel(exact_feval=True,optimize_restarts=10,verbose=False)
# --- CHOOSE the acquisition optimizer
aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(feasible_region)
# --- CHOOSE the type of acquisition
acquisition = GPyOpt.acquisitions.AcquisitionEI(model, feasible_region, optimizer=aquisition_optimizer)
# --- CHOOSE a collection method
evaluator = GPyOpt.core.evaluators.Sequential(acquisition)
Explanation: Now, we choose the rest of the objects that we need to run the optimization. We will use a Gaussian Process with parameters fitted using MLE and the Expected improvement. We use the default BFGS optimizer of the acquisition. Evaluations of the function are done sequentially.
End of explanation
# BO object
bo = GPyOpt.methods.ModularBayesianOptimization(model, feasible_region, objective, acquisition, evaluator, initial_design)
Explanation: Next, we create the BO object to run the optimization.
End of explanation
# --- Stop conditions
max_time = None
max_iter = 5
tolerance = 1e-8 # distance between two consecutive observations
# Run the optimization
bo.run_optimization(max_iter = max_iter, max_time = max_time, eps = tolerance, verbosity=False)
bo.plot_acquisition()
Explanation: We first run the optimization for 5 steps and check how the results look.
End of explanation
# Run the optimization
max_iter = 25
bo.run_optimization(max_iter = max_iter, max_time = max_time, eps = tolerance, verbosity=False)
bo.plot_acquisition()
bo.plot_convergence()
# Best found value
np.round(bo.x_opt,2)
# True min
np.round(func.min[0],2)
Explanation: See how the optimization is only done within the feasible region; outside it, the value of the acquisition is zero, so no evaluation is selected in that region. We run 20 more iterations to see the acquisition and convergence.
End of explanation |
10,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
Step1: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Here, just creating some placeholders like normal.
Step6: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper
Step7: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note
Step9: Model Loss
Calculating the loss like before, nothing new here.
Step11: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
Step12: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise | Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
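A one-line check of the rescaling defined above, assuming pixel values in 0-255:
scale(np.array([0., 127.5, 255.]))   # -> array([-1.,  0.,  1.])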
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
layer_1_output = tf.layers.dense(z,4*4*1024)
layer_1_output = tf.reshape(layer_1_output,[-1,4,4,1024])
layer_1_output = tf.layers.batch_normalization(layer_1_output, training=training)
layer_1_output = tf.maximum(alpha * layer_1_output,layer_1_output)
layer_2_output = tf.layers.conv2d_transpose(layer_1_output, 256, 5, strides = 2, padding = 'same')
layer_2_output = tf.layers.batch_normalization(layer_2_output, training = training)
layer_2_output = tf.maximum(alpha * layer_2_output, layer_2_output)
layer_3_output = tf.layers.conv2d_transpose(layer_2_output, 128, 5, strides = 2, padding = 'same')
layer_3_output = tf.layers.batch_normalization(layer_3_output, training = training)
layer_3_output = tf.maximum(alpha * layer_3_output, layer_3_output)
# Output layer, 32x32x3
logits = tf.layers.conv2d_transpose(layer_3_output, output_dim, 5, strides=2, padding = 'same')
out = tf.tanh(logits)
return out
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
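A small companion sketch (illustration only) tracing the spatial sizes produced by the 'same'-padded, stride-2 transposed convolutions in the implementation above:
# With 'same' padding, each stride-2 transposed convolution doubles height and width.
size = 4                                # after the dense layer and reshape to 4x4x1024
for depth in (256, 128, 3):
    size *= 2
    print('{0}x{0}x{1}'.format(size, depth))    # 8x8x256, 16x16x128, 32x32x3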
End of explanation
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
# For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer
layer1 = tf.layers.conv2d(inputs = x,
filters=128,
kernel_size=5,
strides=2,
padding = 'same')
layer1 = tf.maximum(alpha*layer1, layer1)
layer2 = tf.layers.conv2d(inputs = layer1,
filters=256,
kernel_size=5,
strides=2,
padding = 'same')
layer2 = tf.layers.batch_normalization(layer2, training=True)
layer2 = tf.maximum(alpha*layer2, layer2)
        logits = tf.reshape(layer2, (-1, 8*8*256))  # two stride-2 convolutions take 32x32 down to 8x8
logits = tf.layers.dense(logits,1)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 image, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
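And the mirror image for the discriminator above, where each 'same'-padded stride-2 convolution halves the spatial size (illustration only):
size = 32
for depth in (128, 256):
    size //= 2
    print('{0}x{0}x{1}'.format(size, depth))    # 16x16x128, then 8x8x256 before flattening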
End of explanation
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """Get the loss for the discriminator and generator.

    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param output_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """Get optimization operations.

    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
Explanation: Here is a function for displaying generated images.
End of explanation
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; roughly speaking, that means it is classifying images as fake or real correctly about 50% of the time.
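For reference, a hedged starting point using the settings reported in the DCGAN paper (these are suggestions, not the unique answer; the epoch count is purely illustrative):
# DCGAN-paper-style settings
real_size = (32, 32, 3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25      # illustrative; train longer for better samples
alpha = 0.2      # leaky ReLU slope
beta1 = 0.5      # Adam first-moment decay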
End of explanation |
10,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LDA/QDA on height/weight data
We're asked to fit a Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) model to the height/weight data and compute the misclassification rate. See my implementation of these algorithms on GitHub. The data can be found here.
Step1: Let's plot the data, coloring males blue and females red, too.
Step2: Let's define a few globals to help with the plotting and model fitting.
Step3: Let's try the QDA model first. We follow the conventions of the sklearn implementation. Let $X$ be our data or design matrix and $y$ be our class label. The rows of $X$ consist of the vectors $\mathbf{x}_i$, which are the individual observations. We let $y_i \sim \mathrm{Multinomial}\left(\theta_1,\ldots,\theta_K\right)$, where $\sum_{k=1}^K \theta_k = 1$. We let $\mathbf{x}_i \mid y_i = k \sim \mathcal{N}\left(\mathbf{\mu}_k, \Sigma_k\right)$, which is a multivariate normal. We use the following estimates for the parameters
Step4: Let's look at LDA now. We compute $\hat{\theta}_k$ and $\hat{\mu}_k$ in the same manner. However, now we have that all covariances are equal, that is, $\hat{\Sigma} = \hat{\Sigma}_k$ for all $k$. Let $p$ be the number of features, that is, the number of columns in $X$. First, we note that
\begin{align}
\log p\left(\mathcal{D} \mid \boldsymbol\mu, \Sigma, \boldsymbol\theta\right)
&= \sum_{i=1}^N \log p\left(\mathbf{x}_i, y_i \mid \boldsymbol\mu, \Sigma, \boldsymbol\theta\right)
= \sum_{k=1}^K \sum_{\{i\,:\,y_i = k\}} \log p\left(\mathbf{x}_i, y_i = k \mid \mu_k, \Sigma, \theta_k\right).
\end{align}
Step5: Clearly, we see that the two methods give us different decision boundaries. However, for these data, the misclassification rate is the same. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
# benchmark sklearn implementations, these are much faster
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# my own implementation, results are identical but runs much slower
import DiscriminantAnalysis
raw_data = pd.read_csv("heightWeightData.txt", header=None, names=["gender", "height", "weight"])
raw_data.info()
raw_data.head()
Explanation: LDA/QDA on height/weight data
We're asked to fit a Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) model to the height/weight data and compute the misclassification rate. See my implementation of these algorithms on GitHub. The data can be found here.
End of explanation
plt.figure(figsize=(8,8))
labels = {1: 'Male', 2: 'Female'}
colors = {1: 'blue', 2: 'red'}
def plot_height_weight(title="Weight vs Height", ax=None):
if ax == None:
ax = plt.gca()
for name, group in raw_data.groupby('gender'):
ax.scatter(group.height, group.weight, color=colors[name], label=labels[name])
ax.set_title(title)
ax.set_xlabel('height')
ax.set_ylabel('weight')
ax.legend(loc='upper left')
ax.grid()
plot_height_weight()
plt.show()
Explanation: Let's plot the data, coloring males blue and females red, too.
End of explanation
x_min = 50
x_max = 85
y_min = 50
y_max = 300
X = raw_data[['height', 'weight']].as_matrix()
y = raw_data['gender'].as_matrix()
xx, yy = np.meshgrid(np.linspace(x_min, x_max, num=200, endpoint=True),
np.linspace(y_min, y_max, num=200, endpoint=True))
cmap_light = ListedColormap(['#AAAAFF','#FFAAAA'])
def plot_height_weight_mesh(xx, yy, Z, comment=None, title=None, ax=None):
if ax == None:
ax = plt.gca()
ax.pcolormesh(xx, yy, Z, cmap=cmap_light)
if title == None:
plot_height_weight(ax=ax)
else:
plot_height_weight(ax=ax, title = title)
ax.set_xlim([x_min, x_max])
ax.set_ylim([y_min, y_max])
if comment != None:
ax.text(0.95, 0.05, comment, transform=ax.transAxes,
verticalalignment="bottom", horizontalalignment="right",
fontsize=14)
def decimal_to_percent(x, decimals=2):
return '{0:.2f}%'.format(np.round(100*x, decimals=2))
Explanation: Let's define a few globals to help with the plotting and model fitting.
End of explanation
# qda = QuadraticDiscriminantAnalysis(store_covariances=True) # sklearn implementation
qda = DiscriminantAnalysis.QDA() # my implementation
qda.fit(X, y)
qda_misclassification = 1 - qda.score(X, y)
qda_Z = qda.predict(np.c_[xx.ravel(), yy.ravel()])
qda_Z = qda_Z.reshape(xx.shape)
plt.figure(figsize=(8,8))
plot_height_weight_mesh(xx, yy, qda_Z, title="Weight vs Height: QDA",
comment="Misclassification rate: " + decimal_to_percent(qda_misclassification))
plt.show()
Explanation: Let's try the QDA model first. We follow the conventions of the sklearn implementation. Let $X$ be our data or design matrix and $y$ be our class label. The rows of $X$ consist of the vectors $\mathbf{x}_i$, which are the individual observations. We let $y_i \sim \mathrm{Multinomial}\left(\theta_1,\ldots,\theta_K\right)$, where $\sum_{k=1}^K \theta_k = 1$. We let $\mathbf{x}_i \mid y_i = k \sim \mathcal{N}\left(\mathbf{\mu}_k, \Sigma_k\right)$, which is a multivariate normal. We use the following estimates for the parameters:
\begin{align}
\hat{\theta}_k &= \frac{N_k}{N} \\
\hat{\mu}_k &= \frac{1}{N_k}\sum_{\{i\,:\,y_i = k\}}\mathbf{x}_i \\
\hat{\Sigma}_k &= \frac{1}{N_k - 1}\sum_{\{i\,:\,y_i = k\}} \left(\mathbf{x}_i - \hat{\mu}_k\right)\left(\mathbf{x}_i - \hat{\mu}_k\right)^\intercal.
\end{align}
$N$ is the total number of observations, and $N_k$ is the number of observations of class $k$. Thus, for each class $k$, we compute the proportion of observations that are of that class, the sample mean, and the unbiased estimate for covariance.
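As a purely illustrative NumPy sketch of these estimators (this is not the notebook's DiscriminantAnalysis module, just the formulas written out):
import numpy as np

def qda_estimates(X, y):
    # theta_k, mu_k, Sigma_k for each class, following the formulas above
    N = len(y)
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        theta_k = float(len(Xk)) / N                  # class proportion
        mu_k = Xk.mean(axis=0)                        # class mean
        Sigma_k = np.cov(Xk, rowvar=False, ddof=1)    # unbiased covariance
        params[k] = (theta_k, mu_k, Sigma_k)
    return params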
End of explanation
# lda = LinearDiscriminantAnalysis(store_covariance=True) # sklearn implementation
lda = DiscriminantAnalysis.LDA() # my implementation
lda.fit(X, y)
lda_misclassification = 1 - lda.score(X, y)
lda_Z = lda.predict(np.c_[xx.ravel(), yy.ravel()])
lda_Z = lda_Z.reshape(xx.shape)
plt.figure(figsize=(8,8))
plot_height_weight_mesh(xx, yy, lda_Z, title="Weight vs Height: LDA",
comment="Misclassification rate: " + decimal_to_percent(lda_misclassification))
plt.show()
Explanation: Let's look at LDA now. We compute $\hat{\theta}_k$ and $\hat{\mu}_k$ in the same manner. However, now we have that all covariances are equal, that is, $\hat{\Sigma} = \hat{\Sigma}_k$ for all $k$. Let $p$ be the number of features, that is, the number of columns in $X$. First, we note that
\begin{align}
\log p\left(\mathcal{D} \mid \boldsymbol\mu, \Sigma, \boldsymbol\theta\right)
&= \sum_{i=1}^N \log p\left(\mathbf{x}_i, y_i \mid \boldsymbol\mu, \Sigma, \boldsymbol\theta\right)
= \sum_{k=1}^K \sum_{\{i\,:\,y_i = k\}} \log p\left(\mathbf{x}_i, y_i = k \mid \mu_k, \Sigma, \theta_k\right) \\
&= \sum_{k=1}^K \sum_{\{i\,:\,y_i = k\}} \left[\log p(y_i = k) + \log p\left(\mathbf{x}_i \mid \mu_k, \Sigma, y_i=k\right)\right] \\
&= \sum_{k=1}^K \sum_{\{i\,:\,y_i = k\}} \left[\log \theta_k - \frac{p}{2}\log 2\pi - \frac{1}{2}\log|\Sigma|
- \frac{1}{2}\left(\mathbf{x}_i - \mu_k\right)^\intercal\Sigma^{-1}\left(\mathbf{x}_i - \mu_k\right)\right] \\
&= -\frac{Np}{2}\log 2\pi - \frac{N}{2}\log|\Sigma| + \sum_{k=1}^K \left(N_k\log\theta_k - \frac{1}{2}\sum_{\{i\,:\,y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)^\intercal\Sigma^{-1}\left(\mathbf{x}_i - \mu_k\right)\right).
\end{align}
Let $\Lambda = \Sigma^{-1}$. The MLE is invariant with regard to reparameterization, so after isolating the terms that involve $\Sigma$ we can focus on maximizing
\begin{align}
l(\Lambda) &= \frac{N}{2}\log|\Lambda| - \frac{1}{2}\sum_{k=1}^K \sum_{\{i\,:\,y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)^\intercal\Lambda\left(\mathbf{x}_i - \mu_k\right) \\
&= \frac{N}{2}\log|\Lambda| - \frac{1}{2}\sum_{k=1}^K \sum_{\{i\,:\,y_i = k\}}\operatorname{tr}\left(\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\Lambda\right),
\end{align}
where we have used the fact that $\left(\mathbf{x}_i - \mu_k\right)^\intercal\Sigma^{-1}\left(\mathbf{x}_i - \mu_k\right)$ is a scalar, so we can replace it with the trace, and then, we apply the fact that the trace remains the same after cyclic permutations.
Now, we note these two identities to help us take the derivative with respect to $\Lambda$,
\begin{align}
\frac{\partial}{\partial\Lambda}\log|\Lambda| &= \left(\Lambda^{-1}\right)^\intercal \\
\frac{\partial}{\partial\Lambda}\operatorname{tr}\left(A\Lambda\right) &= A^\intercal.
\end{align}
Thus, we'll have that
\begin{align}
l^\prime(\Lambda) &= \frac{N}{2}\left(\Lambda^{-1}\right)^\intercal - \frac{1}{2}\sum_{k=1}^K\left(\sum_{\{i\,:\,y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\right) \\
&= \frac{N}{2}\Sigma - \frac{1}{2}\sum_{k=1}^K\left(\sum_{\{i\,:\,y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\right)
\end{align}
since $\Lambda^{-1} = \Sigma$ and $\Sigma$ is symmetric.
Setting $l^\prime(\Lambda) = 0$ and solving for $\Sigma$, we have that
\begin{equation}
\Sigma = \frac{1}{N} \sum_{k=1}^K\left(\sum_{\{i\,:\,y_i = k\}}\left(\mathbf{x}_i - \mu_k\right)\left(\mathbf{x}_i - \mu_k\right)^\intercal\right).
\end{equation}
In this manner our estimate is
\begin{equation}
\hat{\Sigma} = \frac{1}{N} \sum_{k=1}^K\left(\sum_{\{i\,:\,y_i = k\}}\left(\mathbf{x}_i - \hat{\mu}_k\right)\left(\mathbf{x}_i - \hat{\mu}_k\right)^\intercal\right).
\end{equation}
For whatever reason, sklearn uses the biased MLE estimate for covariance in LDA, but it uses the unbiased estimate for covariance in QDA.
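A short NumPy sketch of the resulting pooled (biased, MLE) covariance estimate, for illustration only:
def pooled_covariance_mle(X, y):
    # (1/N) * sum over classes of the within-class scatter, as derived above
    N, p = X.shape
    Sigma = np.zeros((p, p))
    for k in np.unique(y):
        Xk = X[y == k]
        diff = Xk - Xk.mean(axis=0)
        Sigma += diff.T.dot(diff)
    return Sigma / N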
End of explanation
fig = plt.figure(figsize=(14,8))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
plot_height_weight_mesh(xx, yy, qda_Z, title="Weight vs Height: QDA", ax=ax1,
comment="Misclassification rate: " + decimal_to_percent(qda_misclassification))
plot_height_weight_mesh(xx, yy, lda_Z, title="Weight vs Height: LDA", ax=ax2,
comment="Misclassification rate: " + decimal_to_percent(lda_misclassification))
fig.suptitle('Comparison of QDA and LDA', fontsize=18)
plt.show()
Explanation: Clearly, we see that the two methods give us different decision boundaries. However, for these data, the misclassification rate is the same.
End of explanation |
10,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two Moons Normalizing Flow Using Distrax + Haiku
Neural Spline Flow based off of distrax documentation for a flow. Code to load 2 moons example dataset sourced from Chris Waites's jax-flows demo.
Step1: Plotting 2 moons dataset
Code taken directly from Chris Waites's jax-flows demo. This is the distribution we want to create a bijection to from a simple base distribution, such as a gaussian distribution.
Step4: Creating the normalizing flow in distrax+haiku
Instead of a uniform distribution, we use a normal distribution as the base distribution. This makes more sense for a standardized two moons dataset that is scaled according to a normal distribution using sklearn's StandardScaler(). Using a uniform base distribution will result in inf and nan loss.
Step6: Setting up the optimizer
Step7: Training the flow | Python Code:
!pip install -qq -U dm-haiku distrax optax
import matplotlib.pyplot as plt
from IPython.display import clear_output
from sklearn import datasets, preprocessing
try:
import distrax
except ModuleNotFoundError:
%pip install -qq distrax
import distrax
import jax
import jax.numpy as jnp
import numpy as np
try:
import haiku as hk
except ModuleNotFoundError:
%pip install -qq haiku
import haiku as hk
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import tensorflow as tf
except ModuleNotFoundError:
%pip install -qq tensorflow
import tensorflow as tf
try:
import tensorflow_datasets as tfds
except ModuleNotFoundError:
%pip install -qq tensorflow tensorflow_datasets
import tensorflow_datasets as tfds
try:
from tensorflow_probability.substrates import jax as tfp
except ModuleNotFoundError:
%pip install -qq tensorflow-probability
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions
# key = jax.random.PRNGKey(1234)
Explanation: Two Moons Normalizing Flow Using Distrax + Haiku
Neural Spline Flow based off of distrax documentation for a flow. Code to load 2 moons example dataset sourced from Chris Waites's jax-flows demo.
End of explanation
n_samples = 10000
plot_range = [(-2, 2), (-2, 2)]
n_bins = 100
scaler = preprocessing.StandardScaler()
X, _ = datasets.make_moons(n_samples=n_samples, noise=0.05)
X = scaler.fit_transform(X)
plt.hist2d(X[:, 0], X[:, 1], bins=n_bins, range=plot_range)[-1]
plt.savefig("two-moons-original.pdf")
plt.savefig("two-moons-original.png")
Explanation: Plotting 2 moons dataset
Code taken directly from Chris Waites's jax-flows demo. This is the distribution we want to create a bijection to from a simple base distribution, such as a gaussian distribution.
End of explanation
from typing import Any, Iterator, Mapping, Optional, Sequence, Tuple
# Hyperparams - change these to experiment
flow_num_layers = 8
mlp_num_layers = 4
hidden_size = 1000
num_bins = 8
batch_size = 512
learning_rate = 1e-4
eval_frequency = 100
Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any
# Functions to create a distrax normalizing flow
def make_conditioner(
event_shape: Sequence[int], hidden_sizes: Sequence[int], num_bijector_params: int
) -> hk.Sequential:
Creates an MLP conditioner for each layer of the flow.
return hk.Sequential(
[
hk.Flatten(preserve_dims=-len(event_shape)),
hk.nets.MLP(hidden_sizes, activate_final=True),
# We initialize this linear layer to zero so that the flow is initialized
# to the identity function.
hk.Linear(np.prod(event_shape) * num_bijector_params, w_init=jnp.zeros, b_init=jnp.zeros),
hk.Reshape(tuple(event_shape) + (num_bijector_params,), preserve_dims=-1),
]
)
def make_flow_model(
event_shape: Sequence[int], num_layers: int, hidden_sizes: Sequence[int], num_bins: int
) -> distrax.Transformed:
Creates the flow model.
# Alternating binary mask.
mask = jnp.arange(0, np.prod(event_shape)) % 2
mask = jnp.reshape(mask, event_shape)
mask = mask.astype(bool)
def bijector_fn(params: Array):
return distrax.RationalQuadraticSpline(params, range_min=-2.0, range_max=2.0)
# Number of parameters for the rational-quadratic spline:
# - `num_bins` bin widths
# - `num_bins` bin heights
# - `num_bins + 1` knot slopes
# for a total of `3 * num_bins + 1` parameters.
num_bijector_params = 3 * num_bins + 1
layers = []
for _ in range(num_layers):
layer = distrax.MaskedCoupling(
mask=mask,
bijector=bijector_fn,
conditioner=make_conditioner(event_shape, hidden_sizes, num_bijector_params),
)
layers.append(layer)
# Flip the mask after each layer.
mask = jnp.logical_not(mask)
# We invert the flow so that the `forward` method is called with `log_prob`.
flow = distrax.Inverse(distrax.Chain(layers))
# Making base distribution normal distribution
mu = jnp.zeros(event_shape)
sigma = jnp.ones(event_shape)
base_distribution = distrax.Independent(distrax.MultivariateNormalDiag(mu, sigma))
return distrax.Transformed(base_distribution, flow)
def load_dataset(split: tfds.Split, batch_size: int) -> Iterator[Batch]:
# ds = tfds.load("mnist", split=split, shuffle_files=True)
ds = split
ds = ds.shuffle(buffer_size=10 * batch_size)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=1000)
ds = ds.repeat()
return iter(tfds.as_numpy(ds))
def prepare_data(batch: Batch, prng_key: Optional[PRNGKey] = None) -> Array:
data = batch.astype(np.float32)
return data
@hk.without_apply_rng
@hk.transform
def model_sample(key: PRNGKey, num_samples: int) -> Array:
model = make_flow_model(
event_shape=TWO_MOONS_SHAPE,
num_layers=flow_num_layers,
hidden_sizes=[hidden_size] * mlp_num_layers,
num_bins=num_bins,
)
return model.sample(seed=key, sample_shape=[num_samples])
@hk.without_apply_rng
@hk.transform
def log_prob(data: Array) -> Array:
model = make_flow_model(
event_shape=TWO_MOONS_SHAPE,
num_layers=flow_num_layers,
hidden_sizes=[hidden_size] * mlp_num_layers,
num_bins=num_bins,
)
return model.log_prob(data)
def loss_fn(params: hk.Params, prng_key: PRNGKey, batch: Batch) -> Array:
data = prepare_data(batch, prng_key)
# Loss is average negative log likelihood.
loss = -jnp.mean(log_prob.apply(params, data))
return loss
@jax.jit
def eval_fn(params: hk.Params, batch: Batch) -> Array:
data = prepare_data(batch) # We don't dequantize during evaluation.
loss = -jnp.mean(log_prob.apply(params, data))
return loss
Explanation: Creating the normalizing flow in distrax+haiku
Instead of a uniform distribution, we use a normal distribution as the base distribution. This makes more sense for a standardized two moons dataset that is scaled according to a normal distribution using sklearn's StandardScaler(). Using a uniform base distribution will result in inf and nan loss.
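As an illustrative sketch of that choice (assuming jnp and distrax are imported as above, and a two-moons event shape of (2,)): the uniform variant mirrors the distrax documentation example, while the normal variant is what make_flow_model uses.
event_shape = (2,)  # the two-moons event shape used in this notebook
mu, sigma = jnp.zeros(event_shape), jnp.ones(event_shape)
normal_base = distrax.MultivariateNormalDiag(mu, sigma)        # supports all of R^2
uniform_base = distrax.Independent(                            # support restricted to [0, 1]^2
    distrax.Uniform(low=jnp.zeros(event_shape), high=jnp.ones(event_shape)),
    reinterpreted_batch_ndims=1)
# Standardized data falls outside [0, 1], so uniform_base.log_prob(x) is -inf there,
# which is why the uniform base produced inf/nan losses here.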
End of explanation
optimizer = optax.adam(learning_rate)
@jax.jit
def update(params: hk.Params, prng_key: PRNGKey, opt_state: OptState, batch: Batch) -> Tuple[hk.Params, OptState]:
Single SGD update step.
grads = jax.grad(loss_fn)(params, prng_key, batch)
updates, new_opt_state = optimizer.update(grads, opt_state)
new_params = optax.apply_updates(params, updates)
return new_params, new_opt_state
Explanation: Setting up the optimizer
End of explanation
# Event shape
TWO_MOONS_SHAPE = (2,)
# Create tf dataset from sklearn dataset
dataset = tf.data.Dataset.from_tensor_slices(X)
# Splitting into train/validate ds
train = dataset.skip(2000)
val = dataset.take(2000)
# load_dataset(split: tfds.Split, batch_size: int)
train_ds = load_dataset(train, 512)
valid_ds = load_dataset(val, 512)
# Initializing PRNG and Neural Net params
prng_seq = hk.PRNGSequence(1)
params = log_prob.init(next(prng_seq), np.zeros((1, *TWO_MOONS_SHAPE)))
opt_state = optimizer.init(params)
training_steps = 1000
for step in range(training_steps):
params, opt_state = update(params, next(prng_seq), opt_state, next(train_ds))
if step % eval_frequency == 0:
val_loss = eval_fn(params, next(valid_ds))
print(f"STEP: {step:5d}; Validation loss: {val_loss:.3f}")
n_samples = 10000
plot_range = [(-2, 2), (-2, 2)]
n_bins = 100
X_transf = model_sample.apply(params, next(prng_seq), num_samples=n_samples)
plt.hist2d(X_transf[:, 0], X_transf[:, 1], bins=n_bins, range=plot_range)[-1]
plt.savefig("two-moons-flow.pdf")
plt.savefig("two-moons-flow.png")
plt.show()
Explanation: Training the flow
End of explanation |
10,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Idea
Using the vmstat command line utility to quickly determine the root cause of performance problems.
Step1: Data Input
In this version, we use a helper library that I've built to read in data sources into pandas' DataFrame.
Step2: Data Selection
Step3: Visualization | Python Code:
%less ../datasets/vmstat_loadtest.log
Explanation: Idea
Using the vmstat command line utility to quickly determine the root cause of performance problems.
End of explanation
from ozapfdis.linux import vmstat
stats = vmstat.read_logfile("../datasets/vmstat_loadtest.log")
stats.head()
Explanation: Data Input
In this version, we use a helper library that I've built to read in data sources into pandas' DataFrame.
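If the ozapfdis helper isn't available, a rough fallback sketch is shown below. It assumes standard procps vmstat output, i.e. one group-header line followed by one column-name line, and the usual us/sy/id/wa/st CPU columns.
import pandas as pd
# skiprows=1 drops the "procs ---memory--- ..." group header; the next line provides column names
stats = pd.read_csv("../datasets/vmstat_loadtest.log", delim_whitespace=True, skiprows=1)
cpu_data = stats[["us", "sy", "id", "wa", "st"]]   # same columns as stats.iloc[:, -5:] below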
End of explanation
cpu_data = stats.iloc[:, -5:]
cpu_data.head()
Explanation: Data Selection
End of explanation
%matplotlib inline
cpu_data.plot.area();
Explanation: Visualization
End of explanation |
10,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Olfaction Model Demo
This notebook illustrates how to run a Neurokernel-based model of part of the fruit fly's antennal lobe.
Background
The early olfactory system in Drosophila consists of two antennal lobes,
one on each side of the fly brain. Each of these LPUs contains 49 glomeruli that
differ in functionality, size, shape, and relative position. Each glomerulus
receives axons from about 50 olfactory receptor neurons (ORNs) on each of the
fly's two antennae that express the same odorant receptor. The axons of each ORN
connect to the dendrites of 3 to 5 projection neurons (PNs) in the glomeruli.
In addition to the PNs - which transmit olfactory information to the higher
regions of the brain - the antennal lobes contain local neurons (LNs) whose
connections are restricted to the lobes; inter-glomerular connectivity therefore
subsumes synaptic connections between ORNs and PNs, ORNs and LNs, LNs and PNs, and feedback
from PNs to LNs. The entire early olfactory system in Drosophila
contains approximately 4000 neurons.
The current model of the each antennal lobe comprises 49 glomerular channels
with full intraglomerular connectivity in both hemispheres of the fly brain. The
entire model comprises 2800 neurons, or 70% of the fly's entire antennal
lobe. All neurons in the system are modeled using the Leaky Integrate-and-Fire
(LIF) model and all synaptic currents elicited by spikes are modeled using alpha functions.
Parameters for 24 of the glomerular channels are based upon currently available
ORN type data (Hallem et al., 2006); all other channels are configured with
artificial parameters.
A script that generates a GEXF file containing the antennal lobe model configuration is included in the examples/olfaction/data subdirectory of the Neurokernel source code.
Executing the Model
Assuming that the Neurokernel source has been cloned to ~/neurokernel, we first generate an input signal of duration 1.0 seconds and construct the LPU configuration
Step1: Next, we identify the indices of the olfactory sensory neurons (OSNs) and projection neurons (PNs) associated with a specific glomerulus; in this case, we focus on glomerulus DA1
Step2: We now execute the model
Step3: Next, we display the input odorant concentration profile and the spikes produced by the 25 OSNs and 3 PNs associated with glomerulus DA1 in the model | Python Code:
%cd -q ~/neurokernel/examples/olfaction/data
%run gen_olf_input.py
%run create_olf_gexf.py
Explanation: Olfaction Model Demo
This notebook illustrates how to run a Neurokernel-based model of part of the fruit fly's antennal lobe.
Background
The early olfactory system in Drosophila consists of two antennal lobes,
one on each side of the fly brain. Each of these LPUs contains 49 glomeruli that
differ in functionality, size, shape, and relative position. Each glomerulus
receives axons from about 50 olfactory receptor neurons (ORNs) on each of the
fly's two antennae that express the same odorant receptor. The axons of each ORN
connect to the dendrites of 3 to 5 projection neurons (PNs) in the glomeruli.
In addition to the PNs - which transmit olfactory information to the higher
regions of the brain - the antennal lobes contain local neurons (LNs) whose
connections are restricted to the lobes; inter-glomerular connectivity therefore
subsumes synaptic connections between ORNs and PNs, ORNs and LNs, LNs and PNs, and feedback
from PNs to LNs. The entire early olfactory system in Drosophila
contains approximately 4000 neurons.
The current model of the each antennal lobe comprises 49 glomerular channels
with full intraglomerular connectivity in both hemispheres of the fly brain. The
entire model comprises 2800 neurons, or 70% of the fly's entire antennal
lobe. All neurons in the system are modeled using the Leaky Integrate-and-Fire
(LIF) model and all synaptic currents elicited by spikes are modeled using alpha functions.
Parameters for 24 of the glomerular channels are based upon currently available
ORN type data (Hallem et al., 2006); all other channels are configured with
artificial parameters.
A script that generates a GEXF file containing the antennal lobe model configuration is included in the examples/olfaction/data subdirectory of the Neurokernel source code.
Executing the Model
Assuming that the Neurokernel source has been cloned to ~/neurokernel, we first generate an input signal of duration 1.0 seconds and construct the LPU configuration:
End of explanation
import re
import networkx as nx
import neurokernel.tools.graph
g = nx.read_gexf('antennallobe.gexf.gz')
df_node, df_edge = neurokernel.tools.graph.graph_to_df(g)
glom_name = 'DA1'
osn_ind = sorted(list(set([ind[0] for ind in \
df_edge[df_edge.name.str.contains('.*-%s_.*' % glom_name)].index])))
pn_ind = sorted(list(set([ind[1] for ind in \
df_edge[df_edge.name.str.contains('.*-%s_.*' % glom_name)].index])))
# Get OSN and PN label indices:
osn_ind_labels = [int(re.search('osn_.*_(\d+)', name).group(1)) \
for name in df_node.ix[osn_ind].name]
pn_ind_labels = [int(re.search('.*_pn_(\d+)', name).group(1)) \
for name in df_node.ix[pn_ind].name]
Explanation: Next, we identify the indices of the olfactory sensory neurons (OSNs) and projection neurons (PNs) associated with a specific glomerulus; in this case, we focus on glomerulus DA1:
End of explanation
%cd -q ~/neurokernel/examples/olfaction
%run olfaction_demo.py
Explanation: We now execute the model:
End of explanation
%matplotlib inline
import h5py
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
fmt = lambda x, pos: '%2.2f' % (float(x)/1e4)
with h5py.File('./data/olfactory_input.h5', 'r') as fi, \
h5py.File('olfactory_output_spike.h5','r') as fo:
data_i = fi['array'].value
data_o = fo['array'].value
mpl.rcParams['figure.dpi'] = 120
mpl.rcParams['figure.figsize'] = (12,9)
raster = lambda data: plt.eventplot([np.nonzero(data[i, :])[0] for i in xrange(data.shape[0])],
colors = [(0, 0, 0)],
lineoffsets = np.arange(data.shape[0]),
linelengths = np.ones(data.shape[0])/2.0)
f = plt.figure()
plt.subplot(311)
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.FuncFormatter(fmt))
plt.plot(data_i[:10000, 0]);
ax.set_ylim(np.min(data_i)-1, np.max(data_i)+1)
ax.set_xlim(0, 10000)
plt.title('Input Stimulus'); plt.ylabel('Concentration')
plt.subplot(312)
raster(data_o.T[osn_ind, :])
plt.title('Spikes Generated by OSNs'); plt.ylabel('OSN #');
ax = plt.gca()
ax.set_ylim(np.min(osn_ind_labels), np.max(osn_ind_labels))
ax.xaxis.set_major_formatter(ticker.FuncFormatter(fmt))
ax.yaxis.set_major_locator(ticker.MultipleLocator(base=5.0))
plt.subplot(313)
raster(data_o.T[pn_ind, :])
plt.title('Spikes Generated by PNs'); plt.ylabel('PN #');
ax = plt.gca()
ax.set_ylim(np.min(pn_ind_labels)-0.5, np.max(pn_ind_labels)+0.5)
ax.xaxis.set_major_formatter(ticker.FuncFormatter(fmt))
ax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))
plt.xlabel('time (s)')
plt.subplots_adjust()
f.savefig('olfactory_output.png')
Explanation: Next, we display the input odorant concentration profile and the spikes produced by the 25 OSNs and 3 PNs associated with glomerulus DA1 in the model:
End of explanation |
10,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data reduction for paleomagnetic data aboard the JOIDES Resolution
This notebook is for people wanting to download and manipulate data from an IODP Expedition using data in the LIMS Online Repository. The basic procedure is as follows. This notebook will guide you through the process step-by-step.
Import the required python packages and set up the desired directory structure for each HOLE.
Download the Section Summary table and put the .csv file in the HOLE working directory
Download the Sample Report table for any discrete samples taken (select Sample Type - CUBE and Test code PMAG) and put them in the HOLE working directory
Download the SRM data for the hole and put them in the SRM working directory in the HOLE working directory
Download the SRM discrete measurement data for the hole and put them in the SRM_discrete sample working directory.
Download the JR6 data and put them in the JR6 working directory
Download the KLY4S data and put them in the KLY4S working directory.
If you want to edit the archive half data for coring or lithologic disturbances
Step1: <a id='sample'></a>
Make the sample tables
Download the Sample Report and Section Summary tables (when available) for the HOLE from the LIMS online repository at http
Step2: <a id='archives'></a>
Convert the SRM archive half data for the Hole
download data for a single hole from LORE as a csv file. edit the file name in the preliminaries cell
put the file into the HOLE/SRM_archive_data directory in the HOLE directory
edit the composite depth column header (comp_depth_key) if desired.
if you have been busy measuring archives, writing the measurement files can take awhile so be patient.
Step3: Editing of SRM archive data.
filter for desired demag step (set in the preliminaries cell).
remove data from within 80 cm of core tops and 10 cm from section ends.
if desired
Step4: <a id='discretes'></a>
Convert SRM discrete sample data to MagIC
Step5: for OFFLINE treatments
Step6: To make some quickie zijderveld plots in the notebook set the following the True. To save all the plots, set save_plots to True
Step7: If you did a bunch of full demagnetizations, set max_field to your maximum af field. To save your plots, set save_plots to True. To change the output format, change 'svg' to whatever you want ('pdf','eps','png').
Step8: Import the JR6 data.
download the JR6 data from LIMS and put it in the JR6_data folder in your working directory.
edit the file name in the preliminaries cell to reflect the actual file name.
execute the two cells in order.
Step9: <a id='aniso'></a>
AMS data
Convert AMS data to MagIC
Download KAPPABRIDGE expanded magnetic susceptibility data from the Lims Online Repository
place the downloaded .csv file in the KLY4S_data in the HOLE directory.
change kly4s_file to the correct file name in the preliminaries cell
Step10: To make a depth plot of your AMS data, change the False to True in the cell below and run it.
Step11: This way makes equal area plots in core coordinates.... To run it, set the False to True.
To save the plots, set save_plots to True. For different options, try help(ipmag.aniso_magic_nb).
Step12: <a id='plots'></a>
Downhole Plots
Fill in the section summary file in the preliminaries folder and run the cells below
Step13: <a id='upload'></a>
Tidy up for MagIC
Combine all the MagIC files for uploading to MagIC (http | Python Code:
# import a bunch of packages for use in the notebook
import pmagpy.pmag as pmag # a bunch of PmagPy modules
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import pmagpy.contribution_builder as cb
from pmagpy import convert_2_magic as convert # conversion scripts for many lab formats
from pmagpy import iodp_funcs as iodp_funcs # functions for dealing with LIMS data
import matplotlib.pyplot as plt # our plotting buddy
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
%matplotlib inline
from importlib import reload
import warnings
warnings.filterwarnings("ignore")
meas_files,spec_files,samp_files,site_files=[],[],[],[] # holders for magic file names
import os
# Modify these for your expedition
exp_name,exp_description="","" # e.g., 'IODP Expedition 382','Iceberg Alley'
# Edit these for each hole. In the following, HOLE refers to the current hole.
hole="" # e.g., U999A
# edit these for the current hole - get it from Hole summary on LORE
hole_lat,hole_lon=0+0/60,0+0/60 # e.g., -57+26.5335/60,-43+21.4723/60
gad_inc=pmag.pinc(hole_lat) # geocentric axial dipole inclination for hole
demag_step=0.015 # choose the demagnetization step (in tesla) for downhole plots
# set up the directory structure for this hole
# these are default locations and variables - do not change them.
jr6_dir=hole+'/JR6_data'
kly4s_dir=hole+'/KLY4S_data'
srm_archive_dir=hole+'/SRM_archive_data'
srm_discrete_dir=hole+'/SRM_discrete_data'
magic_dir=hole+'/'+hole+'_MagIC'
dscr_file="" # if no discrete samples, this stays blank
# set up the directory structure
if hole: # checks if the hole name has been set.
if hole not in os.listdir(): # checks if the directory structure exists, otherwise this will create it
os.mkdir(hole)
os.mkdir(jr6_dir)
os.mkdir(kly4s_dir)
os.mkdir(srm_archive_dir)
os.mkdir(srm_discrete_dir)
os.mkdir(magic_dir)
os.mkdir('Figures')
#After mkdir has been run, you can
# 1) download all the files you need
# 2) put them in the correct folders as instructed below. Set the file names here:
# 3) edit the file names for the downloaded files.
# Put these downloaded file into the HOLE directory (after mkdir has been executed)
# required for downhole direction plot. Put the file in the HOLE directory
section_summary_file="" # set this to the downloaded summary file (e.g., "Section Summary_15_5_2019.csv")
if section_summary_file: # add the path to the summary file, if set.
summary_file=hole+'/'+section_summary_file
# required for downhole AMS plot. Put the file in the HOLE directory
core_summary_file="" # set this to the downloaded Core summary file (e.g., "Core Summary_15_4_2019.csv")
# required for unpacking LIMS discrete sample data. Put the file in the HOLE directory
samp_file="" # set this to your sample file (e.g., samp_file='samples_5_4_2019.csv' )
# Put these files into the designated subdirectories in the HOLE directory:
# required for unpacking SRM archive measurements file. It should be in HOLE/SRM_archive_data/
srm_archive_file="" # set this to your srm archive file from LIMS (e.g., "srmsection_1_4_2019.csv")
# required for unpacking SRM discrete measurements file. It should be in HOLE/SRM_discrete_data/
srm_discrete_file= '' # SRM discrete measurements file (e.g., "srmdiscrete_7_4_2019.csv")
# required for unpacking SRM discrete measurements with OFFLINE TREATMENTS. Put in: HOLE/SRM_archive_data/
# NB: you also need the srm_discrete_file set
dscr_ex_file='' # SRM discrete extended file (e.g., "ex-srm_26_4_2019.csv")
# needed for unpacking JR-6 measurement data. Put the file in HOLE/JR6_data/ directory
jr6_file='' # JR6 data file from LORE (e.g., "spinner_1_4_2019.csv")
# required for unpackying Kappabridge data. be sure to download the expanded file from LORE.
# Put it in HOLE/KLY4S_data/
kly4s_file='' # set this to the downloaded file (e.g., "ex-kappa_15_4_2019.csv")
Explanation: Data reduction for paleomagnetic data aboard the JOIDES Resolution
This notebook is for people wanting to download and manipulate data from an IODP Expedition using data in the LIMS Online Repository. The basic procedure is as follows. This notebook will guide you through the process step-by-step.
Import the required python packages and set up the desired directory structure for each HOLE.
Download the Section Summary table and put the .csv file in the HOLE working directory
Download the Sample Report table for any discrete samples taken (select Sample Type - CUBE and Test code PMAG) and put them in the HOLE working directory
Download the SRM data for the hole and put them in the SRM working directory in the HOLE working directory
Download the SRM discrete measurement data for the hole and put them in the SRM_discrete sample working directory.
Download the JR6 data and put them in the JR6 working directory
Download the KLY4S data and put them in the KLY4S working directory.
If you want to edit the archive half data for coring or lithologic disturbances:
Download the core disturbance info from Desklogic
go to the Desklogic computer station and open DeskLogic (little yellow square)
select the 'macroscopic' template
select sample: hole, Archive, section half
download
export: include classification, Data and save on the Desktop
if the network drives are not available, right click on the double square icon on bar
switch the login profile to scientist and login with OES credentials (user name/email password)
choose scientist and map the drives. they should be now available to windows explorer
copy to your HOLE working directory.
If you want to use Xray information to edit your archive half data: fill in the XRAY disturbance summary file and put it into the HOLE working directory [template is in data_files/iodp_magic]
To start processing data from a single HOLE:
Duplicate this notebook with the HOLE name (e.g., U999A) by first making a copy, then renaming it under the 'File' menu. Follow the instructions below.
For HELP:
- for help on options in any python function, you can type:
help(MODULE.FUNCTION) in the cell below. For example click on the cell below for information on how to use convert.iodp_samples_csv. This works for any python function.
- email [email protected]. Incluede a description of your problem, a screen shot of the error if appropriate and an example data file that is causing the difficulty (be mindful of embargo issues).
For a worked example (using FAKE data, see data_files/iodp_magic/U999A.ipynb)
Table of Contents
Preliminaries : import required packages, set up directory structure and set file names
Make sample tables : parse downloaded sample table to standard format
Archive measurements : parse downloaded srm archive measurements and perform editing functions
Discrete sample measurements : parse downloaded discrete sample data
Downhole plots : make downhole plots of remanence
Anisotropy of magnetic susceptibility : plots of AMS
Prepare files for uploading to MagIC : you can upload data to the MagIC database as a private contribution. Then when you have published your data, you can activate your contribution with the DOI of your publication.
<a id='preliminaries'></a>
Preliminaries
In the cell below, edit the HOLE name and set the HOLE latitude and longitude where HOLE stands for the hole name. The cell below will do this for you.
edit the file names as you download them from LORE
Every time you open this notebook, you must click on the cell below and then click 'Run' in the menu above to execute it.
End of explanation
# Make sure the sample file is in your HOLE directory and the filename set in the Preliminaries
# Note: this program will not run if the file is in use
if samp_file:
comp_depth_key='Top depth CSF-B (m)'
# do the heavy lifting:
convert.iodp_samples_csv(samp_file,input_dir_path=hole,spec_file='lims_specimens.txt',\
samp_file='lims_samples.txt',site_file='lims_sites.txt',\
dir_path=magic_dir,comp_depth_key=comp_depth_key,\
exp_name=exp_name,exp_desc=exp_description,lat=hole_lat,\
lon=hole_lon)
# this collects the file names that were created so they can be combined with others, e.g., those
# from the archive half measurements which are not in the sample table.
if 'lims_specimens.txt' not in spec_files:spec_files.append('lims_specimens.txt')
if 'lims_samples.txt' not in samp_files:samp_files.append('lims_samples.txt')
if 'lims_sites.txt' not in site_files:site_files.append('lims_sites.txt')
# do it again to make copies for use with demag_gui
convert.iodp_samples_csv(samp_file,input_dir_path=hole,\
dir_path=magic_dir,comp_depth_key=comp_depth_key,\
exp_name=exp_name,exp_desc=exp_description,lat=hole_lat,\
lon=hole_lon)
Explanation: <a id='sample'></a>
Make the sample tables
Download the Sample Report and Section Summary tables (when available) for the HOLE from the LIMS online repository at http://web.ship.iodp.tamu.edu/LORE/ and save them as csv files.
Put the two .csv files in the HOLE directory created above.
Edit the name of the sample .csv file in prelimaries cell and the 'secondary depth' column that you selected when downloading (the default is CSF-B as below) in the cell below.
Execute the cell to populate the MagIC meta-data tables. The depth information ends up in the lims_sites.txt table in the HOLE_MagIC directory, for example and gets used to create downhole plots.
Run the following cell to create the samples, sites, and locations tables for the hole.
End of explanation
# Fill in the name of your srm archive half measruement file (e.g., samp_file='samples_5_4_2019.csv' )
# Make sure the archive measurement file is in your HOLE directory.
# Note: this program will not run if the file is in use
if srm_archive_file:
comp_depth_key='Depth CSF-B (m)'
convert.iodp_srm_lore(srm_archive_file,meas_file='srm_arch_measurements.txt', comp_depth_key=comp_depth_key,\
dir_path=magic_dir,input_dir_path=srm_archive_dir,lat=hole_lat,lon=hole_lon)
if 'srm_arch_measurements.txt' not in meas_files:meas_files.append('srm_arch_measurements.txt')
if 'srm_arch_specimens.txt' not in spec_files:spec_files.append('srm_arch_specimens.txt')
if 'srm_arch_samples.txt' not in samp_files:samp_files.append('srm_arch_samples.txt')
if 'srm_arch_sites.txt' not in site_files:site_files.append('srm_arch_sites.txt')
Explanation: <a id='archives'></a>
Convert the SRM archive half data for the Hole
download data for a single hole from LORE as a csv file. edit the file name in the preliminaries cell
put the file into the HOLE/SRM_archive_data directory in the HOLE directory
edit the composite depth column header (comp_depth_key) if desired.
if you have been busy measuring archives, writing the measurement files can take awhile so be patient.
End of explanation
# to execute this cell, set False to True. turn it back to False so you don't rerun this by accident.
remove_ends=True
remove_desclogic_disturbance=False
remove_xray_disturbance=False
core_top=80 # remove top 80 cm of core top - change as desired
section_ends=10 # remove 10 cm from either end of the section - change as desired
if False:
arch_demag_step=iodp_funcs.demag_step(magic_dir,hole,demag_step) # pick the demag step
if remove_ends:
noends=iodp_funcs.remove_ends(arch_demag_step,hole,\
core_top=core_top,section_ends=section_ends) # remove the ends
else:
noends=arch_demag_step
if remove_desclogic_disturbance:
nodist=iodp_funcs.remove_disturbance(noends,hole) # remove coring disturbances
else:
nodist=noends
if remove_xray_disturbance:
no_xray_df=iodp_funcs.no_xray_disturbance(nodist,hole)
else:
no_xray_df=nodist
adj_dec_df,core_dec_adj=iodp_funcs.adj_dec(no_xray_df,hole)
Explanation: Editing of SRM archive data.
filter for desired demag step (set in the preliminaries cell).
remove data from within 80 cm of core tops and 10 cm from section ends.
if desired: (set nodist=True), remove from "disturbed" intervals labled "high" from DescLogic.
go to DescLogic and export the list of disturbances for the hole.
put this in HOLE_disturbances.xlsx in the HOLE directory. note that HOLE is your hole name, set in the "preliminaries" cell.
remove data from disturbed intervals based on the Xrays. You have to create the Xray data file yourself. There is a template for this that must be followed in PmagPy/data_files/iodp_magic
adjust the data to average normal dec=90
End of explanation
if srm_discrete_file:
convert.iodp_dscr_lore(srm_discrete_file,meas_file='srm_dscr_measurements.txt', \
dir_path=magic_dir,input_dir_path=srm_discrete_dir,spec_file='lims_specimens.txt')
if 'srm_dscr_measurements.txt' not in meas_files:meas_files.append('srm_dscr_measurements.txt')
dscr_file='srm_dscr_measurements.txt'
Explanation: <a id='discretes'></a>
Convert SRM discrete sample data to MagIC:
download data for a single hole from LORE as a csv file.
for OFFLINE treatments (ARM,IRM, DTECH AF, thermal), download both the
"standard" and the extended file names
put the file into the HOLE/SRM_discrete_data directory in the HOLE directory
for "regular" SRM files (no offline treaments) edit the file name and execute the cell below:
filenames are set in the preliminaries cell
End of explanation
if dscr_ex_file and srm_discrete_file:
convert.iodp_dscr_lore(srm_discrete_file,dscr_ex_file=dscr_ex_file,meas_file='srm_dscr_measurements.txt', \
dir_path=magic_dir,input_dir_path=srm_discrete_dir,spec_file='lims_specimens.txt',\
offline_meas_file='srm_dscr_offline_measurements.txt')
if 'srm_dscr_measurements.txt' not in meas_files:meas_files.append('srm_dscr_measurements.txt')
if 'srm_dscr_offline_measurements.txt' not in meas_files:meas_files.append('srm_dscr_offline_measurments.txt')
ipmag.combine_magic(['srm_dscr_measurements.txt','srm_dscr_offline_measurements.txt'],
outfile='dscr_measurements.txt',dir_path=magic_dir)
dscr_file='dscr_measurements.txt'
Explanation: for OFFLINE treatments: specify the extended discrete srm file name.
NB: for this to work properly, you must follow these conventions when running the SRM in
Offline mode:
put these in your comment field in the IMS-10 program for discrete samples:
- for NRMs:
NRM
- for AF demag at 10 mT (for example)
AF:10
- for thermal at 200 C
T:200
- for ARM with 100mT AC and 50 uT DC:
ARM:100:.05
- for IRM in 1000 mT
IRM:1000
set the filenames in the preliminaries cell
End of explanation
if dscr_file:
ipmag.zeq_magic(meas_file=dscr_file,\
spec_file='lims_specimens.txt',input_dir_path=magic_dir,n_plots="all",save_plots=False)
Explanation: To make some quickie zijderveld plots in the notebook set the following the True. To save all the plots, set save_plots to True
End of explanation
cnt=1
max_field=0 # set this to your peak field (in tesla) to execute this
if max_field:
srm_dscr_df=pd.read_csv(magic_dir+'/'+dscr_file,sep='\t',header=1)
srm_dmag=srm_dscr_df[srm_dscr_df.treat_ac_field>=max_field] # find all the demag specimens
spc_list=srm_dmag.specimen.unique()
for spc in spc_list:
ipmag.zeq_magic(meas_file=dscr_file,specimen=spc,fignum=cnt,\
spec_file='lims_specimens.txt',input_dir_path=magic_dir,save_plots=False,
fmt='svg')
cnt+=3;
Explanation: If you did a bunch of full demagnetizations, set max_field to your maximum af field. To save your plots, set save_plots to True. To change the output format, change 'svg' to whatever you want ('pdf','eps','png').
End of explanation
if jr6_file:
convert.iodp_jr6_lore(jr6_file,meas_file='jr6_measurements.txt',dir_path=magic_dir,\
input_dir_path=jr6_dir,spec_file='lims_specimens.txt',noave=False)
if 'jr6_measurements.txt' not in meas_files:meas_files.append('jr6_measurements.txt')
dscr_file='jr6_measurements.txt'
# combine srm and jr6 data, we can combine the data here:
if jr6_file and srm_discrete_file:
ipmag.combine_magic(['srm_dscr_measurements.txt','jr6_measurements.txt'],
outfile='dscr_measurements.txt',dir_path=magic_dir)
dscr_file='dscr_measurements.txt'
max_field=0 # set this to your peak field (in tesla) to execute this
if max_field:
cnt=1
dscr_df=pd.read_csv(magic_dir+'/dscr_measurements.txt',sep='\t',header=1)
dmag_df=dscr_df[dscr_df.treat_ac_field>=max_field] # find all the demag specimens
spc_list=dmag_df.specimen.unique()
for spc in spc_list:
ipmag.zeq_magic(meas_file='dscr_measurements.txt',specimen=spc,fignum=cnt,\
spec_file='lims_specimens.txt',input_dir_path=magic_dir,save_plots=False)
cnt+=3;
Explanation: Import the JR6 data.
download the JR6 data from LIMS and put it in the JR6_data folder in your working directory.
edit the file name in the preliminaries cell to reflect the actual file name.
execute the two cells in order.
End of explanation
if kly4s_file:
convert.iodp_kly4s_lore(kly4s_file, meas_out='kly4s_measurements.txt',
spec_infile='lims_specimens.txt', spec_out='kly4s_specimens.txt',
dir_path=magic_dir, input_dir_path=kly4s_dir,actual_volume=7)
if 'kly4s_measurements.txt' not in meas_files:meas_files.append('kly4s_measurements.txt')
if 'kly4s_specimens.txt' not in meas_files:meas_files.append('kly4s_specimens.txt')
Explanation: <a id='aniso'></a>
AMS data
Convert AMS data to MagIC
Download KAPPABRIDGE expanded magnetic susceptibility data from the Lims Online Repository
place the downloaded .csv file in the KLY4S_data in the HOLE directory.
change kly4s_file to the correct file name in the preliminaries cell
End of explanation
if False:
ipmag.ani_depthplot(spec_file='kly4s_specimens.txt', dir_path=magic_dir,
samp_file='lims_samples.txt',site_file='lims_sites.txt',
dmin=-1,dmax=-1,meas_file='kly4s_measurements.txt',
sum_file=core_summary_file)
plt.savefig('Figures/'+hole+'_anisotropy_xmastree.pdf')
Explanation: To make a depth plot of your AMS data, change the False to True in the cell below and run it.
End of explanation
if False:
ipmag.aniso_magic_nb(infile=magic_dir+'/kly4s_specimens.txt',\
verbose=False,save_plots=False,ihext=False,iboot=True,ivec=True)
Explanation: This way makes equal area plots in core coordinates.... To run it, set the False to True.
To save the plots, set save_plots to True. For different options, try help(ipmag.aniso_magic_nb).
End of explanation
if dscr_file: # checks for discrete measurements, adds depths, etc.
srm_dscr_df=pd.read_csv(magic_dir+'/'+dscr_file,sep='\t',header=1)
dscr_df=srm_dscr_df.copy(deep=True)
dscr_df=dscr_df[srm_dscr_df['treat_ac_field']==demag_step]
depth_data=pd.read_csv(magic_dir+'/lims_sites.txt',sep='\t',header=1)
depth_data['specimen']=depth_data['site']
depth_data=depth_data[['specimen','core_depth']]
depth_data.sort_values(by='specimen')
dscr_df=pd.merge(dscr_df,depth_data,on='specimen')
else:
dscr_df="" # if no discrete samples.
if section_summary_file:
arch_demag_step=pd.read_csv(hole+'/'+hole+'_arch_demag_step.csv')
adj_dec_df=pd.read_csv(hole+'/'+hole+'_dec_adjusted.csv')
#Let's get the section boundaries.
# edit the file name for the Section Summary table downloaded from LIMS
summary_df=pd.read_csv(summary_file)
summary_df.dropna(subset=['Sect'],inplace=True)
if type(summary_df.Sect)!='str':
summary_df.Sect=summary_df.Sect.astype('int64')
summary_df.Sect=summary_df.Sect.astype('str')
summary_df=summary_df[summary_df['Sect'].str.contains('CC')==False]
max_depth=arch_demag_step['core_depth'].max()
summary_df=summary_df[summary_df['Top depth CSF-A (m)']<max_depth]
sect_depths=summary_df['Top depth CSF-A (m)'].values
summary_df['Core']=summary_df['Core'].astype('int')
labels=summary_df['Core'].astype('str')+summary_df['Type']+'-'+summary_df['Sect'].astype('str')
arch_demag_step=pd.read_csv(hole+'/'+hole+'_arch_demag_step.csv')
interval=100 # how to divide up the plots
depth_min,depth_max=0,interval
fignum=1
while depth_min<arch_demag_step.core_depth.max():
iodp_funcs.make_plot(arch_demag_step,adj_dec_df,sect_depths,hole,\
gad_inc,depth_min,depth_max,labels,spec_df=dscr_df,fignum=fignum)
depth_min+=interval
depth_max+=interval
fignum+=1
Explanation: <a id='plots'></a>
Downhole Plots
Fill in the section summary file in the preliminaries folder and run the cells below
End of explanation
if False: # when ready to upload, you can set this to true to make upload file.
ipmag.combine_magic(spec_files,outfile='specimens.txt',dir_path=magic_dir)
ipmag.combine_magic(samp_files,outfile='samples.txt',dir_path=magic_dir)
ipmag.combine_magic(site_files,outfile='sites.txt',dir_path=magic_dir)
ipmag.combine_magic(meas_files,outfile='measurements.txt',dir_path=magic_dir)
ipmag.upload_magic()
Explanation: <a id='upload'></a>
Tidy up for MagIC
Combine all the MagIC files for uploading to MagIC (http://earthref.org/MagIC).
End of explanation |
10,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probabilistic Linear Regression Models
With the OLS (Ordinary Least Squares) method we can compute the optimal weights without making any probabilistic assumptions about the data. In that case, however, there is no way to tell how reliable or stable the computed weights are. One attempt to assess this is the bootstrapping method.
부트스트래핑
Bootstrapping is a way to find out how much the regression results would be affected if the data used for the regression were different.
If we assume the data are a sample generated from a random variable, or a sample drawn from a larger population, then the regression results depend on the particular sample used for the analysis. If we obtained an additional sample and ran the regression again, the result, i.e. the weight vector, would be different.
In practice, however, it is hard to obtain additional data, so the bootstrapping method re-samples the existing data. Re-sampling selects $D$ data points again from the existing $D$ data points, allowing duplicates (resampling with replacement). In theory this yields $2^D$ new sample sets.
Let's try bootstrapping directly. First we generate 100 synthetic data points and run a regression on them.
Step1: Next, we select N data points from this data with replacement and run the regression again. In theory there are $2^{100}$ possible cases, but we only repeat this 1,000 times.
Step2: The full set of weights is shown as a histogram below.
Step3: The mean and variance are as follows.
Step4: For the constant term the mean of the weight is -1.6, but since the standard deviation is $\sqrt{4.81}=2.19$ we cannot rule out the possibility that it is zero.
Let's compare this result with the statsmodels regression report.
Step5: The std err column of the report shows a standard deviation of 2.163, and the confidence interval at the end is -5.920 to 2.663. This is similar to the result obtained by bootstrapping. These values are computed from the probabilistic assumptions described next.
확률론적 선형 회귀 모형
In the probabilistic linear regression model we assume that $y$ is a sample generated from a random variable and that the following conditions hold.
Linear normal-distribution assumption
The dependent variable $y$ is a normally distributed random variable with mean $w^Tx$ and variance $\sigma^2$.
$$ p(y \mid x, \theta) = \mathcal{N}(y \mid w^Tx, \sigma^2 ) $$
Therefore the disturbance $ \epsilon = y-w^Tx $ is also a normally distributed random variable.
$$ p(\epsilon \mid \theta) = \mathcal{N}(0, \sigma^2 ) $$
Exogeneity assumption
The disturbance $\epsilon$ and the independent variable $x$ are independent of each other.
$$ \text{E}[\epsilon \mid x] = 0$$
Conditional independence assumption
The disturbances $\epsilon$ are conditionally independent given $x$.
$$ \text{Cov}[\epsilon_i, \epsilon_j \mid x] = 0$$
Linear regression using MLE
Let's find the weight vector $w$ using the probabilistic linear regression model above and MLE (Maximum Likelihood Estimation).
Likelihood는 다음과 같다.
$$
\begin{eqnarray}
p(y_{1 | Python Code:
from sklearn.datasets import make_regression
X0, y, coef = make_regression(n_samples=100, n_features=1, noise=20, coef=True, random_state=0)
dfX0 = pd.DataFrame(X0, columns=["X1"])
dfX = sm.add_constant(dfX0)
dfy = pd.DataFrame(y, columns=["y"])
model = sm.OLS(dfy, dfX)
result = model.fit()
print(result.params)
Explanation: Probabilistic linear regression model
With the OLS (Ordinary Least Squares) method we can compute the optimal weights without making any probabilistic assumptions about the data. In that case, however, there is no way to check how reliable or stable the estimated weights are. One attempt to check this is the bootstrapping method.
Bootstrapping
Bootstrapping is a way to find out how much the result of a regression analysis would change if the data used for the regression were different.
If we assume that the data are a sample generated from a random variable, or a sample drawn from a larger population, then the regression result depends on the particular sample used in the analysis. If we obtained an additional sample and ran the regression again, the result, that is, the weight vector, would be different.
In practice it is hard to obtain additional data, so the bootstrap instead re-samples the existing data. Re-sampling draws $D$ data points again from the existing $D$ data points, allowing duplicates (resampling with replacement). In theory this yields $2^D$ new sample sets.
Let us try bootstrapping directly. First we generate 100 artificial data points and run a regression on them.
End of explanation
N = 1000
params_c = np.zeros(N)
params_x1 = np.zeros(N)
for i in range(N):
idx = np.random.choice(len(dfy), len(dfy), replace=True)
    dfX2 = dfX.iloc[idx, :]  # .ix was removed from pandas; idx holds positions, so use .iloc
    dfy2 = dfy.iloc[idx]
r = sm.OLS(dfy2, dfX2).fit()
params_c[i] = r.params.const
params_x1[i] = r.params.X1
Explanation: Next we select N data points from this data, allowing duplicates, and run the regression again. In theory there are $2^{100}$ possible cases, but we repeat this only 1,000 times.
End of explanation
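A quick way to summarize these bootstrap draws is the percentile confidence interval. The snippet below is an illustrative sketch added here (it is not part of the original notebook); it only uses numpy, which is already available as np.
# 95% percentile bootstrap confidence intervals for both coefficients
ci_const = np.percentile(params_c, [2.5, 97.5])
ci_x1 = np.percentile(params_x1, [2.5, 97.5])
print("const: mean=%.3f, 95%% CI=[%.3f, %.3f]" % (params_c.mean(), ci_const[0], ci_const[1]))
print("X1   : mean=%.3f, 95%% CI=[%.3f, %.3f]" % (params_x1.mean(), ci_x1[0], ci_x1[1]))
These intervals can be compared directly with the confidence intervals in the statsmodels report further below.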
ax1 = plt.subplot(121)
sns.distplot(params_c, ax=ax1)
plt.axvline(params_c.mean(), c='r')
plt.title("const parameter")
ax2 = plt.subplot(122)
sns.distplot(params_x1, ax=ax2)
plt.axvline(params_x1.mean(), c='r')
plt.title("x1 parameter")
plt.show()
Explanation: The whole set of weights can be shown as histograms as follows.
End of explanation
sp.stats.describe(params_c)
sp.stats.describe(params_x1)
Explanation: The means and variances are as follows.
End of explanation
print(result.summary())
Explanation: For the constant term the mean is -1.6, but the standard deviation is $\sqrt{4.81}=2.19$, so we cannot rule out the possibility that it is zero.
Let us compare this result with the statsmodels regression report.
End of explanation
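For reference, the same standard errors and confidence intervals can also be pulled out of the fitted statsmodels result programmatically. A small added example (not in the original notebook):
# standard errors and 95% confidence intervals of the OLS coefficients
print(result.bse)         # standard errors of the estimated coefficients
print(result.conf_int())  # 95% confidence intervals by default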
sp.stats.probplot(result.resid, plot=plt)
plt.show()
plt.plot(X0, result.resid, 'o')
plt.axhline(y=0, c='k')
plt.xlabel("X1")
plt.ylabel("Residual")
plt.show()
import statsmodels.stats.api as sms
test = sms.omni_normtest(result.resid)
for x in zip(['Chi^2', 'P-value'], test):
print("%-12s: %6.3f" % x)
import statsmodels.stats.api as sms
test = sms.jarque_bera(result.resid)
for x in zip(['Jarque-Bera', 'P-value', 'Skew', 'Kurtosis'], test):
print("%-12s: %6.3f" % x)
Explanation: The std err column of the report shows a standard deviation of 2.163, and the confidence interval at the end is -5.920 ~ 2.663. This is similar to the result obtained by bootstrapping. These values are computed from the probabilistic assumptions explained next.
Probabilistic linear regression model
The probabilistic linear regression model assumes that $y$ is a sample generated from a random variable and that the following conditions hold.
Linear normality assumption
The dependent variable $y$ is a normally distributed random variable with mean $w^Tx$ and variance $\sigma^2$.
$$ p(y \mid x, \theta) = \mathcal{N}(y \mid w^Tx, \sigma^2 ) $$
Therefore the disturbance $ \epsilon = y-w^Tx $ is also a normally distributed random variable.
$$ p(\epsilon \mid \theta) = \mathcal{N}(0, \sigma^2 ) $$
Exogeneity assumption
The disturbance $\epsilon$ and the independent variable $x$ are independent of each other.
$$ \text{E}[\epsilon \mid x] = 0$$
Conditional independence assumption
The disturbances $\epsilon$ are conditionally independent given $x$.
$$ \text{Cov}[\epsilon_i, \epsilon_j \mid x] = 0$$
Linear regression with MLE
Using the probabilistic linear regression model above and MLE (Maximum Likelihood Estimation), let us find the weight vector $w$.
The likelihood is as follows.
$$
\begin{eqnarray}
p(y_{1:N} \,\big|\, x_{1:N}, \theta)
&=& \prod_{i=1}^N N(y_i \,\big|\, w^T x_i , \sigma^2) \
&=& \prod_{i=1}^N \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(y_i-w^T x_i)^2}{2\sigma^2} \right\} \
\end{eqnarray}
$$
Taking the logarithm to simplify the calculation gives the following.
$$
\begin{eqnarray}
\text{LL}
&=& \log p(y_{1:N} \,\big|\, x_{1:N}, \theta) \
&=& \log \prod_{i=1}^N \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(y_i-w^T x_i)^2}{2\sigma^2} \right\} \
&=& -\dfrac{1}{2\sigma^2} \sum_{i=1}^N (y_i-w^T x_i)^2 - \dfrac{N}{2} \log{2\pi\sigma^2} \
\end{eqnarray}
$$
Expressed in matrix form this becomes
$$
\text{LL} = -C_1 (y - Xw)^T(y-Xw) + C_0 = -C_1(w^TX^TXw -2 y^TXw + y^Ty) + C_0
$$
$$
C_1 = \dfrac{1}{2\sigma^2}
$$
$$
C_0 = -\dfrac{N}{2} \log{2\pi\sigma^2}
$$
Optimizing this gives the same result as OLS.
$$
\dfrac{\partial}{\partial w} \text{LL} \propto - 2X^TX \hat{w} + 2X^Ty = 0
$$
$$
\hat{w} = (X^TX)^{-1}X^T y
$$
Distribution of the residuals
According to the probabilistic linear regression model above, the residual $e = y - \hat{w}^Tx$ also follows a normal distribution. This can be shown as follows.
In this model the disturbance $\epsilon$ and the residual $e$ are related in the following way.
$$ \hat{y} = X\hat{w} = X (X^TX)^{-1}X^T y = Hy $$
$$ e = y - \hat{y}= y - Hy = (I - H) y$$
Defining $M = I - H$,
$$ e = My = M (Xw + \epsilon) $$
From the optimality condition,
$$
X^TX \hat{w} - X^Ty = 0
$$
$$
X^T(X\hat{w} - y) = -X^Te = 0
$$
$$
X^TMy = 0
$$
Since this equation holds for every $y$,
$$
X^TM = 0
$$
Since $H$ is a symmetric matrix, $M = I - H$ is also symmetric, so
$$
MX = 0
$$
$$ e = MXw + M\epsilon = M\epsilon $$
In other words, the residual $e$ is a linear transform of the disturbance $\epsilon$. A linear transform of a normal distribution is again normal, so the residuals also follow a normal distribution.
End of explanation |
10,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style="text-align
Step1: <a id="ref3"></a>
Building a Graph
As we said before, TensorFlow works as a graph computational model. Let's create our first graph.
To create two source operations which will output numbers we will define two constants
Step2: After that, let's make an operation over these variables. The function tf.add() adds two elements (you could also use c = a + b).
Step3: Then TensorFlow needs to initialize a session to run our code. Sessions are, in a way, a context for creating a graph inside TensorFlow. Let's define our session
Step4: Let's run the session to get the result from the previous defined 'c' operation
Step5: Close the session to release resources
Step6: To avoid having to close sessions every time, we can define them in a with block, so after running the with block the session will close automatically
Step7: Even this silly example of adding 2 constants to reach a simple result defines the basis of TensorFlow. Define your edge (In this case our constants), include nodes (operations, like tf.add), and start a session to build a graph.
<a id="ref2"></a>
TensorFlow Basic Elements
Tensor
Variable
Operation
Session
Placeholder
Tensorboard
What is the meaning of Tensor?
<div class="alert alert-success alertsuccess" style="margin-top
Step8: <a id="ref5"></a>
Why Tensors?
The Tensor structure helps us by giving the freedom to shape the dataset the way we want.
And it is particularly helpful when dealing with images, due to the nature of how information in images are encoded,
Thinking about images, it's easy to understand that they have a height and a width, so it would make sense to represent the information contained in them with a two-dimensional structure (a matrix)... until you remember that images have colors, and to add information about the colors we need another dimension, and that's when tensors become particularly helpful.
Images are encoded into color channels, the image data is represented into each color intensity in a color channel at a given point, the most common one being RGB, which means Red, Blue and Green. The information contained into an image is the intensity of each channel color into the width and height of the image, just like this
Step9: Let's first create a simple counter, a variable that increases one unit at a time
Step10: Variables must be initialized by running an initialization operation after having launched the graph. We first have to add the initialization operation to the graph
Step11: We then start a session to run the graph, first initialize the variables, then print the initial value of the state variable, and then run the operation of updating the state variable and printing the result after each update
Step12: <a id="ref7"></a>
Placeholders
Now we know how to manipulate variables inside TensorFlow, but what about feeding data outside of a TensorFlow model?
If you want to feed data to a TensorFlow model from outside a model, you will need to use placeholders.
So what are these placeholders and what do they do?
Placeholders can be seen as "holes" in your model, "holes" which you will pass the data to, you can create them using <br/> <b>tf.placeholder(datatype)</b>, where <b>datatype</b> specifies the type of data (integers, floating points, strings, booleans) along with its precision (8, 16, 32, 64) bits.
The definition of each data type with the respective python sintax is defined as
Step13: And define a simple multiplication operation
Step14: Now we need to define and run the session, but since we created a "hole" in the model to pass the data, when we initialize the session we are obligated to pass an argument with the data, otherwise we would get an error.
To pass the data to the model we call the session with an extra argument <b> feed_dict</b> in which we should pass a dictionary with each placeholder name folowed by its respective data, just like this
Step15: Since data in TensorFlow is passed in form of multidimensional arrays we can pass any kind of tensor through the placeholders to get the answer to the simple multiplication operation
Step16: <a id="ref8"></a>
Operations
Operations are nodes that represent the mathematical operations over the tensors on a graph. These operations can be any kind of functions, like add and subtract tensor or maybe an activation function.
tf.matmul, tf.add, tf.nn.sigmoid are some of the operations in TensorFlow. These are like functions in python but operate directly over tensors and each one does a specific thing.
<div class="alert alert-success alertsuccess" style="margin-top
Step17: <a id="ref8"></a>
Tensorboard
TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs. TensorBoard currently supports five visualizations | Python Code:
import tensorflow as tf
Explanation: <div style="text-align:center"><img src = "https://www.tensorflow.org/_static/images/tensorflow/logo.png"></div>
<a id="ref2"></a>
How does TensorFlow work?
TensorFlow defines computations as Graphs, and these are made with operations (also know as “ops”). So, when we work with TensorFlow, it is the same as defining a series of operations in a Graph.
To execute these operations as computations, we must launch the Graph into a Session. The session translates and passes the operations represented into the graphs to the device you want to execute them on, be it a GPU or CPU.
For example, the image below represents a graph in TensorFlow. W, x and b are tensors over the edges of this graph. MatMul is an operation over the tensors W and x, after that Add is called and add the result of the previous operator with b.
<img src='https://ibm.box.com/shared/static/a94cgezzwbkrq02jzfjjljrcaozu5s2q.png'>
Importing TensorFlow
<p>To use TensorFlow, we need to import the library. We imported it and optionally gave it the name "tf", so the modules can be accessed by __tf.module-name__:
End of explanation
a = tf.constant([2])
b = tf.constant([3])
Explanation: <a id="ref3"></a>
Building a Graph
As we said before, TensorFlow works as a graph computational model. Let's create our first graph.
To create two source operations which will output numbers we will define two constants:
End of explanation
c = tf.add(a,b)
Explanation: After that, let's make an operation over these variables. The function tf.add() adds two elements (you could also use c = a + b).
End of explanation
session = tf.Session()
Explanation: Then TensorFlow needs to initialize a session to run our code. Sessions are, in a way, a context for creating a graph inside TensorFlow. Let's define our session:
End of explanation
result = session.run(c)
print(result)
Explanation: Let's run the session to get the result from the previous defined 'c' operation:
End of explanation
session.close()
Explanation: Close the session to release resources:
End of explanation
with tf.Session() as session:
result = session.run(c)
print(result)
Explanation: To avoid having to close sessions every time, we can define them in a with block, so after running the with block the session will close automatically:
End of explanation
Scalar = tf.constant([2])
Vector = tf.constant([5,6,2])
Matrix = tf.constant([[1,2,3],[2,3,4],[3,4,5]])
Tensor = tf.constant( [ [[1,2,3],[2,3,4],[3,4,5]] , [[4,5,6],[5,6,7],[6,7,8]] , [[7,8,9],[8,9,10],[9,10,11]] ] )
with tf.Session() as session:
result = session.run(Scalar)
print ("Scalar (1 entry):\n %s \n" % result)
result = session.run(Vector)
print ("Vector (3 entries) :\n %s \n" % result)
result = session.run(Matrix)
print ("Matrix (3x3 entries):\n %s \n" % result)
result = session.run(Tensor)
print ("Tensor (3x3x3 entries) :\n %s \n" % result)
Explanation: Even this silly example of adding 2 constants to reach a simple result defines the basis of TensorFlow. Define your edge (In this case our constants), include nodes (operations, like tf.add), and start a session to build a graph.
<a id="ref2"></a>
TensorFlow Basic Elements
Tensor
Variable
Operation
Session
Placeholder
Tensorboard
What is the meaning of Tensor?
<div class="alert alert-success alertsuccess" style="margin-top: 20px">
<font size = 3><strong>In TensorFlow all data is passed between operations in a computation graph, and these are passed in the form of Tensors, hence the name of TensorFlow.</strong></font>
<br>
<br>
The word __tensor__ from new latin means "that which stretches". It is a mathematical object that is named __tensor__ because an early application of tensors was the study of materials stretching under tension. The contemporary meaning of tensors can be taken as multidimensional arrays.
<p></p>
</div>
What are multidimensional arrays here?
<table style="width:100%">
<tr>
<td><b>Dimension</b></td>
<td><b>Physical Representation</b></td>
<td><b>Mathematical Object</b></td>
<td><b>In Code</b></td>
</tr>
<tr>
<td>Zero </td>
<td>Point</td>
<td>Scalar (Single Number)</td>
<td>[ 1 ]</td>
</tr>
<tr>
<td>One</td>
<td>Line</td>
<td>Vector (Series of Numbers) </td>
<td>[ 1,2,3,4,... ]</td>
</tr>
<tr>
<td>Two</td>
<td>Surface</td>
<td>Matrix (Table of Numbers)</td>
<td>[ [1,2,3,4,...], [1,2,3,4,...], [1,2,3,4,...],... ]</td>
</tr>
<tr>
<td>Three</td>
<td>Volume</td>
<td>Tensor (Cube of Numbers)</td>
<td>[ [[1,2,...], [1,2,...], [1,2,...],...], [[1,2,...], [1,2,...], [1,2,...],...], [[1,2,...], [1,2,...], [1,2,...] ,...]... ]</td>
</tr>
</table>
<a id="ref4"></a>
Defining multidimensional arrays using TensorFlow
Now we will try to define such arrays using TensorFlow:
End of explanation
state = tf.Variable(0)
Explanation: <a id="ref5"></a>
Why Tensors?
The Tensor structure helps us by giving the freedom to shape the dataset the way we want.
And it is particularly helpful when dealing with images, due to the nature of how information in images are encoded,
Thinking about images, it's easy to understand that they have a height and a width, so it would make sense to represent the information contained in them with a two-dimensional structure (a matrix)... until you remember that images have colors, and to add information about the colors we need another dimension, and that's when tensors become particularly helpful.
Images are encoded into color channels, the image data is represented into each color intensity in a color channel at a given point, the most common one being RGB, which means Red, Blue and Green. The information contained into an image is the intensity of each channel color into the width and height of the image, just like this:
<img src='https://ibm.box.com/shared/static/xlpv9h5xws248c09k1rlx7cer69y4grh.png'>
So the intensity of the red channel at each point with width and height can be represented into a matrix, the same goes for the blue and green channels, so we end up having three matrices, and when these are combined they form a tensor.
<a id="ref6"></a>
Variables
Now that we are more familiar with the structure of data, we will take a look at how TensorFlow handles variables.
To define variables we use the command tf.variable().
To be able to use variables in a computation graph it is necessary to initialize them before running the graph in a session. This is done by running tf.global_variables_initializer().
To update the value of a variable, we simply run an assign operation that assigns a value to the variable:
End of explanation
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)
Explanation: Let's first create a simple counter, a variable that increases one unit at a time:
End of explanation
init = tf.global_variables_initializer()
Explanation: Variables must be initialized by running an initialization operation after having launched the graph. We first have to add the initialization operation to the graph:
End of explanation
with tf.Session() as session:
session.run(init)
print(session.run(state))
for i in range(3):
session.run(update)
print(session.run(state))
Explanation: We then start a session to run the graph, first initialize the variables, then print the initial value of the state variable, and then run the operation of updating the state variable and printing the result after each update:
End of explanation
a=tf.placeholder(tf.float32)
Explanation: <a id="ref7"></a>
Placeholders
Now we know how to manipulate variables inside TensorFlow, but what about feeding data outside of a TensorFlow model?
If you want to feed data to a TensorFlow model from outside a model, you will need to use placeholders.
So what are these placeholders and what do they do?
Placeholders can be seen as "holes" in your model, "holes" which you will pass the data to, you can create them using <br/> <b>tf.placeholder(datatype)</b>, where <b>datatype</b> specifies the type of data (integers, floating points, strings, booleans) along with its precision (8, 16, 32, 64) bits.
The definition of each data type with the respective Python syntax is given below:
|Data type |Python type|Description|
| --------- | --------- | --------- |
|DT_FLOAT |tf.float32 |32 bits floating point.|
|DT_DOUBLE |tf.float64 |64 bits floating point.|
|DT_INT8 |tf.int8 |8 bits signed integer.|
|DT_INT16 |tf.int16 |16 bits signed integer.|
|DT_INT32 |tf.int32 |32 bits signed integer.|
|DT_INT64 |tf.int64 |64 bits signed integer.|
|DT_UINT8 |tf.uint8 |8 bits unsigned integer.|
|DT_STRING |tf.string |Variable length byte arrays. Each element of a Tensor is a byte array.|
|DT_BOOL |tf.bool |Boolean.|
|DT_COMPLEX64 |tf.complex64 |Complex number made of two 32 bits floating points: real and imaginary parts.|
|DT_COMPLEX128 |tf.complex128 |Complex number made of two 64 bits floating points: real and imaginary parts.|
|DT_QINT8 |tf.qint8 |8 bits signed integer used in quantized Ops.|
|DT_QINT32 |tf.qint32 |32 bits signed integer used in quantized Ops.|
|DT_QUINT8 |tf.quint8 |8 bits unsigned integer used in quantized Ops.|
<div style="text-align:center">[[Table Source]](https://www.tensorflow.org/versions/r0.9/resources/dims_types.html)</div>
So we create a placeholder:
End of explanation
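As an aside (an illustrative sketch added here, not part of the original notebook), a placeholder can also be given an explicit shape so that TensorFlow checks the fed data at run time:
# a placeholder for batches of 3-element float vectors; None leaves the batch size open
x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(x, axis=1)
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1, 2, 3], [4, 5, 6]]}))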
b=a*2
Explanation: And define a simple multiplication operation:
End of explanation
with tf.Session() as sess:
result = sess.run(b,feed_dict={a:3.5})
print (result)
Explanation: Now we need to define and run the session, but since we created a "hole" in the model to pass the data, when we initialize the session we are obligated to pass an argument with the data, otherwise we would get an error.
To pass the data to the model we call the session with an extra argument <b>feed_dict</b>, in which we should pass a dictionary with each placeholder name followed by its respective data, just like this:
End of explanation
dictionary={a: [ [ [1,2,3],[4,5,6],[7,8,9],[10,11,12] ] , [ [13,14,15],[16,17,18],[19,20,21],[22,23,24] ] ] }
with tf.Session() as sess:
result = sess.run(b,feed_dict=dictionary)
print (result)
Explanation: Since data in TensorFlow is passed in form of multidimensional arrays we can pass any kind of tensor through the placeholders to get the answer to the simple multiplication operation:
End of explanation
a = tf.constant([5])
b = tf.constant([2])
c = tf.add(a,b)
d = tf.subtract(a,b)
with tf.Session() as session:
result = session.run(c)
print ('c =: %s' % result)
result = session.run(d)
print ('d =: %s' % result)
Explanation: <a id="ref8"></a>
Operations
Operations are nodes that represent the mathematical operations over the tensors on a graph. These operations can be any kind of functions, like add and subtract tensor or maybe an activation function.
tf.matmul, tf.add, tf.nn.sigmoid are some of the operations in TensorFlow. These are like functions in python but operate directly over tensors and each one does a specific thing.
<div class="alert alert-success alertsuccess" style="margin-top: 20px">Other operations can be easily found in: https://www.tensorflow.org/versions/r0.9/api_docs/python/index.html</div>
End of explanation
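A couple more operations, just to illustrate the same pattern (a hedged sketch added here, not from the original notebook): tf.matmul works on 2-D tensors and tf.nn.sigmoid applies the logistic function element-wise.
m1 = tf.constant([[1., 2.], [3., 4.]])
m2 = tf.constant([[5., 6.], [7., 8.]])
product = tf.matmul(m1, m2)        # matrix product
activated = tf.nn.sigmoid(m1)      # element-wise sigmoid
with tf.Session() as session:
    print(session.run(product))
    print(session.run(activated))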
import tensorflow as tf
with tf.name_scope("Operations"):
with tf.name_scope("Scope_a"):
a = tf.add(1, 2, name="a")
b = tf.multiply(a, 3, name="b")
with tf.name_scope("Scope_b"):
c = tf.add(4, 5, name="c")
d = tf.multiply(c, 6, name="d")
with tf.name_scope("Scope_c"):
e = tf.multiply(4, 5, name="e")
f = tf.div(c, 6, name="f")
g = tf.add(b, d, name="g")
h = tf.multiply(g, f, name="h")
with tf.Session() as sess:
print(sess.run(h))
with tf.Session() as sess:
writer = tf.summary.FileWriter("/home/raghav/TECH/output4", sess.graph)
print(sess.run(h))
writer.close()
Explanation: <a id="ref8"></a>
Tensorboard
TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs. TensorBoard currently supports five visualizations: scalars, images, audio, histograms, and graphs. The computations you will use in TensorFlow for things such as training a massive deep neural network, can be fairly complex and confusing, TensorBoard will make this a lot easier to understand, debug, and optimize your TensorFlow programs.
This is what a tensorboard looks like:
<img src='https://learningtensorflow.com/images/ezgif.com-video-to-gif.gif'>
End of explanation |
10,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 1 Tutorial
GitHub Workflow and Goals for the Class
Getting Started
Ideally, you have already worked through the Getting Started page on the course GitHub repository. You will need a computer that is running git, Jupyter notebook, and has all the required packages installed in order to do the homework, and some of the in-class exercises. (The exercises are intended to be collaborative, so don't worry if you don't have a laptop - but do sit next to someone who does!) If you haven't installed the required software, do it now (although the myriad of python packages can wait).
To run the tutorial notebooks in class, make sure you have forked and git clone'd the course repository. You might need to git pull to get the current tutorial, since we probably uploaded it just before class.
Mega-important!
After pulling down the tutorial notebook, immediately make a copy. Then do not modify the original. Do your work in the copy. This will prevent the possibility of git conflicts should the version-controlled file change at any point in the future. (The same exhortation applies to homeworks.)
To modify the notebook, you'll need to have it open and running in Jupyter notebook locally. At this point, the URL in your browser window should say something like "http
Step1: This crazy try-except construction is our way of making sure the notebooks will work when completed without actually providing complete code. You can either write your code directly in the except block, or delete the try, exec and except lines entirely (remembering to unindent the remaining lines in that case, because python).
Step2: This cell just prints out the string my_goals. | Python Code:
class SolutionMissingError(Exception):
def __init__(self):
Exception.__init__(self,"You need to complete the solution for this code to work!")
def REPLACE_WITH_YOUR_SOLUTION():
raise SolutionMissingError
REMOVE_THIS_LINE = REPLACE_WITH_YOUR_SOLUTION
Explanation: Week 1 Tutorial
GitHub Workflow and Goals for the Class
Getting Started
Ideally, you have already worked through the Getting Started page on the course GitHub repository. You will need a computer that is running git, Jupyter notebook, and has all the required packages installed in order to do the homework, and some of the in-class exercises. (The exercises are intended to be collaborative, so don't worry if you don't have a laptop - but do sit next to someone who does!) If you haven't installed the required software, do it now (although the myriad of python packages can wait).
To run the tutorial notebooks in class, make sure you have forked and git clone'd the course repository. You might need to git pull to get the current tutorial, since we probably uploaded it just before class.
Mega-important!
After pulling down the tutorial notebook, immediately make a copy. Then do not modify the original. Do your work in the copy. This will prevent the possibility of git conflicts should the version-controlled file change at any point in the future. (The same exhortation applies to homeworks.)
To modify the notebook, you'll need to have it open and running in Jupyter notebook locally. At this point, the URL in your browser window should say something like "http://localhost:8890/notebooks/some_other_stuff/tutorial.ipynb"
This Week's "Tutorial"
Make sure you have read and understood the Homework instructions, have forked and cloned the 2019 homework repo, and have done any other necessary computer setup. If not, or if you need technical help, this is a great time for it.
The cells below contain an absurdly simple chunk of python code for you to complete, demonstrating the way that these tutorial notebooks will generally contain a mix of completed and incompleted code. Your job is to complete the code such that running the notebook will result in a string being printed out. Specifically, the string should be a brief statement of what you hope to learn from this class.
Once you've produced a functional notebook, submit your solution to the Tutorial1 folder of the private repo per the usual procedure for submitting homework assignments. (Note that we will not do this for any other tutorials; this is just to make sure that everyone knows how to use the repository.)
Preliminaries
The first code cell will usually contain some import statements in addition to the following definitions.
The REPLACE_WITH_YOUR_SOLUTION and/or REMOVE_THIS_LINE functions will show up anywhere you need to add your own code to complete the tutorial. Trying to run those cells as-is will produce a reminder.
End of explanation
try:
exec(open('Solution/goals.py').read())
except IOError:
my_goals = REPLACE_WITH_YOUR_SOLUTION()
Explanation: This crazy try-except construction is our way of making sure the notebooks will work when completed without actually providing complete code. You can either write your code directly in the except block, or delete the try, exec and except lines entirely (remembering to unindent the remaining lines in that case, because python).
End of explanation
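For completeness, a finished version of that cell might simply assign the string directly (a hypothetical example; any sentence works):
my_goals = "I hope to learn how to build and critique probabilistic models of astronomical data."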
print(my_goals)
Explanation: This cell just prints out the string my_goals.
End of explanation |
10,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objective
Show result of LS estimator for polynomials
Step1: Parameters
Step2: Do LS Estimation
Step3: Plotting | Python Code:
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 30}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(30, 15) )
Explanation: Content and Objective
Show result of LS estimator for polynomials:
Given $(x_i, y_i), i=1,...,N$
Assume polynomial model (plus awgn) to be valid
Get LS estimate for polynomial coefficients and show result
Method: Sample groups and get estimator
End of explanation
# define number of samples
N = 20
# define degrees of polynomials
K_actual = 8
K_est = 2
# randomly sample coefficients of polynomial and "noise-it"
coeffs = np.random.rand( K_actual ) * ( -1 )**np.random.randint( 2, size=K_actual )
coeffs /= np.linalg.norm( coeffs, 1 )
f = np.polynomial.polynomial.Polynomial( coeffs )
x_fine = np.linspace( 0, 1, 100)
# define variance of noise
sigma2 = .0
# define random measuring points
x_sample = np.sort( np.random.choice( x_fine, N, replace=False) )
f_sample = f( x_sample ) + np.sqrt( sigma2 ) * np.random.randn( x_sample.size )
Explanation: Parameters
End of explanation
X_LS = np.zeros( ( N, K_est ) )
for _n in range( N ):
for _k in range( K_est ):
X_LS[ _n, _k ] = ( x_sample[ _n ] )** _k
a_LS = np.matmul (np.linalg.pinv( X_LS ), f_sample )
f_LS = np.polynomial.polynomial.Polynomial( a_LS )
print( 'Actual coefficients:\n{}\n'.format( coeffs ) )
print( 'LS estimation:\n{}'.format( a_LS ) )
Explanation: Do LS Estimation
End of explanation
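As a cross-check (an added sketch, not part of the original notebook), the same least-squares fit can be obtained with numpy's built-in solver instead of the explicit pseudo-inverse:
# equivalent LS solution via numpy's least-squares solver
a_LS_check, *_ = np.linalg.lstsq(X_LS, f_sample, rcond=None)
print('LS estimation via lstsq:\n{}'.format(a_LS_check))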
# plot results
plt.plot( x_fine, f( x_fine ), label='$f(x)$' )
plt.plot( x_sample, f_sample, '-x', ms=12, label='$(x_i, y_i)$' )
plt.plot( x_fine, f_LS( x_fine ), ms=12, label='$f_{LS}(x)$' )
plt.grid( True )
plt.legend()
Explanation: Plotting
End of explanation |
10,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
matplotlib study notes
Notes for organizing and learning the basic concepts and usage of matplotlib
References
matplotlib org
Matplotlib tutorial
IPython and pylab mode
IPython is an enhanced version of Python. It improves on the standard interpreter in the following areas: named inputs/outputs, access to system (shell) commands, and debugging. If we start IPython from the command line with the -pylab option (--pylab for versions after 0.12), we can plot interactively just like in Matlab or Mathematica.
Quick-start example
The default configuration in detail
In the code below we show matplotlib's default configuration together with explanatory comments; this part of the configuration covers everything related to plot style. The settings in the code are identical to the defaults, and you can change the values in interactive mode to observe the effect.
Step1: To make the examples that follow more general, we import:
import numpy as np
import matplotlib.pyplot as plt
All of the np and plt functions could actually be used without the np and plt prefixes if we had started with from pylab import *. Let us rewrite the example above:
Step2: Changing the line colour and width
Step3: Setting the plot limits
Step4: Setting ticks and tick labels
Step5: Moving the spines
Step6: Adding a legend
Step7: Annotating some special points
We would like to annotate both curves at the position $2\pi/3$. First we draw a point on each curve at that position; then we drop a dashed vertical line to the x-axis; finally we add the label.
Step8: Setting transparency
The tick labels on the axes are hidden by the curves, which is hard to tolerate. We can enlarge them and give them a semi-transparent white background, so that both the labels and the curves remain visible
Step9: Figures, subplots, axes and ticks
So far we have drawn figures and axes implicitly. For quick plotting this is convenient. We can also control figures, subplots and axes explicitly. In matplotlib a "figure" is the whole window content seen in the user interface. Inside a figure there are "subplots". Subplots are placed on a regular grid, whereas "axes" are not restricted in this way and can be placed anywhere in the figure. We have already used figures and subplots implicitly: when we call the plot function, matplotlib calls gca() and gcf() to obtain the current axes and figure; if no figure is available, figure() is called to create one, strictly speaking by creating a figure with a single subplot via subplot(1,1,1).
Figures
A "figure" is one of the windows titled "Figure #" in the GUI. Figure numbering starts at 1, in the MATLAB style, unlike Python's usual 0-based numbering. It has a number of properties, such as | Python Code:
# import everything from matplotlib (numpy is then available under the name np)
from pylab import *
# create a figure of 8 x 6 points, with a resolution of 80 dpi
figure(figsize=(8,6), dpi=80)
# create a new 1 x 1 subplot; the following plots go into its first (and only) cell
subplot(1,1,1)
X = np.linspace(-np.pi, np.pi, 256,endpoint=True)
C,S = np.cos(X), np.sin(X)
# plot the cosine curve with a continuous blue line of width 1 (pixel)
plot(X, C, color="blue", linewidth=1.0, linestyle="-")
# plot the sine curve with a continuous green line of width 1 (pixel)
plot(X, S, color="green", linewidth=1.0, linestyle="-")
# set the limits of the x axis
xlim(-4.0,4.0)
# set the x-axis ticks
xticks(np.linspace(-4,4,9,endpoint=True))
# set the limits of the y axis
ylim(-1.0,1.0)
# set the y-axis ticks
yticks(np.linspace(-1,1,5,endpoint=True))
# save the figure with a resolution of 72 dpi
# savefig("exercice_2.png",dpi=72)
# show the result on screen
show()
Explanation: matplotlib study notes
Notes for organizing and learning the basic concepts and usage of matplotlib
References
matplotlib org
Matplotlib tutorial
IPython and pylab mode
IPython is an enhanced version of Python. It improves on the standard interpreter in the following areas: named inputs/outputs, access to system (shell) commands, and debugging. If we start IPython from the command line with the -pylab option (--pylab for versions after 0.12), we can plot interactively just like in Matlab or Mathematica.
Quick-start example
The default configuration in detail
In the code below we show matplotlib's default configuration together with explanatory comments; this part of the configuration covers everything related to plot style. The settings in the code are identical to the defaults, and you can change the values in interactive mode to observe the effect.
End of explanation
# import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# create a figure of 8 x 6 points, with a resolution of 80 dpi
plt.figure(figsize=(8,6), dpi=80)
# create a new 1 x 1 subplot; the following plots go into its first (and only) cell
plt.subplot(1,1,1)
X = np.linspace(-np.pi, np.pi, 256,endpoint=True)
C,S = np.cos(X), np.sin(X)
# plot the cosine curve with a continuous blue line of width 1 (pixel)
plt.plot(X, C, color="blue", linewidth=1.0, linestyle="-")
# plot the sine curve with a continuous green line of width 1 (pixel)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
# set the limits of the x axis
plt.xlim(-4.0,4.0)
# set the x-axis ticks
plt.xticks(np.linspace(-4,4,9,endpoint=True))
# set the limits of the y axis
plt.ylim(-1.0,1.0)
# set the y-axis ticks
plt.yticks(np.linspace(-1,1,5,endpoint=True))
# save the figure with a resolution of 72 dpi
# savefig("exercice_2.png",dpi=72)
# show the result on screen
plt.show()
Explanation: To make the examples that follow more general, we import:
import numpy as np
import matplotlib.pyplot as plt
All of the np and plt functions could actually be used without the np and plt prefixes if we had started with from pylab import *. Let us rewrite the example above:
End of explanation
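A note added here (not part of the original tutorial): the same plot can also be written with matplotlib's object-oriented interface, which avoids relying on the implicit "current" figure and axes:
fig, ax = plt.subplots(figsize=(8, 6), dpi=80)
ax.plot(X, C, color="blue", linewidth=1.0, linestyle="-")
ax.plot(X, S, color="green", linewidth=1.0, linestyle="-")
ax.set_xlim(-4.0, 4.0)
ax.set_ylim(-1.0, 1.0)
plt.show()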
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(10,6), dpi=80)
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
plt.show()
Explanation: Changing the line colour and width
End of explanation
## one method
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.ylim(C.min()*1.1, C.max()*1.1)
## another method
xmin ,xmax = X.min(), X.max()
ymin, ymax = C.min(), C.max()
dx = (xmax - xmin) * 0.2
dy = (ymax - ymin) * 0.2
plt.xlim(xmin - dx, xmax + dx)
plt.ylim(ymin - dy, ymax + dy)
plt.plot(X, C)
plt.plot(X, S)
plt.show()
Explanation: Setting the plot limits
End of explanation
# set the ticks
# plt.xticks( [-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
# plt.yticks([-1, 0, +1])
# set the ticks and tick labels: note the use of LaTeX here
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
[r'$-1$', r'$0$', r'$+1$'])
plt.plot(X, C)
plt.plot(X, S)
plt.show()
Explanation: Setting ticks and tick labels
End of explanation
ax = plt.subplot(111)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C)
plt.plot(X, S)
plt.show()
Explanation: Moving the spines
End of explanation
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.legend(loc='upper left')
plt.show()
Explanation: Adding a legend
End of explanation
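An added aside (not from the original tutorial): loc='best' lets matplotlib pick a free corner automatically, and frameon controls whether the legend box is drawn:
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.legend(loc='best', frameon=False)
plt.show()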
# 移动脊柱
ax = plt.subplot(111)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
# 设置范围,记号和标签
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.ylim(C.min()*1.1,C.max()*1.1)
plt.yticks([-1, +1],
[r'$-1$', r'$+1$'])
# 设置特殊标记
t = 2*np.pi/3
# 竖线, cos(2*pi/3)
plt.plot([t,t],[0,np.cos(t)],
color ='blue', linewidth=1.5, linestyle="--")
plt.scatter([t,],[np.cos(t),], 50, color ='blue')
plt.annotate(r'$\sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
xy=(t, np.sin(t)), xycoords='data',
xytext=(+10, +30), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# 竖线, sin(2*pi/3)
plt.plot([t,t],[0,np.sin(t)],
color ='red', linewidth=1.5, linestyle="--")
plt.scatter([t,],[np.sin(t),], 50, color ='red')
plt.annotate(r'$\cos(\frac{2\pi}{3})=-\frac{1}{2}$',
xy=(t, np.cos(t)), xycoords='data',
xytext=(-90, -50), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# 图例
plt.legend(loc='upper left', frameon=False)
plt.show()
Explanation: Annotating some special points
We would like to annotate both curves at the position $2\pi/3$. First we draw a point on each curve at that position; then we drop a dashed vertical line to the x-axis; finally we add the label.
End of explanation
# 移动脊柱
ax = plt.subplot(111)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
# 设置范围,记号和标签
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.ylim(C.min()*1.1,C.max()*1.1)
plt.yticks([-1, +1],
[r'$-1$', r'$+1$'])
# 设置特殊标记
t = 2*np.pi/3
# 竖线, cos(2*pi/3)
plt.plot([t,t],[0,np.cos(t)],
color ='blue', linewidth=1.5, linestyle="--")
plt.scatter([t,],[np.cos(t),], 50, color ='blue')
plt.annotate(r'$\sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
xy=(t, np.sin(t)), xycoords='data',
xytext=(+10, +30), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# 竖线, sin(2*pi/3)
plt.plot([t,t],[0,np.sin(t)],
color ='red', linewidth=1.5, linestyle="--")
plt.scatter([t,],[np.sin(t),], 50, color ='red')
plt.annotate(r'$\cos(\frac{2\pi}{3})=-\frac{1}{2}$',
xy=(t, np.cos(t)), xycoords='data',
xytext=(-90, -50), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
for label in ax.get_xticklabels() + ax.get_yticklabels():
label.set_fontsize(16)
label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.65 ))
plt.show()
Explanation: Setting transparency
The tick labels on the axes are hidden by the curves, which is hard to tolerate. We can enlarge them and give them a semi-transparent white background, so that both the labels and the curves remain visible
End of explanation
eqs = []
eqs.append((r"$E = mc^2 = \sqrt{{m_0}^2c^4 + p^2c^2}$"))
eqs.append((r"$F_G = G\frac{m_1m_2}{r^2}$"))
plt.axes([0.025,0.025,0.95,0.95])
for i in range(24):
index = np.random.randint(0,len(eqs))
eq = eqs[index]
size = np.random.uniform(12,32)
x,y = np.random.uniform(0,1,2)
alpha = np.random.uniform(0.25,.75)
plt.text(x, y, eq, ha='center', va='center', color="#11557c", alpha=alpha,
transform=plt.gca().transAxes, fontsize=size, clip_on=True)
plt.xticks([]), plt.yticks([])
# savefig('../figures/text_ex.png',dpi=48)
plt.show()
Explanation: Figures, subplots, axes and ticks
So far we have drawn figures and axes implicitly. For quick plotting this is convenient. We can also control figures, subplots and axes explicitly. In matplotlib a "figure" is the whole window content seen in the user interface. Inside a figure there are "subplots". Subplots are placed on a regular grid, whereas "axes" are not restricted in this way and can be placed anywhere in the figure. We have already used figures and subplots implicitly: when we call the plot function, matplotlib calls gca() and gcf() to obtain the current axes and figure; if no figure is available, figure() is called to create one, strictly speaking by creating a figure with a single subplot via subplot(1,1,1).
Figures
A "figure" is one of the windows titled "Figure #" in the GUI. Figure numbering starts at 1, in the MATLAB style, unlike Python's usual 0-based numbering. It has a number of properties, such as: figsize, dpi, close(), etc.
Subplots
Commonly used, for example:
subplot(2,2,1), subplot(2,2,4), subplot(2,2,3), subplot(2,2,4)
Axes
axes()
Ticks
Tick Locators can be used to specify where the ticks are placed, and Tick Formatters to adjust how they look.
Other common plot types
Scatter plot: scatter
Bar chart: bar
Contour plot: contourf
Grayscale image: meshgrid
Pie chart: pie
Quiver (vector field) plot: quiver
Grid: axes
Multiple grids: subplot
Polar plot: built from bar
3D plot: Axes3D
Text: text
End of explanation |
10,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preparation of the reference genome
Usually NGS reads are mapped against a reference genome containing only the assembled chromosomes, and not the remaining contigs. And this methodology is perfectly valid. However in order to decrease the probability of having mapping errors, adding all unassembled contigs may help
Step1: The variables defined above can be modified for any other species, resulting in new results for the following commands.
Download from the NCBI
List of chromosomes/contigs
Step2: Sequences of each chromosome/contig
Step3: For each contig/chromosome download the corresponding FASTA file from NCBI
Step4: Concatenate all contigs/chromosomes into single files
Step5: Remove all the other files (with single chromosome/contig)
Step6: Creation of an index file for GEM mapper
Step7: The path to the index file will be
Step8: Cleanup | Python Code:
species = 'Mus_musculus'
taxid = '10090'
assembly = 'GRCm38.p6'
genbank = 'GCF_000001635.26'
Explanation: Preparation of the reference genome
Usually NGS reads are mapped against a reference genome containing only the assembled chromosomes, and not the remaining contigs. And this methodology is perfectly valid. However in order to decrease the probability of having mapping errors, adding all unassembled contigs may help:
For variant discovery, RNA-seq and ChIP-seq, it is recommended to use the entire primary assembly, including assembled chromosomes AND unlocalized/unplaced contigs, for the purpose of read mapping. Not including unlocalized and unplaced contigs potentially leads to more mapping errors.
from: http://lh3lh3.users.sourceforge.net/humanref.shtml
We are thus going to download full chromosomes and unassembled contigs. From these sequences we are then going to create two reference genomes:
- one "classic" reference genome with only assembled chromosomes, used to compute statistics on the genome (GC content, number of restriction sites or mappability)
- one that would contain all chromosomes and unassembled contigs, used exclusively for mapping.
Mus musculus's reference genome sequence
We search for the most recent reference genome corresponding to Mouse (https://www.ncbi.nlm.nih.gov/genome?term=mus%20musculus).
From there we obtain these identifiers:
End of explanation
sumurl = ('ftp://ftp.ncbi.nlm.nih.gov/genomes/all/{0}/{1}/{2}/{3}/{4}_{5}/'
'{4}_{5}_assembly_report.txt').format(genbank[:3], genbank[4:7], genbank[7:10],
genbank[10:13], genbank, assembly)
crmurl = ('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi'
'?db=nuccore&id=%s&rettype=fasta&retmode=text')
print(sumurl)
! wget -q $sumurl -O chromosome_list.txt
! head chromosome_list.txt
Explanation: The variables defined above can be modified for any other species, resulting in new results for the following commands.
Download from the NCBI
List of chromosomes/contigs
End of explanation
import os
dirname = 'genome'
! mkdir -p {dirname}
Explanation: Sequences of each chromosome/contig
End of explanation
contig = []
for line in open('chromosome_list.txt'):
if line.startswith('#'):
continue
seq_name, seq_role, assigned_molecule, _, genbank, _, refseq, _ = line.split(None, 7)
if seq_role == 'assembled-molecule':
name = 'chr%s.fasta' % assigned_molecule
else:
name = 'chr%s_%s.fasta' % (assigned_molecule, seq_name.replace('/', '-'))
contig.append(name)
outfile = os.path.join(dirname, name)
if os.path.exists(outfile) and os.path.getsize(outfile) > 10:
continue
error_code = os.system('wget "%s" --no-check-certificate -O %s' % (crmurl % (genbank), outfile))
if error_code:
error_code = os.system('wget "%s" --no-check-certificate -O %s' % (crmurl % (refseq), outfile))
if error_code:
            print(genbank)
Explanation: For each contig/chromosome download the corresponding FASTA file from NCBI
End of explanation
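If wget is not available on the system, the same downloads could be sketched with the requests library instead (an illustrative alternative, not the notebook's own method):
import requests

def fetch(url, outfile):
    # returns True on success, False otherwise
    r = requests.get(url, verify=False)
    if r.status_code != 200:
        return False
    with open(outfile, 'w') as out:
        out.write(r.text)
    return True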
def write_to_fasta(line):
contig_file.write(line)
def write_to_fastas(line):
contig_file.write(line)
simple_file.write(line)
os.system('mkdir -p {}/{}-{}'.format(dirname, species, assembly))
contig_file = open('{0}/{1}-{2}/{1}-{2}_contigs.fa'.format(dirname, species, assembly),'w')
simple_file = open('{0}/{1}-{2}/{1}-{2}.fa'.format(dirname, species, assembly),'w')
for molecule in contig:
fh = open('{0}/{1}'.format(dirname, molecule))
oline = '>%s\n' % (molecule.replace('.fasta', ''))
    _ = next(fh)  # skip the FASTA header line (fh.next() is Python 2 only)
# if molecule is an assembled chromosome we write to both files, otherwise only to the *_contigs one
write = write_to_fasta if '_' in molecule else write_to_fastas
for line in fh:
write(oline)
oline = line
# last line usually empty...
if line.strip():
write(line)
contig_file.close()
simple_file.close()
Explanation: Concatenate all contigs/chromosomes into single files
End of explanation
! rm -f {dirname}/*.fasta
Explanation: Remove all the other files (with single chromosome/contig)
End of explanation
! gem-indexer -T 8 -i {dirname}/{species}-{assembly}/{species}-{assembly}_contigs.fa -o {dirname}/{species}-{assembly}/{species}-{assembly}_contigs
Explanation: Creation of an index file for GEM mapper
End of explanation
! gem-indexer -i {dirname}/{species}-{assembly}/{species}-{assembly}.fa \
-o {dirname}/{species}-{assembly}/{species}-{assembly} -T 8
! gem-mappability -I {dirname}/{species}-{assembly}/{species}-{assembly}.gem -l 50 \
-o {dirname}/{species}-{assembly}/{species}-{assembly}.50mer -T 8
! gem-2-wig -I {dirname}/{species}-{assembly}/{species}-{assembly}.gem \
-i {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.mappability \
-o {dirname}/{species}-{assembly}/{species}-{assembly}.50mer
! wigToBigWig {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.wig \
{dirname}/{species}-{assembly}/{species}-{assembly}.50mer.sizes \
{dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw
! bigWigToBedGraph {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw \
{dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bedGraph
Explanation: The path to the index file will be: {dirname}/{species}-{assembly}/{species}_contigs.gem
Compute mappability values needed for bias specific normalizations
In this case we can use the FASTA of the genome whithout contigs and follow these step:
End of explanation
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.mappability
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.wig
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.sizes
! rm -f {dirname}/{species}-{assembly}/*.log
Explanation: Cleanup
End of explanation |
10,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GP Regression with a Spectral Mixture Kernel
Introduction
This example shows how to use a SpectralMixtureKernel module on an ExactGP model. This module is designed for
When you want to use exact inference (e.g. for regression)
When you want to use a more sophisticated kernel than RBF
The Spectral Mixture (SM) kernel was invented and discussed in Wilson et al., 2013.
Step1: In the next cell, we set up the training data for this example. We'll be using 15 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
Step2: Set up the model
The model should be very similar to the ExactGP model in the simple regression example.
The only difference is here, we're using a more complex kernel (the SpectralMixtureKernel). This kernel requires careful initialization to work properly. To that end, in the model __init__ function, we call
self.covar_module = gpytorch.kernels.SpectralMixtureKernel(num_mixtures=4)
self.covar_module.initialize_from_data(train_x, train_y)
This ensures that, when we perform optimization to learn kernel hyperparameters, we will be starting from a reasonable initialization.
Step3: In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The spectral mixture kernel's hyperparameters start from what was specified in initialize_from_data.
See the simple regression example for more info on this step.
Step4: Now that we've learned good hyperparameters, it's time to use our model to make predictions. The spectral mixture kernel is especially good at extrapolation. To that end, we'll see how well the model extrapolates past the interval [0, 1].
In the next cell, we plot the mean and confidence region of the Gaussian process model. The confidence_region method is a helper method that returns 2 standard deviations above and below the mean. | Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: GP Regression with a Spectral Mixture Kernel
Introduction
This example shows how to use a SpectralMixtureKernel module on an ExactGP model. This module is designed for
When you want to use exact inference (e.g. for regression)
When you want to use a more sophisticated kernel than RBF
The Spectral Mixture (SM) kernel was invented and discussed in Wilson et al., 2013.
End of explanation
train_x = torch.linspace(0, 1, 15)
train_y = torch.sin(train_x * (2 * math.pi))
Explanation: In the next cell, we set up the training data for this example. We'll be using 15 regularly spaced points on [0,1] which we evaluate the function on and add Gaussian noise to get the training labels.
End of explanation
class SpectralMixtureGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(SpectralMixtureGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.SpectralMixtureKernel(num_mixtures=4)
self.covar_module.initialize_from_data(train_x, train_y)
def forward(self,x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = SpectralMixtureGPModel(train_x, train_y, likelihood)
Explanation: Set up the model
The model should be very similar to the ExactGP model in the simple regression example.
The only difference is here, we're using a more complex kernel (the SpectralMixtureKernel). This kernel requires careful initialization to work properly. To that end, in the model __init__ function, we call
self.covar_module = gpytorch.kernels.SpectralMixtureKernel(num_mixtures=4)
self.covar_module.initialize_from_data(train_x, train_y)
This ensures that, when we perform optimization to learn kernel hyperparameters, we will be starting from a reasonable initialization.
End of explanation
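To see what initialize_from_data actually did, one can inspect the mixture parameters right after constructing the model. This is a small added check, not part of the original example; the exact attribute names below may differ between GPyTorch versions.
print(model.covar_module.mixture_means)
print(model.covar_module.mixture_scales)
print(model.covar_module.mixture_weights)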
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 100
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iter, loss.item()))
optimizer.step()
Explanation: In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
The spectral mixture kernel's hyperparameters start from what was specified in initialize_from_data.
See the simple regression example for more info on this step.
End of explanation
# Test points every 0.1 between 0 and 5
test_x = torch.linspace(0, 5, 51)
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances)
# See https://arxiv.org/abs/1803.06058
with torch.no_grad(), gpytorch.settings.fast_pred_var():
# Make predictions
observed_pred = likelihood(model(test_x))
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(4, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
Explanation: Now that we've learned good hyperparameters, it's time to use our model to make predictions. The spectral mixture kernel is especially good at extrapolation. To that end, we'll see how well the model extrapolates past the interval [0, 1].
In the next cell, we plot the mean and confidence region of the Gaussian process model. The confidence_region method is a helper method that returns 2 standard deviations above and below the mean.
End of explanation |
10,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facetted subgrids
We can split the image (facetting), and we can split the grid (subgrids) and do a lot of operations separately. This works out relatively straightforwardly. However, can we combine the two operations? As it turns out the answer is "yes", but the implementation has a few subtleties to it, so this notebook walks through it step by step.
Step1: Problem definition
To enable fast imaging we are interested in various types of locality. Specifically for images we care about locality in image space, whereas for dealing with visibilities we need locality in grid space (sub-grids). For example, we might have nodes storing pieces (facets) of the image, and now want to reconstruct a certain portion of the grid for de-gridding. Or the other way around - having gridded visibilities we want to propagate its contributions to facet nodes.
Mathematically speaking, facets and sub-grids are low-passes in time and frequency spaces. Let us express this mathematically as one-dimensional functions, and let us view everything from grid space. Let $G$ be the full grid data, assumed to be bounded / repeating past $\frac 12$ and sampled at a rate $N$
Step2: Now let $A_i$, $B_j$ be grid and facet masks respectively, so $A_i(x) = 0$ iff $\left|x-x_i\right| > x_A$, $\sum_i A_i = 1$
and $\mathcal F_y B_j(y) = 0$ iff $\left|y - y_j\right| > y_B$, $\sum_j \mathcal F B_j = 1$. Then we define
Step3: Note that to cover the entire number space with functions of finite support requires an infinite number of sub-grids and facets. However, in practice this does not really matter, as due to finite sampling we are effectively looking at infinite (repating) equivalence classes of facet/sub-grids anyway. Due to linearity this does not change the argument, so we will use the simpler representation.
The recombination problem
Clearly we have
Step4: After all, $A_i (B_j \ast G) \ne B_j \ast A_i G$. This has direct impact on how compact we can represent these
Step5: That is pretty close to what we are looking for
Step6: Close enough, this precision is more than enough for our purposes! However, we are not quite finished yet.
Correction for convolution
Our original goal was to construct $B_j \ast A_i G$. At this point we have a good approximation for $n_j \ast A_i G$. So it might look like all we need is the truncated inverse of the wave function $b_j$ such that $b_j \ast n_j = B_j$
Step7: The function $b$ looks pretty wild in grid space, with a lot of high frequency. But that is to be expected to some degree, after all it "undoes" the well-behavedness of $n$. The second diagram shows that we indeed have $b_j\ast n_j = B_j$ as intended.
Now let us attempt to go after the grand prize, $b_j \ast m_i (n_j \ast a_i G)$
Step8: Looks okay-ish at first glance. However, let us measure the errors objectively
Step9: A lot worse than we had hoped!
It is not too hard to trace the source of the errors back
Step10: Note that the base accuracy performance of the wave function has improved a bit simply because we have increased $y_n$ slightly. However, the important part is that we can make much better use of the results
Step11: Reducing the grid-size / image sampling rate has clearly not impacted the precision too much. Note that while the $(1-m)(n*AG)$ error term should be zero in the $x_m$ grid region, with the reduced size the errors get aliased back onto the subgrid. For our parameters this happens to cause an error "well" in the centre of the sub-grid.
However, limiting in grid space is only half the story. As established at the start we not only want to reduce our grid size down to $x_m$, but want to also get rid of image data beyond $y_n$
Step12: As intended, $n_j\ast a_iG$ drops to zero past $y_N$. This is clearly true even after downsampling, so we can truncate the signal in image space as well. This does not impact signal reconstruction in any visible way.
Step13: The errors have not changed too much even though we have now significantly reduced the number of samples in both grid and image space. The downsampling operation eliminated the incidental "well" in grid space, but $n_j\ast a_iG \approx m_i (n_j \ast a_i G)$ holds just as it did before. This is what we were after!
For the final step, let's convolve with $b_j$. Notice that
Step14: And we made it out the other end. Seems the approximation is indeed robust against truncation and down-sampling as intended.
Sub-grids recombination
So we can obtain facet data from sub-grids effectively. But does this also work in the "opposite" direction, if we want to construct a sub-grid from facets?
\begin{align}
S_i = A_iG &= A_i \left( \left( \sum_j B_j \right) \ast G \right) = \sum_j A_i \left( B_j \ast G \right) \
\end{align}
Let us split $A_i$ and $B_j$ as we did before. Now what we would like to establish is that
Step15: This is entirely dual to the original method -- we are doing everything backwards. Now $G$ needs to get convolved with $b_j$ right away, and we end with the $A_i$ mask cutting away a bit of grid space.
It is instructive to have a closer look at the function of this mask operation, especially considering the reduced case
Step16: Reproduction of the original signal quickly deteriorates beyond the $x_A$ region. Especially note how $n\ast mG'$ tends to jump around violently beyond $x_A$. This is even worse for the reduced case, where the error aliases around the $x_m$ edges.
Data sizes and scaling
Let us have a closer look and check exactly how effective this has been. If $A (B \ast G) = B \ast (A G)$ were true, we would expect that we would be able to represent the dependencies between a facet and a subgrid with about $\lceil 4x_Ay_B \rceil$ samples.
Our method only allows truncating to $x_m$ in grid and $y_n$ in image space, therefore we need $\lceil 4x_my_n \rceil \approx \lceil 4(x_A+x_n)y_n \rceil$ samples
Step17: As expected, modulo some rounding of pixels at the edges. We are not actually that far away from the limits given by information theory. However, this is clearly for a very small data size, so what would happen if we started doing this for larger grids? Would we be able to retain the reached accuracy and efficiency?
To do this quickly, let us attempt to predict how much error we expect from our method. As we have seen before, the error of
$$b_j \ast m_i (n_j \ast a_i G)$$
consists of
Step18: This is clearly quite pessimistic even when comparing against the worst case, but it gives us a decent starting point.
Let us assume that we increase the facet size ($y_B$). To make sure that we compare like with like, let us attempt to keep $\mathcal F n_j(y_B)$ constant. This means that we need
Step19: At the same time, the "overhead" of our solution decreases slowly depending on the sub-grid size $x_A$, eventually approximating $y_N/y_B$. This is of course because the constant grid margin eventually stops mattering
Step20: So how much accuracy can we get for a certain overhead, optimally? If we settle on a certain overhead value $o$, we can calculate $y_n$ from a value of $x_ny_n$
Step21: Facet Split
So far we have been rather cavalier with using arrays larger than their "true" size. This simplifies things mathematically, yet we must remember that we don't actually have the $b_j \ast F_j$ available. Instead, we will handle an FFT grid, which is actually sampled at a finite rate - which we can represent as multiplying by a comb function
Step22: The problem here is that $\mathcal Fm$ never approaches zero - it is a sinc function that doesn't fall off very much at all. Therefore we have $\mathcal F m_i(y) \ne \left\operatorname{III}_{2y_P} \ast \mathcal F m_i \right$ no matter how much we restrict the $y$ region.
However, we can actually choose $m$
Step23: However, now this also limits $m$ to $y_N$ in image space
Step24: Good starting point. We just need to figure out how far we can reduce the image size (here $y_P$) without impacting our ability to reconstruct the image. What we need is
Step25: This shows that by using the low-pass-filtered $m'$ we can indeed perform the entire computation using just $2y_P$ sample points
Step26: Note that we are only truncating terms here that we already know to fall to zero. However, the choice of $y_P$ is still quite subtle here, as we need padding to make sure that the convolutions work out in the correct way.
Actual implementation
So to review, what we have identified are two approximations
Step27: So let us start by reconstructing sub-grids from facets. First step is to convolve in $b_j$, derived from the PSWF. This is a cheap multiplication in image space. We then pad to $2y_P$ which yields us
$$\operatorname{III}_{2y_P} \ast \mathcal Fb_j \mathcal FG$$
in FFT convention.
Step28: Next we need to cut out the appropriate sub-grid for $m_i$ for the subgrid we want to construct (here
Step29: Note that we would clearly not construct them separately for a real pipeline, as they are simply shifted. Due to the truncation in frequency space these are not quite top-hat functions any more. These terms now get used to extract the sub-grid data from each facet
Step30: Next step is to multiply in $n_j$ in order to un-do the effects of $b$ and cut out the garbage between $y_N$ and $y_P$. This means we arrive at
Step31: Quick mid-point accuracy check against the approximation formula using full resultion. We should be looking at only rounding errors.
Furthermore, when padded to the full density the Fourier transform should fall to the error level provided by the PSWF past $x_N+x_M$ - indicating that we can reduce the sampling rate / grid size without losing information
Step32: So our final step is to reduce the sampling rate (sub-grid size) to $x_M$. This is the step where we actually introduce the bulk of our error, as the "tail" regions outside $x_M$ get aliased in. As established before, this especially copies the $x_M+x_N$ region inside, which doesn't hurt because we are only interested in the centre $x_A$ part.
As long as $x_M$ divides the grid well, this can simply be achieved by selecting every $\frac1{2x_M}$-th sample
Step33: At this point, all that is left is to put together the sum
$$S_i = a_i \sum_j \left( n_j \ast m_i (b_j \ast F_j)\right)$$
which eliminates $b_j$. Note that we know $A$ to be a pure mask, therefore this is simply truncation in grid space.
Step34: Note the pattern of the errors in image space - this is the position dependence of the accuracy pattern from the $b_j$ multiplication. As we stitch together 5 facets, this pattern repeats 5 times.
However, note how the "stitching" works out really well - accuracy is especially good around the $y_B$ areas where the data from two facets "overlaps". This is likely because we are effectively averaging over two samples here, which boosts our accuracy.
Interactive playground
Test different parameters
Step35: 2D case
Clearly this can be generalised to arbitrarily high dimensions. However, note that this makes things worse in a few ways at the same time | Python Code:
%matplotlib inline
from matplotlib import pylab
import matplotlib.patches as patches
import matplotlib.path as path
from ipywidgets import interact
import numpy
import sys
import random
import itertools
import time
import scipy.special
import math
pylab.rcParams['figure.figsize'] = 16, 10
pylab.rcParams['image.cmap'] = 'viridis'
try:
sys.path.append('../..')
    from crocodile.synthesis import *
from util.visualize import *
print("Crocodile mode")
except ImportError:
print("Stand-alone mode")
# Convolution and FFT helpers
def conv(a, b): return ifft(fft(a) * fft(b))
def coordinates(N):
return numpy.fft.fftshift(numpy.fft.fftfreq(N))
def fft(a):
if len(a.shape) == 1: return numpy.fft.fftshift(numpy.fft.fft(numpy.fft.ifftshift(a)))
elif len(a.shape) == 2: return numpy.fft.fftshift(numpy.fft.fft2(numpy.fft.ifftshift(a)))
def ifft(a):
if len(a.shape) == 1: return numpy.fft.fftshift(numpy.fft.ifft(numpy.fft.ifftshift(a)))
elif len(a.shape) == 2: return numpy.fft.fftshift(numpy.fft.ifft2(numpy.fft.ifftshift(a)))
def pad_mid(a, N):
N0 = a.shape[0]
assert N >= N0
return numpy.pad(a, len(a.shape) * [(N//2-N0//2, (N+1)//2-(N0+1)//2)], mode='constant', constant_values=0.0)
def extract_mid(a, N):
assert N <= a.shape[0]
cx = a.shape[0] // 2
s = N // 2
if N % 2 == 0:
        return a[tuple(len(a.shape) * [slice(cx - s, cx + s)])]
    else:
        return a[tuple(len(a.shape) * [slice(cx - s, cx + s + 1)])]
def anti_aliasing_function(shape, m, c):
if len(numpy.array(shape).shape) == 0:
mult = 2 - 1/shape/4
return scipy.special.pro_ang1(m, m, c, mult*coordinates(shape))[0]
return numpy.outer(anti_aliasing_function(shape[0], m, c),
anti_aliasing_function(shape[1], m, c))
def coordinates2(N):
N2 = N // 2
if N % 2 == 0:
return numpy.mgrid[-N2:N2, -N2:N2][::-1] / N
else:
return numpy.mgrid[-N2:N2+1, -N2:N2+1][::-1] / N
def _show(a, name, scale, axes):
size = a.shape[0]
if size % 2 == 0:
low,high = -0.5, 0.5 * (size - 2) / size
else:
low,high = -0.5 * (size - 1) / size, 0.5 * (size - 1) / size
low = (low - 1/size/2) * scale
high = (high - 1/size/2) * scale
cax=axes.imshow(a, extent=(low,high,low,high)); axes.set_title(name);
axes.figure.colorbar(cax,shrink=.4,pad=0.025)
def show_grid(grid, name, theta, axes):
return _show(grid, name, theta, axes)
def show_image(img, name, theta, axes):
return _show(img, name, img.shape[0] / theta, axes)
# Helper for marking ranges in a graph
def mark_range(lbl, x0, x1, y0=None, y1=None, ax=None):
if ax is None: ax = pylab.gca()
if y0 is None: y0 = ax.get_ylim()[1]
if y1 is None: y1 = ax.get_ylim()[0]
wdt = ax.get_xlim()[1] - ax.get_xlim()[0]
ax.add_patch(patches.PathPatch(patches.Path([(x0,y0), (x0,y1)]), linestyle="dashed"))
ax.add_patch(patches.PathPatch(patches.Path([(x1,y0), (x1,y1)]), linestyle="dashed"))
if pylab.gca().get_yscale() == 'linear':
lbl_y = (y0*7+y1) / 8
else: # Some type of log scale
lbl_y = (y0**7*y1)**(1/8)
ax.annotate(lbl, (x1+wdt/200, lbl_y))
Explanation: Facetted subgrids
We can split the image (facetting), and we can split the grid (subgrids) and do a lot of operations separately. This works out relatively straightforwardly. However, can we combine the two operations? As it turns out the answer is "yes", but the implementation has a few subtleties to it, so this notebooks walks through it step by step.
End of explanation
N = 2000
x = coordinates(N); fx = N * x
ticks = coordinates(10); fticks = N * coordinates(12)
G = numpy.random.rand(N)-0.5
pylab.rcParams['figure.figsize'] = 16, 4
pylab.plot(x, G); pylab.legend(["G"]); pylab.xticks(ticks); pylab.show()
pylab.plot(fx, fft(G).real); pylab.legend(["F[G]"]); pylab.xticks(fticks); pylab.show()
Explanation: Problem definition
To enable fast imaging we are interested in various types of locality. Specifically for images we care about locality in image space, whereas for dealing with visibilities we need locality in grid space (sub-grids). For example, we might have nodes storing pieces (facets) of the image, and now want to reconstruct a certain portion of the grid for de-gridding. Or the other way around - having gridded visibilities we want to propagate its contributions to facet nodes.
Mathematically speaking, facets and sub-grids are low-passes in time and frequency spaces. Let us express this mathematically as one-dimensional functions, and let us view everything from grid space. Let $G$ be the full grid data, assumed to be bounded / repeating past $\frac 12$ and sampled at a rate $N$:
End of explanation
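# A quick sanity check of the setup above (assuming only numpy, G and the helpers
# already defined; the 4-way tiling "masks4" is purely illustrative): a complete
# set of non-overlapping grid masks sums to one, so the masked pieces trivially
# add back up to G.
masks4 = [numpy.roll(pad_mid(numpy.ones(N // 4), N), k * (N // 4)) for k in range(4)]
assert numpy.allclose(sum(masks4), 1)
assert numpy.allclose(sum(mask * G for mask in masks4), G)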
x0 = 0.2; xA = 200/N
yB = N/10
A = numpy.where(numpy.abs(x-x0) <= xA, 1, 0)
B = ifft(numpy.where(numpy.abs(fx) <= yB, 1, 0))
pylab.plot(x, A, x, B.real, x, A*G); pylab.legend(["A", "B", "AG"]);
pylab.xticks(ticks); mark_range("xA", x0-xA,x0+xA); pylab.show();
pylab.plot(fx, fft(A).real, fx, fft(B).real,
fx, fft(conv(B, G)).real); pylab.legend(["F[A]", "F[B]", "F[B*G]"]);
pylab.ylim(-20,30); pylab.xticks(fticks); mark_range("yB",-yB,yB); pylab.show();
Explanation: Now let $A_i$, $B_j$ be grid and facet masks respectively, so $A_i(x) = 0$ iff $\left|x-x_i\right| > x_A$, $\sum_i A_i = 1$
and $\mathcal F B_j(y) = 0$ iff $\left|y - y_j\right| > y_B$, $\sum_j \mathcal F B_j = 1$. Then we define:
\begin{align}
S_i &= A_iG && \text{sub-grid} \
F_j &= B_j \ast G && \text{facet}
\end{align}
End of explanation
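# Small check of the two localities just defined (A, B, G and helpers from above):
# the facet F_j = B*G is numerically band-limited to y_B, while the sub-grid
# S_i = A G is exactly limited to x_A in grid space.
F_check = conv(B, G)
print(numpy.max(numpy.abs(fft(F_check)[numpy.abs(fx) > yB + 1])))  # rounding level only
print(numpy.max(numpy.abs((A * G)[numpy.abs(x - x0) > xA])))       # exactly zero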
pylab.plot(x, conv(B, A*G).real); pylab.legend(["B*AG"]);
pylab.xticks(ticks); mark_range("xA", x0-xA,x0+xA); pylab.show();
pylab.plot(fx, fft(A*conv(B,G)).real); pylab.legend(["F[A(B*G)]"]);
pylab.xticks(fticks); mark_range("yB", -yB,yB); pylab.show();
Explanation: Note that to cover the entire number space with functions of finite support requires an infinite number of sub-grids and facets. However, in practice this does not really matter, as due to finite sampling we are effectively looking at infinite (repeating) equivalence classes of facet/sub-grids anyway. Due to linearity this does not change the argument, so we will use the simpler representation.
The recombination problem
Clearly we have:
\begin{align}
S_i = A_iG &= A_i \left( \left( \sum_j B_j \right) \ast G \right) \
&= A_i \sum_j \left( B_j \ast G \right) =\sum_j A_i F_j \
F_j = B_j \ast G &= B_j \ast \left( \left( \sum_i A_i \right) G \right) \
&= B_j \ast \left( \sum_i A_i G \right) = \sum_i \left( B_j \ast S_i \right)
\end{align}
This follows purely from linearity. However, it is not particularly efficient for re-distributing data. While we know that $B_j \ast G$ is limited in image space, this is not true for $A_i (B_j \ast G)$ -- and on the other side of the coin, while $A_i G$ is limited in grid space, this is not the case for $B_j \ast A_i G$:
End of explanation
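# The exact -- but wasteful -- recombination sketched above, using the same
# illustrative 4-way tiling of grid masks as before: summing B*(A_i G) over a
# complete set of sub-grids reproduces the facet B*G up to rounding, yet every
# contribution still occupies the full N samples.
masks4 = [numpy.roll(pad_mid(numpy.ones(N // 4), N), k * (N // 4)) for k in range(4)]
facet_full = conv(B, G)
facet_sum = sum(conv(B, mask * G) for mask in masks4)
print(numpy.max(numpy.abs(facet_sum - facet_full)))  # rounding error only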
alpha = 0; xN = 25 / N; yN = yB
n = ifft(pad_mid(anti_aliasing_function(int(yB*2), alpha, 2*numpy.pi*yN*xN), N)).real
pylab.semilogy(x, numpy.abs(n)); pylab.legend(["n"]);
pylab.xticks(ticks); mark_range("$x_n$", -xN,xN); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(n))); pylab.legend(["F[n]"]);
pylab.xticks(fticks); mark_range("$y_n=y_B$", -yN,yN); pylab.show();
Explanation: After all, $A_i (B_j \ast G) \ne B_j \ast A_i G$. This has direct impact on how compact we can represent these: As we only know $A_i F_j$ to be limited in grid space, we cannot reduce the sampling rate, and as $B_j \ast S_i$ is only limited in image space we cannot reduce the grid size.
Therefore in order to reconstruct $S_i$ from $k$ facets we need to collect $k$ grids the same size as $S_i$! This might not sound too bad, but remember that to reconstruct all $l$ sub-grids we have $l \times k$ dependencies between facets and sub-grids to cover. So if we can only reduce the amount to transfer by a factor of $O(k)$ or $O(l)$, that still means that the amount of data to exchange scales linearly with either the number of facets or the number of sub-grids.
Facet recombination
What can we do? The masks $A_i$ and $B_j$ we have chosen leave us with little wiggle room, so let us split them into sub-masks $A_i=a_im_i$ and $B_j = b_j\ast n_j$ with $a, m$ and $b, n$ being bounded in grid and image space respectively. Now let us attempt to establish:
$$B_j \ast A_iG = b_j \ast n_j \ast m_i a_i G \approx b_j \ast m_i ( n_j \ast a_i G)$$
If we ignore $b_j$ for a moment this means that we want:
$$n_j \ast m_i a_i G \approx m_i (n_j \ast a_i G)$$
Let us assume that $m_i$ is a top-hat function mask and has a greater support than $A_i$, so $a_i m_i = a_i = A_i$. Then it follows:
\begin{align}
\Leftrightarrow n_j \ast a_i G &\approx m_i (n_j \ast a_i G) \
\Leftrightarrow (m_i - 1) (n_j \ast a_i G) &\approx 0
\end{align}
By choosing $m$ as a top hat function, we now have that $m_i - 1 = 0$ for all $|x-x_0| < x_m$. This means we need:
\begin{align}
\Leftarrow \forall |x-x_0| \ge x_m: n_j \ast a_i G &\approx 0
\end{align}
We know that $a_iG$ is truncated at $x_A$, therefore what we need is some $x_n$ such that $n_j(x)\approx 0$ for all $|x-x_0| < x_n$. Note that we still assume $n_j$ to be truncated in image space, so it is impossible for $n_j$ to fall to zero entirely. However, if we get close enough to zero we should be able to make the original approximation work as long as $x_M \ge x_A + x_n$.
The convolution
So at this point everything hinges on us identifying a function $n$ that has finite support in image (frequency) space and falls very close to zero in grid (time) space. There are a number of options here, one of the ad-hoc choices being prolate spheroidal wave functions. Let us construct one for $y_n = y_B$:
End of explanation
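# How concentrated is the PSWF we just constructed? (n, x and x_N from the cell
# above assumed.) Essentially all of its grid-space energy sits inside |x| < x_N,
# even though its image-space support was cut hard at y_N.
inside = numpy.abs(x) <= xN
print(1 - numpy.sum(n[inside]**2) / numpy.sum(n**2))  # tiny energy leakage outside x_N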
xM = xA + xN
m = numpy.where(numpy.abs(x-x0) <= xM, 1, 0)
ideal = conv(n, A*G)
approx = m * conv(n, A*G)
error = (1-m) * conv(n, A*G)
pylab.plot(x, ideal.real, x, approx.real); pylab.legend(["n*AG", "m(n*AG)"]);
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
pylab.semilogy(x, numpy.abs(error)); pylab.legend(["(1-m)(n*AG)"]);
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(error))); pylab.legend(["F[(1-m)(n*AG)]"]);
pylab.xticks(fticks); mark_range("$y_n=y_B$", -yN,yN); pylab.show();
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error))**2)),")")
Explanation: That is pretty close to what we are looking for: In image space the function has strictly finite support by construction, while in grid space it falls almost completely to zero outside $x_n$, leaving us with relatively low errors. We can reduce those further by increasing either $x_n$ or the sampling rate.
Let us evaluate the error terms for this function:
End of explanation
b = ifft(pad_mid(1/anti_aliasing_function(int(yB*2), alpha, 2*numpy.pi*yN*xN), N)).real
pylab.plot(x, b); pylab.legend(["b"]); pylab.xticks(ticks); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(b)), fx, numpy.abs(fft(conv(b,n)))); pylab.legend(["F[b]", "F[b*n]"]);
pylab.xticks(fticks); mark_range("$y_n=y_B$", -yN,yN); pylab.show();
Explanation: Close enough, this precision is more than enough for our purposes! However, we are not quite finished yet.
Correction for convolution
Our original goal was to construct $B_j \ast A_i G$. At this point we have a good approximation for $n_j \ast A_i G$. So it might look like all we need is the truncated inverse of the wave function $b_j$ such that $b_j \ast n_j = B_j$:
End of explanation
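# Numerical check of the correction property we are after (b, n, fx, yB from
# above): away from the band edges, F[b*n] is one inside |y| < y_B and zero
# outside, i.e. b*n behaves like the ideal facet mask B.
Fbn = fft(conv(b, n))
print(numpy.max(numpy.abs(Fbn[numpy.abs(fx) <= yB - 1] - 1)))  # ~ rounding
print(numpy.max(numpy.abs(Fbn[numpy.abs(fx) >= yB + 1])))      # ~ rounding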
ideal = conv(B, A*G)
approx = conv(b, m * conv(n, A*G))
error = approx - ideal
pylab.plot(x,ideal.real, x, approx.real); pylab.legend(["B*AG", "b*m(n*AG)"]);
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
Explanation: The function $b$ looks pretty wild in grid space, with a lot of high frequency. But that is to be expected to some degree, after all it "undoes" the well-behavedness of $n$. The second diagram shows that we indeed have $b_j\ast n_j = B_j$ as intended.
Now let us attempt to go after the grand prize, $b_j \ast m_i (n_j \ast a_i G)$:
End of explanation
pylab.semilogy(x, numpy.abs(error)); pylab.legend(["b*(1-m)(n*AG)"]);
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(error))); pylab.legend(["F[b*(1-m)(n*AG)]"]);
pylab.xticks(fticks); mark_range("$y_n=y_B$", -yN,yN); pylab.show();
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error))**2)),")")
Explanation: Looks okay-ish at first glance. However, let us measure the errors objectively:
End of explanation
alpha = 0; yN = yB + N * 0.02
pswf = anti_aliasing_function(int(yN*2), alpha, 2*numpy.pi*yN*xN)
n = ifft(pad_mid(pswf, N)).real
b = ifft(pad_mid(1/extract_mid(pswf, int(yB*2)), N)).real
error1 = (1-m) * conv(n, A*G)
error2 = conv(b, (1-m) * conv(n, A*G))
pylab.semilogy(fx, numpy.abs(fft(n)), fx, numpy.abs(fft(b)), fx, numpy.abs(fft(conv(b,n))));
pylab.legend(["F[n]","F[b]", "F[b*n]"]);
pylab.xticks(fticks); mark_range("$y_B$", -yB,yB); mark_range("$y_n$", -yN,yN); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(error1)), fx, numpy.abs(fft(error2))); pylab.legend(["F[(1-m)(n*AG)]", "F[b*(1-m)(n*AG)]"]);
pylab.xticks(fticks); mark_range("$y_B$", -yB,yB); mark_range("$y_n$", -yN,yN); pylab.show();
print("RMSE (w/o b):", numpy.sqrt(numpy.mean(numpy.abs(error1)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error1))**2)),")")
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error2)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error2))**2)),")")
Explanation: A lot worse than we had hoped!
It is not too hard to trace the source of the errors back: Note that in image space the error term $(1-m)(n*AG)$ had a distinct peak at $\pm y_n$ (by about $10^2$), which is where the Fourier transform of $b$ also reaches a value about $10^5$ higher than at the lowest point (the high frequencies we noted before!).
Therefore it should be no surprise that we end up with an error that is a whopping $10^7$ higher at the edges of the facet. This error "spreads out" a bit more in grid space just because of the randomness of $G$, but we clearly have quite nasty worst case errors now.
Damage control
This is a problem inherent to prolate spheroidal wave functions - and likely all functions of the type we might select for $n$ and therefore $b$. However, note that we are constructing
$$b_j \ast m_i (n_j \ast a_i G) \approx B_j \ast A_i G$$
and simply chose $y_n = y_B$. Clearly the support of $b$ must be the same as $B$, but we can easily make the support of $n$ larger as long as we maintain $b_j\ast n_j = B_j$. As we have just seen, the inverse wave function falls off very quickly away from the image edges, so let us simply truncate those parts away. That way, we will disconnect from the error peak of $(1-m_i) (n_j \ast G)$ (which will remain at $y_n$ and subsequentially get filtered out due to $y_b<y_n$) and substantially reduce the magnification of error due to $b$:
End of explanation
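# How much did widening y_n help? The dynamic range of F[b] inside the facet band
# is what magnifies the error peak, and it is now far below the ~1e5 we saw with
# y_n = y_B (b, fx, yB from the cell above).
Fb_abs = numpy.abs(fft(b))
band = numpy.abs(fx) <= yB - 1
print(numpy.max(Fb_abs[band]) / numpy.min(Fb_abs[band]))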
selM = (numpy.abs(x-x0) <= xM + 1e-13)
def pad_by_sel(sel, x): xp = numpy.zeros(len(sel), dtype=x.dtype); xp[sel] = x; return xp
ideal = conv(n, A*G)
approx = m * conv(n, A*G)
approx_r = pad_by_sel(selM, conv(n[numpy.abs(x) <= xM], (A*G)[selM]))
error1 = approx - ideal
error1_r = approx_r - ideal
pylab.plot(x[selM], ideal[selM].real, x[selM], approx_r[selM].real); pylab.legend(["n*AG", "[r] m(n*AG)"]);
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
pylab.semilogy(fx[selM], numpy.abs(error1_r[selM])); pylab.legend(["[r] (1-m)(n*AG)"]); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(error1)), fx, numpy.abs(fft(error1_r))); pylab.legend(["F[(1-m)(n*AG)]", "[r] F[(1-m)(n*AG)]"]);
mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB); pylab.show();
print("RMSE (w/o b):", numpy.sqrt(numpy.mean(numpy.abs(error1)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error1))**2)),")")
print("RMSE (w/o b, reduced):", numpy.sqrt(numpy.mean(numpy.abs(error1_r)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error1_r))**2)),")")
Explanation: Note that the base accuracy performance of the wave function has improved a bit simply because we have increased $y_n$ slightly. However, the important part is that we can make much better use of the results: Now we only lose an order of magnitude of error due to the convolution with $b$!
Downsampling
The point of this effort was to reduce the storage size of intermediate data involved in reconstructing a facet from all subgrids or a subgrid from all facets respectively. We have found a number of ways we can "compress" the signal, but have we achieved our purpose?
Let us again have a look at the approximation formula:
$$b_j \ast m_i (n_j \ast a_i G) \approx B_j \ast A_i G$$
We showed that $n_j \ast a_i G \approx m_i (n_j \ast a_i G)$, therefore this term is approximately bounded in grid space ($x_m$) as well as image space ($y_n$).
End of explanation
fx_r = N * coordinates(len(approx_r[selM]))
selN = numpy.abs(fx_r) <= yN
approx_core = ifft(fft(approx_r[selM])[selN])
approx_r2 = pad_by_sel(selM, ifft(pad_by_sel(selN, fft(approx_core))))
error1_r2 = ideal - approx_r2
pylab.semilogy(fx, numpy.abs(fft(ideal)),fx_r[selN], numpy.abs(fft(approx_core)));
mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB);
pylab.legend(["F[n*AG]", "[r²] F[m(n*AG)]"]); pylab.show();
pylab.plot(x[selM], ideal[selM].real, x[selM], approx_r2[selM].real); pylab.legend(["n*AG", "[r²] m(n*AG)"]);
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
Explanation: Reducing the grid-size / image sampling rate has clearly not impacted the precision too much. Note that while the $(1-m)(n*AG)$ error term should be zero in the $x_m$ grid region, with the reduced size the errors get aliased back onto the subgrid. For our parameters this happens to cause an error "well" in the centre of the sub-grid.
However, limiting in grid space is only half the story. As established at the start we not only want to reduce our grid size down to $x_m$, but want to also get rid of image data beyond $y_n$:
End of explanation
pylab.semilogy(x[selM], numpy.abs(error1_r[selM]), x[selM], numpy.abs(error1_r2[selM]));
pylab.legend(["[r] (1-m)(n*AG)", "[r²] (1-m)(n*AG)"]); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(error1)), fx, numpy.abs(fft(error1_r)), fx, numpy.abs(fft(error1_r2)));
pylab.legend(["F[(1-m)(n*AG)]", "[r] F[(1-m)(n*AG)]", "[r²] F[(1-m)(n*AG)]"]);
mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB); pylab.show();
print("RMSE (w/o b):", numpy.sqrt(numpy.mean(numpy.abs(error1)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error1))**2)),")")
print("RMSE (w/o b, reduced):", numpy.sqrt(numpy.mean(numpy.abs(error1_r)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error1_r))**2)),")")
print("RMSE (w/o b, twice reduced):", numpy.sqrt(numpy.mean(numpy.abs(error1_r2)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error1_r2))**2)),")")
Explanation: As intended, $n_j\ast a_iG$ drops to zero past $y_N$. This is clearly true even after downsampling, so we can truncate the signal in image space as well. This does not impact signal reconstruction in any visible way.
End of explanation
pylab.semilogy(x, numpy.abs(conv(b,error1_r)), x, numpy.abs(conv(b,error1_r2)));
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM)
pylab.legend(["[r] b*(1-m)(n*AG)", "[r²] b*(1-m)(n*AG)"]); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(conv(b,error1))), fx, numpy.abs(fft(conv(b,error1_r))), fx, numpy.abs(fft(conv(b,error1_r2))));
pylab.legend(["F[b*(1-m)(n*AG)]", "[r] F[b*(1-m)(n*AG)]", "[r²] F[b*(1-m)(n*AG)]"]);
mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB); pylab.show();
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(conv(b,error1))**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(conv(b,error1)))**2)),")")
print("RMSE (reduced):", numpy.sqrt(numpy.mean(numpy.abs(conv(b,error1_r))**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(conv(b,error1_r)))**2)),")")
print("RMSE (twice reduced):", numpy.sqrt(numpy.mean(numpy.abs(conv(b,error1_r2))**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(conv(b,error1_r2)))**2)),")")
Explanation: The errors have not changed too much even though we have now significantly reduced the number of samples in both grid and image space. The downsampling operation eliminated the incidental "well" in grid space, but $n_j\ast a_iG \approx m_i (n_j \ast a_i G)$ holds just as it did before. This is what we were after!
For the final step, let's convolve with $b_j$. Notice that:
$$F_j = \sum_i \left( B_j \ast A_i G \right)
\approx \sum_i b_j \ast m_i (n_j \ast a_i G)
= b_j \ast \sum_i m_i (n_j \ast a_i G)$$
So we only need to do this once per facet, and can therefore easily work at full facet resolution. This doesn't matter for data transfers, but it is good to know that this can also be done efficiently from a computational point of view.
End of explanation
Gp = conv(b, G)
ideal = A * conv(n, Gp)
approx = A * conv(n, m * Gp)
error = A * conv(n, (1-m) * Gp)
pylab.plot(x, ideal.real, x, approx.real); pylab.legend(["A(n*G')", "A(n*mG')"]);
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
pylab.semilogy(x, numpy.abs(error)); pylab.legend(["A(n*(1-m)G')"]); #pylab.ylim(1e-12,1e-7)
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
pylab.semilogy(fx, numpy.abs(fft(error))); pylab.legend(["F[n*(1-m)G']"]);
pylab.xticks(fticks); mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB); pylab.show();
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error)**2)), "(image:",numpy.sqrt(numpy.mean(numpy.abs(fft(error))**2)),")")
Explanation: And we made it out the other end. Seems the approximation is indeed robust against truncation and down-sampling as intended.
Sub-grids recombination
So we can obtain facet data from sub-grids effectively. But does this also work in the "opposite" direction, if we want to construct a sub-grid from facets?
\begin{align}
S_i = A_iG &= A_i \left( \left( \sum_j B_j \right) \ast G \right) = \sum_j A_i \left( B_j \ast G \right) \
\end{align}
Let us split $A_i$ and $B_j$ as we did before. Now what we would like to establish is that:
$$A_i (B_j \ast G) = a_i m_i (n_j \ast b_j \ast G) \approx a_i \left(n_j \ast m_i (b_j \ast G)\right)$$
If we set $G' = b_j \ast G$ for the moment and use $a_im_i = a_i = A_i$, what we need to show is:
\begin{align}
a_i (n_j \ast G') &\approx a_i (n_j \ast m_i G') \
\Leftrightarrow a_i (n_j \ast (1 - m_i) G') &\approx 0 \
\Leftarrow \forall |x-x_0| \le x_a: n_j \ast (1 - m_i) G' &\approx 0 \
\Leftarrow \forall |x-x_0| \le x_a + x_n: (1 - m_i) G' &\approx 0
\end{align}
Which should be the case as long as $x_m \ge x_a + x_n$, as established before.
End of explanation
approx_r = conv(n[numpy.abs(x) <= xM], Gp[selM])
pylab.plot(x, conv(n, Gp).real, x, conv(n, m * Gp).real, x[selM], approx_r.real);
pylab.legend(["n*G'", "n*mG'", "[r]n*mG'"]); pylab.ylim(-1,1)
pylab.xticks(ticks); mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM); pylab.show();
Explanation: This is entirely dual to the original method -- we are doing everything backwards. Now $G$ needs to get convolved with $b_j$ right away, and we end with the $A_i$ mask cutting away a bit of grid space.
It is instructive to have a closer look at the function of this mask operation, especially considering the reduced case:
End of explanation
print("Samples: %d (%d*%d=%d)" % (N, 1, N, N))
print("Samples (reduced): %d (%.2f*%d=%d, limit %.2f*%d=%d)" % (numpy.sum(selM), 2*xM,N,2*xM*N, 2*xA,N,2*xA*N, ))
print("Samples (twice reduced): %d (%.2f*%d=%d, limit %.2f*%d=%d)" %
(numpy.sum(selN), 2*xM,2*yN,4*xM*yN, 2*xA,2*yB,4*xA*yB))
Explanation: Reproduction of the original signal quickly deteriorates beyond the $x_A$ region. Especially note how $n\ast mG'$ tends to jump around violently beyond $x_A$. This is even worse for the reduced case, where the error aliases around the $x_m$ edges.
Data sizes and scaling
Let us have a closer look and check exactly how effective this has been. If $A (B \ast G) = B \ast (A G)$ were true, we would expect that we would be able to represent the dependencies between a facet and a subgrid with about $\lceil 4x_Ay_B \rceil$ samples.
Our method only allows truncating to $x_m$ in grid and $y_n$ in image space, therefore we need $\lceil 4x_my_n \rceil \approx \lceil 4(x_A+x_n)y_n \rceil$ samples:
End of explanation
def overhead_approx(yB, yN, xA, xN):
return numpy.ceil(4*(xA+xN)*yN) / numpy.ceil(4*xA*yB)
def error_approx(yB, yN, xN, alpha=0, dim=1, hexagon=False):
# gridding error
assert yB < yN
pswf = anti_aliasing_function(int(yN)*2, alpha, 2*numpy.pi*yN*xN)
pswf /= numpy.prod(numpy.arange(2*alpha-1,0,-2, dtype=float)) # double factorial
grid_error = numpy.abs(numpy.sum(pswf[::2] - pswf[1::2]))
# correction error
b_error = numpy.abs(pswf[int(yN) + int(yB)])
if dim >= 2 and hexagon:
b_error *= numpy.abs(pswf[int(yN) + int(yB/2)])**(dim-1)
else:
b_error **= dim
return numpy.abs(grid_error) / (2*xM) / b_error
print("Predicted error:", error_approx(yB, yN, xN))
print("Observed worst case:", numpy.max(numpy.abs(fft(conv(b,error1_r2)))))
Explanation: As expected, modulo some rounding of pixels at the edges. We are not actually that far away from the limits given by information theory. However, this is clearly for a very small data size, so what would happen if we started doing this for larger grids? Would we be able to retain the reached accuracy and efficiency?
To do this quickly, let us attempt to predict how much error we expect from our method. As we have seen before, the error of
$$b_j \ast m_i (n_j \ast a_i G)$$
consists of:
The error term $(1 - m_i) (n_j \ast a_i G)$, which for $x_m \ge x_A + x_n$ should be roughly equal to the PSWF grid values outside the $x_n$ region. This is mainly limited by the sampling of the wave function in image space, so we can approximate it by calculating the maximum-frequency value ($1/y_n$) in grid space.
We also have to note that this error gets "concentrated" more the smaller the sub-grid is. We should therefore divide by $2x_M$.
The $b_j$ convolution, which might magnify the error by the dynamic range of $\mathcal F b_j$. As $\mathcal F b_j(0) = 1$ for the PSWF, this is given by $\mathcal F n_j(y_B)$.
End of explanation
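# Example use of the two estimates defined above (hypothetical x_n choices; yB,
# yN and xA as currently set): a larger grid margin buys accuracy at the price of
# a larger transfer overhead -- the trade-off the following plots explore.
for xN_try in [5 / N, 10 / N, 25 / N]:
    print("xN=%.4f: predicted error %.2e, overhead %.2f" %
          (xN_try, error_approx(yB, yN, xN_try), overhead_approx(yB, yN, xA, xN_try)))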
yNs = 2**numpy.arange(5,20); yBs = yNs*yB/yN; xNs = xN*yN/yNs
errs_approx = numpy.vectorize(error_approx)(yBs, yNs, xNs)
pylab.loglog(yBs, errs_approx); pylab.xlabel('$y_B$'); pylab.show()
Explanation: This is clearly quite pessimistic even when comparing against the worst case, but it gives us a decent starting point.
Let us assume that we increase the facet size ($y_B$). To make sure that we compare like with like, let us attempt to keep $\mathcal F n_j(y_B)$ constant. This means that we need:
$y_B / y_n$ constant so we sample the same point. This effectively means that the image margin stays constant relative to the size of the image.
$y_n x_n$ constant so the wave function parameter doesn't change. This effectively means that the grid margin stays constant in terms of the grid resolution.
This error approximation steadily improves from 100 to 100,000 (we increase size in factor-of-two steps to reduce jitter):
End of explanation
xAs = numpy.hstack([numpy.arange(0.01,0.05,0.01), numpy.arange(0.05,0.2,0.05)])
for xA_ in xAs:
pylab.semilogx(yNs, numpy.vectorize(overhead_approx)(yNs*0.75, yNs, xA_, xN*yN/yNs));
pylab.legend(["xA=%.2f" % xA for xA in xAs]); pylab.show()
Explanation: At the same time, the "overhead" of our solution decreases slowly depending on the sub-grid size $x_A$, eventually approximating $y_N/y_B$. This is of course because the constant grid margin eventually stops mattering:
End of explanation
ov = 1.5
pylab.rcParams['figure.figsize'] = 16, 4
for yN_ in [ 256, 2048]:
xNyNs = numpy.hstack([numpy.arange(0.125, 1, 0.125), numpy.arange(1, 5, 1), numpy.arange(5, 6, 0.125), numpy.arange(6,7,0.25)])
xNs = xNyNs / yN_
for xA_ in xAs:
yBs = (yN_ + xNyNs / xA_) / ov
sel = yBs < yN_
if numpy.sum(sel) == 0: continue
errs_approx = numpy.vectorize(error_approx)(yBs[sel], yN_, xNs[sel], dim=2)
pylab.semilogy(xNyNs[sel], errs_approx)
pylab.ylim(pylab.gca().get_ylim()[0], 1); pylab.xlabel("$x_ny_n$")
pylab.legend(["$x_A=%.2f$" % xA for xA in xAs]); pylab.title("$o=%.1f, y_N=%.1f$" % (ov, yN_)); pylab.show()
Explanation: So how much accuracy can we get for a certain overhead, optimally? If we settle on a certain overhead value $o$, we can calculate $y_n$ from a value of $x_ny_n$:
$$o = \frac{4(x_A+x_n)y_n}{4x_Ay_B} \quad\Rightarrow\quad y_n = y_Bo - \frac{x_ny_n}{x_A}
\quad\text{, or: }
y_B = \frac{y_n + \frac{x_ny_n}{x_A}}{o}
$$
End of explanation
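# The formula above spelled out for a few hypothetical x_n*y_n values, with the
# current y_N and x_A and an assumed overhead budget of o = 1.5:
o_budget = 1.5
for xNyN_try in [1.0, 2.0, 4.0]:
    print("xN*yN=%.1f -> need yB >= %.1f (of yN=%.1f)" %
          (xNyN_try, (yN + xNyN_try / xA) / o_budget, yN))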
yP = yN + yB/2
IIIp_d = numpy.zeros(N)
for i in range(-N // int(yP) // 4+1, (N-1) // int(yP) // 4+1):
IIIp_d[N//2+2*int(yP)*i] = 1
pylab.plot(fx, IIIp_d, fx, fft(m).real, fx, fft(m).real - conv(fft(m), IIIp_d).real)
pylab.legend(["III", "Fm", "Fm-Fm*III"], loc=2)
pylab.ylim(-10,10)
mark_range("yN", -yN, yN); mark_range("yB", -yB, yB); mark_range("yP", -yN-yB, yN+yB); mark_range("2yP", -2*yP, 2*yP)
Explanation: Facet Split
So far we have been rather cavalier with using arrays larger than their "true" size. This simplifies things mathematically, yet we must remember that we don't actually have the $b_j \ast F_j$ available. Instead, we will handle an FFT grid, which is actually sampled at a finite rate - which we can represent as multiplying by a comb function:
$$(b_j \ast G) \operatorname{III}_{\frac1{2y_P}}$$
The choice of $y_P$ corresponds to the actual memory size of the facet for the purpose of this algorithm -- the finer we need to sample, the more memory we will have to use to hold the facet in our calculations. Clearly we don't lose any information as long as $y_P \ge y_B$, so we might be tempted to choose $y_P = y_B$. Yet that might not be enough, after all if we try to find $S_i$ in the obvious way we would like:
$$n_j \ast m_i (b_j \ast G) = n_j \ast m_i (b_j \ast G) \operatorname{III}_{\frac1{2y_P}}$$
However, this is not the case. To understand why, note that if we Fourier-transform both sides and re-order slightly, we get:
\begin{align}
\mathcal F n_j (\mathcal F m_i \ast \mathcal Fb_j \mathcal FG) &=
\mathcal F n_j (\operatorname{III}_{2y_P} \ast \mathcal F m_i \ast \mathcal Fb_j \mathcal FG ) \
\Leftrightarrow
\mathcal F n_j \left( ( \mathcal F m_i - \operatorname{III}_{2y_P} \ast \mathcal F m_i) \ast \mathcal Fb_j \mathcal FG \right) &= 0 \
\Leftarrow
\forall |y| < y_N: \left[ ( \mathcal F m_i - \operatorname{III}_{2y_P} \ast \mathcal F m_i) \ast \mathcal Fb_j \mathcal FG\right](y) &= 0 \
\Leftarrow
\forall |y| < y_N + y_B: \left[ \mathcal Fm_i - \operatorname{III}_{2y_P} \ast \mathcal F m_i\right](y) &= 0 \
\end{align}
Let us have a closer look:
End of explanation
xM = xA + 2 * xN
mp = conv(m, n).real
pylab.semilogy(x, numpy.maximum(1e-15, numpy.abs(m)));
pylab.semilogy(x, numpy.abs(mp));
pylab.ylim((1e-13, 1e1)); pylab.legend(["m", "m*n"])
mark_range("$x_0+x_A+2x_N$", x0-xM, x0+xM); mark_range("$x_0-x_A-x_N$", x0+xA+xN, x0-xA-xN);
Explanation: The problem here is that $\mathcal Fm$ never approaches zero - it is a sinc function that doesn't fall off very much at all. Therefore we have $\mathcal F m_i(y) \ne \left[\operatorname{III}_{2y_P} \ast \mathcal F m_i \right](y)$ no matter how much we restrict the $y$ region.
However, we can actually choose $m$: The two properties we need to conserve are that $m(x)=1$ for $|x| < x_A+x_N$, and that it falls to zero by $x_M$ in order to limit the amount of data we need to exchange between facets and subgrids. This means that if we reserve some space between $x_A+x_N$ and $x_M$, we can fill it in whatever way we want in order to make $m$ more manageable.
In fact we can simply utilise the same prolate spheroidal wave function we used before:
$$m' = m \ast n$$
We re-use the same function just for convenience, this way we don't have to involve a second PSWF function. In this case, we know this increases the support of $m'$ in grid space to $x_{m'}=x_A+2x_N$:
End of explanation
pylab.semilogy(fx, IIIp_d, fx, numpy.abs(fft(mp)), fx, numpy.abs(fft(mp) - conv(fft(mp), IIIp_d)))
mark_range("$y_N$", -yN, yN); mark_range("$y_B$", -yB, yB); mark_range("$y_P$", -yP, yP)
mark_range("$2y_P$", -2*yP, 2*yP); mark_range("$y_B+y_N$\n$=2y_P-y_N$", -yB-yN, yB+yN)
pylab.legend(["III", "Fm'", "Fm'-Fm'*III"], loc=2);
Explanation: However, now this also limits $m$ to $y_N$ in image space:
End of explanation
pylab.semilogy(fx, numpy.abs(conv(fft(b) * fft(G), fft(mp) - conv(fft(mp), IIIp_d))),
fx, numpy.abs(fft(n) * conv(fft(b) * fft(G), fft(mp) - conv(fft(mp), IIIp_d))))
mark_range("$y_N$", -yN, yN); mark_range("$y_B$", -yB, yB); mark_range("yP", -yP, yP)
pylab.legend(["$FbFG*(Fm-Fm*III)$", "$Fn(FbFG*(Fm-Fm*III$))"]);
Explanation: Good starting point. We just need to figure out how far we can reduce the image size (here $y_P$) without impacting our ability to reconstruct the image. What we need is:
\begin{align}
\mathcal F n_j (\mathcal F m'_i \ast \mathcal Fb_j \mathcal FG) &=
\mathcal F n_j (\operatorname{III}_{2y_P} \ast \mathcal F m'_i \ast \mathcal Fb_j \mathcal FG) \
\Leftarrow \forall |y| < y_N + y_B:
\left[ \mathcal F m_i' - \operatorname{III}_{2y_P} \ast \mathcal F m_i' \right](y) &= 0
\end{align}
As the image "loops" every $2y_P$ and the half-width of $\mathcal Fm'$ is $y_N$, this means:
$$2y_P - y_N \ge y_N + y_B \;\Rightarrow\; y_P \ge y_N + \frac12 y_B$$
With this sizing, the error relative to $\mathcal Fm_i\ast \mathcal Fb_j\mathcal FG$ is zero within $y_N$, and therefore close to zero entirely for $\mathcal Fn_j (\mathcal Fm_i\ast \mathcal Fb_j\mathcal FG)$:
End of explanation
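# Check of the sizing rule just derived, for the current parameters (mp, IIIp_d,
# fx, yN, yB from the cells above): with y_P = y_N + y_B/2 the aliased copies of
# F[m'] stay clear of the whole |y| < y_N + y_B region we need.
alias_err = fft(mp) - conv(fft(mp), IIIp_d)
print(numpy.max(numpy.abs(alias_err[numpy.abs(fx) <= yN + yB - 1])))  # effectively zero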
def red_2yP(xs): return extract_mid(xs, int(2*yP))
ref = fft(n) * conv(fft(b) * fft(G), fft(mp))
reduced = red_2yP(fft(n)) * conv(red_2yP(fft(b) * fft(G)), red_2yP(fft(mp)))
pylab.semilogy(red_2yP(fx), numpy.abs(reduced - red_2yP(ref)));
Explanation: This shows that by using the low-pass-filtered $m'$ we can indeed perform the entire computation using just $2y_P$ sample points:
End of explanation
nsubgrid = int(1 / (2*xA)); subgrid_size = int(N * 2*xA)
assert nsubgrid * subgrid_size == N
nfacet = int(N / (2*yB)); facet_size = int(2*yB)
assert nfacet * facet_size == N
print(nsubgrid,"subgrids,",nfacet,"facets")
subgrid = numpy.empty((nsubgrid, subgrid_size), dtype=complex)
facet = numpy.empty((nfacet, facet_size), dtype=complex)
for i in range(nsubgrid):
subgrid[i] = extract_mid(numpy.roll(G, -i * subgrid_size), subgrid_size)
FG = fft(G)
for j in range(nfacet):
facet[j] = extract_mid(numpy.roll(FG, -j * facet_size), facet_size)
Explanation: Note that we are only truncating terms here that we already know to fall to zero. However, the choice of $y_P$ is still quite subtle here, as we need padding to make sure that the convolutions work out in the correct way.
Actual implementation
So to review, what we have identified are two approximations:
\begin{align}
S_i &\approx
a_i \sum_j \left( n_j \ast m_i (b_j \ast F_j)\right) \
F_j &\approx b_j \ast \sum_i m_i (n_j \ast S_i)
\end{align}
Some quick observations: The slight asymmetry comes from the fact that $a_i = A_i$ is assumed to be a mask (and therefore $a_iS_i = S_i$), but $b_j \ne B_j$. Also note that $b_j$ always gets applied on the image side, whereas $n_j$ gets applied on the grid plane, therefore appearing in pairs if we consider the round-trip. This is correct, for the same underlying reason that lets us use the same type of kernels for gridding and degridding.
End of explanation
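# Rough bookkeeping for the parameters above (not part of the pipeline itself):
# each facet/sub-grid pair will be represented by about 4*x_M*y_N samples instead
# of a full N-sample grid contribution.
print("facets:", nfacet, "x", facet_size, " sub-grids:", nsubgrid, "x", subgrid_size)
print("per-pair contribution ~", int(4 * xM * yN), "samples (naive:", N, ")")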
yB_size = int(yB*2)
yN_size = int(yN*2)
# Find yP that allows us to align subgrid masks easily (brute force!)
yP_size = int(yP*2)+1
while numpy.abs(yP_size * 2*xA - int(yP_size * 2*xA)) >= 1e-13 or \
numpy.abs(yP_size * 2*xM - int(yP_size * 2*xM)) >= 1e-13:
yP_size+=1
xM_yP_size = int(2*xM*yP_size)
xMxN_yP_size = int(2*(xM+xN)*yP_size)
xMxN_yN_size = int(2*(xM+xN)*yN_size)
pswf = anti_aliasing_function(yN_size, alpha, 2*numpy.pi*yN*xN).real
Fb = 1/extract_mid(pswf, yB_size)
b = ifft(pad_mid(Fb, N))
n = ifft(pad_mid(pswf, N))
Fn = extract_mid(fft(n), yN_size)
FBjFj = numpy.empty((nfacet, yP_size), dtype=complex)
for j in range(nfacet):
FBjFj[j] = pad_mid(facet[j] * Fb, yP_size)
Explanation: So let us start by reconstructing sub-grids from facets. First step is to convolve in $b_j$, derived from the PSWF. This is a cheap multiplication in image space. We then pad to $2y_P$ which yields us
$$\operatorname{III}_{2y_P} \ast \mathcal Fb_j \mathcal FG$$
in FFT convention.
End of explanation
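# Sanity check for the centre facet j=0 (b, G, FBjFj and helpers from above): the
# padded array holds exactly the y_B band of F[b]F[G] it is meant to represent.
band_check = extract_mid(fft(b) * fft(G), yB_size)
print(numpy.max(numpy.abs(extract_mid(FBjFj[0], yB_size) - band_check)))  # ~ rounding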
facet_m0 = conv(n, pad_mid(numpy.ones(int(N*xM*2)), N))
facet_m0_trunc = ifft(extract_mid(fft(facet_m0), yP_size))
pylab.semilogy(coordinates(yP_size), numpy.abs(facet_m0_trunc));
mark_range("$x_M$", xM, -xM); mark_range("$x_M+x_N$", -xM-xN, xM+xN);
Explanation: Next we need to cut out the appropriate sub-grid for $m_i$ for the subgrid we want to construct (here: the centre subgrid). As explained above, to do this with a facet that has only been padded to $y_P$ we need to consider the truncated $\mathcal F m_i$:
$$\mathcal F^{-1}[(\Pi_{2y_P} \mathcal F m_i) \ast \operatorname{III}_{2y_P} \ast \mathcal Fb_j \mathcal FG]
= \mathcal F^{-1}[\Pi_{2y_P} \mathcal F m_i]\, \mathcal F^{-1}[ \operatorname{III}_{2y_P} \ast \mathcal Fb_j \mathcal FG]
$$
First let us construct the $m_i' = \mathcal F^{-1}[\Pi_{2y_P} \mathcal F m_i]$ terms:
End of explanation
MiBjFj = numpy.empty((nsubgrid, nfacet, yP_size), dtype=complex)
assert numpy.abs(yP_size * 2*xA - int(yP_size * 2*xA)) < 1e-13
for i in range(nsubgrid):
for j in range(nfacet):
MiBjFj[i,j] = facet_m0_trunc * numpy.roll(ifft(FBjFj[j]), -i*int(yP_size * 2*xA))
Explanation: Note that we would clearly not construct them separately for a real pipeline, as they are simply shifted. Due to the truncation in frequency space these are not quite top-hat functions any more. These terms now get used to extract the sub-grid data from each facet:
End of explanation
Fn = extract_mid(fft(n), yN_size)
NjMiBjFj = numpy.empty((nsubgrid, nfacet, yN_size), dtype=complex)
for i in range(nsubgrid):
for j in range(nfacet):
NjMiBjFj[i,j] = Fn * extract_mid(fft(MiBjFj[i,j]), yN_size) * yP_size / N
Explanation: Next step is to multiply in $n_j$ in order to un-do the effects of $b$ and cut out the garbage between $y_N$ and $y_P$. This means we arrive at:
$$\mathcal F\left[ n_j \ast m'_i \left( (b_j \ast G) \operatorname{III}_{\frac1{2y_P}} \right) \right] =
\mathcal F n_j \, \mathcal F\left[ m'_i \left( (b_j \ast G) \operatorname{III}_{\frac1{2y_P}} \right) \right]$$
Which we truncate further to $y_N$, as we are now done with convolutions in image space.
End of explanation
fig = pylab.figure(figsize=(16, 8))
ax1, ax2 = fig.add_subplot(211), fig.add_subplot(212)
for i in range(nsubgrid):
Gr = numpy.roll(G, -i * subgrid_size)
for j in range(nfacet):
Grr = ifft(numpy.roll(fft(Gr), -j * facet_size))
ax1.semilogy(yN_size*coordinates(yN_size),
numpy.abs(extract_mid(fft(conv(n, facet_m0 * conv(b, Grr))), yN_size)
- NjMiBjFj[i,j]))
ax2.semilogy(x, numpy.abs(ifft(pad_mid(NjMiBjFj[i,j], N))));
ax1.set_title("Error compared with $n_j * m_i(B_j * G)$")
mark_range("yN", -yN, yN,ax=ax1); mark_range("yB", -yB, yB,ax=ax1);
ax2.set_title("Signal left in grid space")
mark_range("xA", -xA, xA, ax=ax2); mark_range("xM", -xM, xM, ax=ax2); mark_range("xN+xN", -xM-xN, xM+xN, ax=ax2)
Explanation: Quick mid-point accuracy check against the approximation formula using full resolution. We should be looking at only rounding errors.
Furthermore, when padded to the full density the Fourier transform should fall to the error level provided by the PSWF past $x_N+x_M$ - indicating that we can reduce the sampling rate / grid size without losing information:
End of explanation
assert(numpy.abs(int(yN * xM) - yN * xM) < 1e-13)
assert(numpy.abs(int(1/2/xM) - 1/2/xM) < 1e-13)
xM_yN_size = int(xM*2*yN*2)
RNjMiBjFj = numpy.empty((nsubgrid, nfacet, xM_yN_size), dtype=complex)
RNjMiBjFj[:,:] = NjMiBjFj[:,:,::int(1/2/xM)]
print("Split", nfacet,"*",facet_size, "=", N, "data points into",
nfacet,'*',nsubgrid,'*',xM_yN_size,'=',nsubgrid*nfacet*xM_yN_size, ", overhead", nsubgrid*xM_yN_size/facet_size-1)
for i,j in itertools.product(range(nsubgrid), range(nfacet)):
Grr = ifft(numpy.roll(fft(numpy.roll(G, -i * subgrid_size)), -j * facet_size))
pylab.semilogy(xM*2*coordinates(int(xM*2*N)),
numpy.abs( extract_mid(facet_m0 * conv(conv(n,b), Grr), int(xM*2*N)) -
ifft(pad_mid(RNjMiBjFj[i,j], int(xM*2*N)))))
pylab.title("Error compared with $m_i (B_j * G)$")
mark_range("xA", -xA, xA); mark_range("xM", -xM, xM)
Explanation: So our final step is to reduce the sampling rate (sub-grid size) to $x_M$. This is the step where we actually introduce the bulk of our error, as the "tail" regions outside $x_M$ get aliased in. As established before, this especially copies the $x_M+x_N$ region inside, which doesn't hurt because we are only interested in the centre $x_A$ part.
As long as $x_M$ divides the grid well, this can simply be achieved by selecting every $\frac1{2x_M}$-th sample:
End of explanation
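# The decimation step relies on a standard aliasing identity -- keeping every
# k-th image-space sample wraps the grid-space signal onto a grid k times
# smaller. A self-contained sketch on a small hypothetical array (using the fft
# helpers above):
k = 4
sig = numpy.random.rand(32)
dec = fft(sig)[::k]                                             # decimate in image space
fold = numpy.fft.fftshift(sig.reshape(k, 32 // k).sum(axis=0))  # centred wrap in grid space
print(numpy.max(numpy.abs(ifft(dec) - fold)))                   # ~ rounding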
fig = pylab.figure(figsize=(16, 8))
ax1, ax2 = fig.add_subplot(211), fig.add_subplot(212)
err_sum = 0
for i in range(nsubgrid):
approx = numpy.zeros(int(xM*2*N), dtype=complex)
for j in range(nfacet):
approx += numpy.roll(pad_mid(RNjMiBjFj[i,j], int(xM*2*N)), j * int(xM*2*yB*2))
approx = extract_mid(ifft(approx), int(xA*2*N))
ax1.semilogy(xA*2*coordinates(subgrid_size), numpy.abs( approx - subgrid[i] ))
ax2.semilogy(N*coordinates(subgrid_size), numpy.abs( fft(approx - subgrid[i]) ))
err_sum += numpy.abs(approx - subgrid[i])**2
mark_range("xA", -xA, xA, ax=ax1); ax1.set_title("Error compared with $S_i = A_i G$")
mark_range("yB", -yB, yB, ax=ax2); mark_range("yN", -yN, yN, ax=ax2);
pylab.show()
print("MRSE:", numpy.sqrt(numpy.mean(err_sum)))
Explanation: At this point, all that is left is to put together the sum
$$S_i = a_i \sum_j \left( n_j \ast m_i (b_j \ast F_j)\right)$$
which eliminates $b_j$. Note that we know $A$ to be a pure mask, therefore this is simply truncation in grid space.
End of explanation
seed = numpy.random.randint(2**31)
@interact(N=(0,8192),
x0=(-0.5,0.5,0.1), xA=(0,0.5,0.01), xN=(0,0.5,0.01), xM=(0,0.5,0.01),
yN=(0,1024,25), yB=(0,1024,25), alpha=(0,20))
def test_it(N=N, x0=x0, xA=xA, xN=xN, xM=xM, yN=yN, yB=yB, alpha=alpha):
x = coordinates(N); fx = N * x
G = numpy.random.RandomState(seed).rand(N) - .5
A = numpy.roll(pad_mid(numpy.ones(int(2*xA*N)), N), int(x0*N))
m = numpy.roll(pad_mid(numpy.ones(int(2*xM*N)), N), int(x0*N))
selM = (m == 1)
selM0 = pad_mid(numpy.ones(int(2*xM*N),dtype=bool), N)
selN = pad_mid(numpy.ones(int(2*xM*2*yN),dtype=bool), numpy.sum(selM0))
pswf = anti_aliasing_function(int(yN*2), alpha, 2*numpy.pi*yN*xN)
pswf /= numpy.prod(numpy.arange(2*alpha-1,0,-2, dtype=float)) # double factorial
n = ifft(pad_mid(pswf, N)).real
m = conv(n,m).real
b = ifft(pad_mid(1/extract_mid(pswf, int(yB*2)), N)).real
Gp = conv(b, G)
ideal = A * conv(n, Gp)
approx = A * conv(n, m * Gp)
error = A * conv(n, (1-m) * Gp)
approx_r = A[selM] * conv(n[selM0], Gp[selM])
error_r = approx_r - ideal[selM]
approx_core = ifft(fft(n[selM0])[selN] * fft(Gp[selM])[selN])
approx_r2 = A[selM] * ifft(pad_by_sel(selN, fft(approx_core)))
error_r2 = approx_r2 - ideal[selM]
print("PSWF parameter:", 2*numpy.pi*xN*yN)
print("Worst error:", numpy.max(numpy.abs(error_r2)), "(image:", numpy.max(numpy.abs(fft(error))),
", predicted:", error_approx(yB, yN, xN, alpha=alpha),")")
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error)**2)), "(reduced:", numpy.sqrt(numpy.mean(numpy.abs(error_r)**2)),
", +downsample:", numpy.sqrt(numpy.mean(numpy.abs(error_r2)**2)), ")")
print("RMSE image:", numpy.sqrt(numpy.mean(numpy.abs(fft(error))**2)), "(reduced:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_r))**2)),
", +downsample:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_r2))**2)), ")")
print("Samples:", len(approx_core), "(",2*xM, "*", 2*yN,"=",4*xM*yN, ", overhead: %.2f)" % (xM*yN/xA/yB))
ticks = coordinates(10)
pylab.figure(figsize=(16, 18))
pylab.subplot(4,1,1); pylab.title("Input");
pylab.plot(x, A, x, conv(b,n).real, x, n, x, G, x, m); pylab.legend(["A", "B", "n", "G", "m"]); pylab.xticks(ticks);
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM)
pylab.subplot(4,1,2); pylab.title("Output")
pylab.plot(x, conv(n, Gp).real, x, conv(n, m * Gp).real, x[selM], conv(n[selM0], Gp[selM]).real);
pylab.legend(["n*(b*G)", "n*m(b*G)", "[r]n*m(b*G)"]); pylab.ylim((-0.6,0.6)); pylab.xticks(ticks)
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM);
pylab.subplot(4,1,3); pylab.title("Errors (Grid space)");
pylab.semilogy(x, numpy.abs(n), x, numpy.abs(conv(n, (1-m) * Gp)), x[selM], numpy.abs(error_r), x[selM], numpy.abs(error_r2));
pylab.legend(["n", "n*(1-m)(b*G)", "[r] A(n*(1-m)(b*G))", "[r+d] A(n*(1-m)(b*G))"]); pylab.xticks(ticks);
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_n$", -xN,xN)
pylab.subplot(4,1,4); pylab.title("Errors (Image space)")
pylab.semilogy(fx, numpy.abs(fft(n)), fx, numpy.abs(fft(b)), fx, numpy.abs(fft(conv(n, m * Gp))), fx, numpy.abs(fft(error)),
N*coordinates(len(error_r)), numpy.abs(fft(conv(n[selM0], Gp[selM]))),
N*coordinates(len(error_r)), numpy.abs(fft(error_r)));
pylab.legend(["F[n]", "F[b]", "F[n*m(b*G)]", "F[A(n*(1-m)(b*G))]", "[r] F[n*m(b*G)]", "[r] F[A(n*(1-m)(b*G))]"]);
mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB);
pylab.xticks(N*ticks); pylab.show()
seed = numpy.random.randint(2**31)
@interact(N=(0,8192),
x0=(-0.5,0.5,0.1), xA=(0,0.5,0.01), xN=(0,0.5,0.01), xM=(0,0.5,0.01),
yN=(0,1024,25), yB=(0,1024,25), alpha=(0,20))
def test_it(N=N, x0=x0, xA=xA, xN=xN, xM=xM, yN=yN, yB=yB, alpha=alpha):
x = coordinates(N); fx = N * x
G = numpy.random.RandomState(seed).rand(N) - .5
A = numpy.roll(pad_mid(numpy.ones(int(2*xA*N)), N), int(x0*N))
m = numpy.roll(pad_mid(numpy.ones(int(2*xM*N)), N), int(x0*N))
selM = (m == 1)
selM0 = pad_mid(numpy.ones(int(2*xM*N),dtype=bool), N)
selN = pad_mid(numpy.ones(int(2*xM*2*yN),dtype=bool), numpy.sum(selM0))
pswf = anti_aliasing_function(int(yN*2), alpha, 2*numpy.pi*yN*xN)
pswf /= numpy.prod(numpy.arange(2*alpha-1,0,-2, dtype=float)) # double factorial
n = ifft(pad_mid(pswf, N)).real
b = ifft(pad_mid(1/extract_mid(pswf, int(yB*2)), N)).real
m = conv(n,m).real
ideal = conv(n, A*G)
approx = m * conv(n, A*G)
error = (1-m) * conv(n, A*G)
error_b = conv(b, error)
approx_r = conv(n[selM0], (A*G)[selM])
error = ideal - approx
error_r = ideal[selM] - approx_r
approx_core = ifft(fft(approx_r)[selN])
approx_r2 = ifft(pad_by_sel(selN, fft(approx_core)))
error_r2 = ideal[selM] - approx_r2
error_rb = conv(b, pad_by_sel(selM, error_r))
error_r2b = conv(b, pad_by_sel(selM, error_r2))
print("PSWF parameter:", 2*numpy.pi*xN*yN)
print("Worst error:", numpy.max(numpy.abs(error_r2b)), "(image:", numpy.max(numpy.abs(fft(error_r2b))),
", predicted:", error_approx(yB, yN, xN, alpha=alpha),")")
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error_b)**2)), "(reduced:", numpy.sqrt(numpy.mean(numpy.abs(error_rb)**2)),
", +downsample:", numpy.sqrt(numpy.mean(numpy.abs(error_r2b)**2)), ")")
print("RMSE image:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_b))**2)), "(reduced:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_rb))**2)),
", +downsample:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_r2b))**2)), ")")
print("Samples:", len(approx_core), "(",2*xM, "*", 2*yN,"=",4*xM*yN, ", overhead: %.2f)" % (xM*yN/xA/yB))
# Input graph
ticks = coordinates(10)
pylab.figure(figsize=(16, 18))
pylab.subplot(4,1,1); pylab.title("Input");
pylab.plot(x, A, x, conv(b,n), x, n, x, G, x, m); pylab.legend(["A", "B", "n", "G", "m"]); pylab.xticks(ticks);
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM)
# Output graph
pylab.subplot(4,1,2); pylab.title("Output");
pylab.plot(x, conv(b, ideal), x, conv(b,approx), x, conv(b,pad_by_sel(selM, approx_r2)))
pylab.ylim((-0.5,0.5)); pylab.legend(["B*AG", "b*m(n*aG)", "[r+d] b*m(n*aG)"]);
mark_range("$x_A$", x0-xA,x0+xA); mark_range("$x_m$", x0-xM,x0+xM)
# Error graph (image space)
pylab.subplot(4,1,3); pylab.title("Errors (Grid space)");
pylab.semilogy(x, numpy.abs(n), x, numpy.abs(error), x[selM], numpy.abs(error_r), x[selM], numpy.abs(error_r2), x, numpy.abs(error_r2b));
mark_range("$x_n$", -xN,xN); mark_range("$x_m$", x0-xM,x0+xM)
pylab.legend(["n","(1-m)(n*AG)","[r] (1-m)(n*AG)","[r+d] (1-m)(n*AG)","[r+d] b*(1-m)(n*AG)"]);
pylab.xticks(ticks);
# Error graph (frequency space)
pylab.subplot(4,1,4); pylab.title("Errors (Image space)")
pylab.semilogy(fx, numpy.abs(fft(n)), fx, numpy.abs(fft(b)), fx, numpy.abs(fft(ideal)), fx, numpy.abs(fft(error)),
N*coordinates(len(error_r)), numpy.abs(fft(error_r2)), fx, numpy.abs(fft(error_r2b)));
mark_range("$y_n$", -yN,yN); mark_range("$y_B$", -yB,yB);
pylab.legend(["n", "b", "n*AG", "(1-m)(n*AG)", "[r+d] (1-m)(n*AG)", "[r+d] b*(1-m)(n*AG)"]);
pylab.xticks(N * ticks);
pylab.show()
Explanation: Note the pattern of the errors in image space - this is the position dependence of the accuracy pattern from the $b_j$ multiplication. As we stitch together 5 facets, this pattern repeats 5 times.
However, note how the "stitching" works out really well - accuracy is especially good around the $y_B$ areas where the data from two facets "overlaps". This is likely because we are effectively averaging over two samples here, which boosts our accuracy.
Interactive playground
Test different parameters:
End of explanation
seed = numpy.random.randint(2**31)
# Subgrid region marker, assuming size to be even
def grid_patch(size):
return patches.Rectangle((-(size+1)/theta/2, -(size+1)/theta/2), size/theta, size/theta, fill=False, linestyle="dashed")
def pad_by_sel_(shape, sel, x):
xp = numpy.zeros(shape, dtype=x.dtype); xp[sel] = x; return xp
@interact(N=(0,1024,128),
x0=(-0.5,0.5,0.1), y0=(-0.5,0.5,0.1), xA=(0,0.5,0.01), xN=(0,0.5,0.01), xM=(0,0.5,0.01),
yN=(0,1024,25), yB=(0,1024,25), alpha=(0,20))
def test_it(N=N, x0=x0, y0=-0.1, xA=xA, xN=xN, xM=xM, yN=yN, yB=yB, alpha=alpha, hexagon=False):
x,y = coordinates2(N)
G = numpy.random.RandomState(seed).rand(N, N) - .5
A = numpy.roll(pad_mid(numpy.ones((int(2*xA*N), int(2*xA*N))), N), (int(y0*N), int(x0*N)), (0,1))
m = numpy.roll(pad_mid(numpy.ones((int(2*xM*N), int(2*xM*N))), N), (int(y0*N), int(x0*N)), (0,1))
A = numpy.where((numpy.abs(x-x0) <= xA) & (numpy.abs(y-y0) <= xA), 1, 0)
m = numpy.where((numpy.abs(x-x0) <= xM) & (numpy.abs(y-y0) <= xM), 1, 0)
selM_1 = pad_mid(numpy.ones(int(2*xM*N), dtype=bool), N)
selM = numpy.ix_(numpy.roll(selM_1, int(y0*N)), numpy.roll(selM_1, int(x0*N)))
selM0 = numpy.ix_(selM_1, selM_1)
selN_1 = pad_mid(numpy.ones(int(2*xM*2*yN),dtype=bool), numpy.sum(selM_1))
selN = numpy.ix_(selN_1, selN_1)
pswf = anti_aliasing_function((int(yN*2),int(yN*2)), alpha, 2*numpy.pi*yN*xN)
pswf /= numpy.prod(numpy.arange(2*alpha-1,0,-2, dtype=float)) # double factorial
n = ifft(pad_mid(pswf, N)).real
b = pad_mid(1/extract_mid(pswf, int(yB*2)), N)
if hexagon: b[numpy.where(numpy.abs(N*x) > yB - numpy.abs(N*y) / 2)] = 0
b = ifft(b).real
m = conv(n,m).real
ideal = conv(n, A*G)
approx = m * conv(n, A*G)
error = (1-m) * conv(n, A*G)
error_b = conv(b, error)
approx_r = conv(n[selM0], (A*G)[selM])
error = ideal - approx
error_r = ideal[selM] - approx_r
approx_core = ifft(fft(approx_r)[selN])
approx_r2 = ifft(pad_by_sel_(approx_r.shape, selN, fft(approx_core)))
error_r2 = ideal[selM] - approx_r2
error_rb = conv(b, pad_by_sel_(ideal.shape, selM, error_r))
error_r2b = conv(b, pad_by_sel_(ideal.shape, selM, error_r2))
print("PSWF parameter:", 2*numpy.pi*xN*yN)
print("Worst error:", numpy.max(numpy.abs(error_r2b)), "(image:", numpy.max(numpy.abs(fft(error_r2b))),
", predicted:", error_approx(yB, yN, xN, alpha=alpha, dim=2, hexagon=hexagon),")")
print("RMSE:", numpy.sqrt(numpy.mean(numpy.abs(error_b)**2)), "(reduced:", numpy.sqrt(numpy.mean(numpy.abs(error_rb)**2)),
", +downsample:", numpy.sqrt(numpy.mean(numpy.abs(error_r2b)**2)), ")")
print("RMSE image:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_b))**2)), "(reduced:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_rb))**2)),
", +downsample:", numpy.sqrt(numpy.mean(numpy.abs(fft(error_r2b))**2)), ")")
print("Samples:", approx_core.shape[0] * approx_core.shape[1], "(",2*xM, "² *", 2*yN,"² =",(4*xM*yN)**2, ", overhead: %.2f)" % (xM*yN/xA/yB)**2)
fig = pylab.figure(figsize=(16,4))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(n))) / numpy.log(10), "n", N, axes=fig.add_subplot(121))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(b))) / numpy.log(10), "b", N, axes=fig.add_subplot(122))
fig = pylab.figure(figsize=(16,4))
show_image(numpy.log(numpy.maximum(1e-20,numpy.abs(fft(n)))) / numpy.log(10), "F[n]", N, axes=fig.add_subplot(121))
show_image(numpy.log(numpy.maximum(1e-20,numpy.abs(fft(b)))) / numpy.log(10), "F[b]", N, axes=fig.add_subplot(122))
fig = pylab.figure(figsize=(16,4))
show_grid(numpy.abs(conv(b,ideal)), "b*n*AG", N, axes=fig.add_subplot(121))
show_grid(numpy.abs(conv(b,pad_by_sel_(ideal.shape, selM, approx_r2))), "[r+d] b*m(n*aG)", N, axes=fig.add_subplot(122))
fig = pylab.figure(figsize=(16,4))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(error))) / numpy.log(10), "log_10 (1-m)(n*AG)", N, axes=fig.add_subplot(131))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(pad_by_sel_(ideal.shape, selM, error_r)))) / numpy.log(10), "[r] log_10 (1-m)(n*AG)", N, axes=fig.add_subplot(132))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(pad_by_sel_(ideal.shape, selM, error_r2)))) / numpy.log(10), "[r+d] log_10 (1-m)(n*AG)", N, axes=fig.add_subplot(133))
fig = pylab.figure(figsize=(16,6))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(fft(error)))) / numpy.log(10), "log_10 F[(1-m)(n*AG)]", N, axes=fig.add_subplot(121))
show_grid(numpy.log(numpy.maximum(1e-20,numpy.abs(fft(error_r2b)))) / numpy.log(10), "[r+d] log_10 F[b*(1-m)(n*AG)]", N, axes=fig.add_subplot(122))
pylab.show()
#show_image(b, "b", 1)
return
xs,ys = coordinates2(len(pswf))
pylab.rcParams['figure.figsize'] = 20, 16
pylab.contour(xs,ys, numpy.log(1/numpy.outer(pswf, pswf)) / numpy.log(10), levels=numpy.arange(0,12))
r = 0.4
pylab.gca().add_patch(patches.Circle((0,0), radius=r, fill=False))
pylab.gca().add_patch(patches.Polygon(r*numpy.transpose([numpy.cos(numpy.arange(6)/6*2*numpy.pi), numpy.sin(numpy.arange(6)/6*2*numpy.pi)]), True, fill=False))
pylab.colorbar()
Explanation: 2D case
Clearly this can be generalised to arbitrarily high dimensions. However, note that this makes things worse in a few ways at the same time:
- Any overhead will be per image axis, so the 2D overhead is the square of the 1D overhead.
- As we would use the outer product of the PSWF, the worst-case error is also squared (i.e. the corners are quite a bit worse).
There is unfortunately no good way around this, as the only way to significantly reduce overhead is to throw less of the image away; however, this will increase exactly the error multiplier that we now have to square. As the corners are really the problem here, it might increase efficiency if we tile the plane into hexagons instead of squares...
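A minimal numerical sketch of those two points, using made-up illustrative values rather than results from an actual run:
overhead_1d = 1.25          # hypothetical per-axis sample overhead
edge_multiplier_1d = 10.0   # hypothetical 1D error growth factor towards the map edge
print("2D overhead:", overhead_1d**2)                        # per-axis overhead gets squared
print("2D corner error multiplier:", edge_multiplier_1d**2)  # outer product squares the edge factor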
End of explanation |
10,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test the Retrieval Latency of Approximate vs Exact Matching
Step1: Exact Matching
Step2: Approximate Matching (ScaNN) | Python Code:
import tensorflow as tf
import time
PROJECT_ID = 'ksalama-cloudml'
BUCKET = 'ksalama-cloudml'
INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'
BQML_MODEL_DIR = f'gs://{BUCKET}/bqml/item_matching_model'
LOOKUP_MODEL_DIR = f'gs://{BUCKET}/bqml/embedding_lookup_model'
songs = {
'2114406': 'Metallica: Nothing Else Matters',
'2114402': 'Metallica: The Unforgiven',
'2120788': 'Limp Bizkit: My Way',
'2120786': 'Limp Bizkit: My Generation',
'1086322': 'Jacques Brel: Ne Me Quitte Pas',
'3129954': 'Édith Piaf: Non, Je Ne Regrette Rien',
'53448': 'France Gall: Ella, Elle l\'a',
'887688': 'Enrique Iglesias: Tired Of Being Sorry',
'562487': 'Shakira: Hips Don\'t Lie',
'833391': 'Ricky Martin: Livin\' la Vida Loca',
'1098069': 'Snoop Dogg: Drop It Like It\'s Hot',
'910683': '2Pac: California Love',
'1579481': 'Dr. Dre: The Next Episode',
'2675403': 'Eminem: Lose Yourself',
'2954929': 'Black Sabbath: Iron Man',
'625169': 'Black Sabbath: Paranoid',
}
Explanation: Test the Retrieval Latency of Approximate vs Exact Matching
End of explanation
class ExactMatcher(object):
def __init__(self, model_dir):
print("Loading exact matchg model...")
self.model = tf.saved_model.load(model_dir)
print("Exact matchg model is loaded.")
def match(self, instances):
outputs = self.model.signatures['serving_default'](tf.constant(instances, tf.dtypes.int64))
return outputs['predicted_item2_Id'].numpy()
exact_matcher = ExactMatcher(BQML_MODEL_DIR)
exact_matches = {}
start_time = time.time()
for i in range(100):
for song in songs:
matches = exact_matcher.match([int(song)])
exact_matches[song] = matches.tolist()[0]
end_time = time.time()
exact_elapsed_time = end_time - start_time
print(f'Elapsed time: {round(exact_elapsed_time, 3)} seconds - average time: {exact_elapsed_time / (100 * len(songs))} seconds')
Explanation: Exact Matching
End of explanation
from index_server.matching import ScaNNMatcher
scann_matcher = ScaNNMatcher(INDEX_DIR)
embedding_lookup = tf.saved_model.load(LOOKUP_MODEL_DIR)
approx_matches = dict()
start_time = time.time()
for i in range(100):
for song in songs:
vector = embedding_lookup([song]).numpy()[0]
matches = scann_matcher.match(vector, 50)
approx_matches[song] = matches
end_time = time.time()
scann_elapsed_time = end_time - start_time
print(f'Elapsed time: {round(scann_elapsed_time, 3)} seconds - average time: {scann_elapsed_time / (100 * len(songs))} seconds')
speedup_percent = round(exact_elapsed_time / scann_elapsed_time, 1)
print(f'ScaNN speedup: {speedup_percent}x')
Explanation: Approximate Matching (ScaNN)
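Since mean latency can hide tail behaviour, a small optional sketch (reusing the matcher and lookup model created above; numpy is imported here only for the percentile calculation) records per-query latencies and reports percentiles:
import numpy as np
latencies = []
for song in songs:
    t0 = time.perf_counter()
    vector = embedding_lookup([song]).numpy()[0]
    scann_matcher.match(vector, 50)
    latencies.append(time.perf_counter() - t0)
print(f'p50: {np.percentile(latencies, 50):.4f}s - p95: {np.percentile(latencies, 95):.4f}s')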
End of explanation |
10,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this post, we are going to construct a unit conversion table in Python. The table will have columns for meters (m), centimeters (cm), and inches (in). We will start off with a list of values that will be our meter column.
Step1: One way we can accomplish this is with list comprehensions.
Step2: We can also create our original meter list in a more dynamic way.
Step3: Another way to accomplish this same thing is to use numpy's array function
Step4: Now making the centimeters list is a little easier, because we can multiply each value in the array by the scalar 100 with one operation and no need for list comprehensions. | Python Code:
meters = [0, 10, 20, 30, 40, 50]
meters
centimeters = meters*0.01  # this fails: a plain Python list cannot be multiplied by a float (and m to cm needs a factor of 100, not 0.01); working approaches follow
centimeters
Explanation: In this post, we are going to construct a unit conversion table in Python. The table will have columns for meters (m), centimeters (cm), and inches (in). We will start off with a list of values that will be our meter column.
End of explanation
centimeters = [value*100 for value in meters]
centimeters
Explanation: One way we can accomplish this is with list comprehensions.
End of explanation
meters = list(range(11))
meters = [value*10 for value in meters]
meters
Explanation: We can also create our original meter list in a more dynamic way.
End of explanation
import numpy as np
meters = np.arange(0,110,10)
meters
Explanation: Another way to accomplish this same thing is to use numpy's array function
End of explanation
centimeters = meters*100
centimeters
table = np.concatenate((meters,centimeters),axis=0)
print(table)
np.reshape(table,(2,11))  # note: np.reshape returns a new 2x11 array; table itself stays a flat length-22 array unless the result is assigned back
table
table.shape
Explanation: Now making the centimeters list is a little easier, because we can multiply each value in the array by the scalar 100 with one operation and no need for list comprehensions.
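The description also promises an inches column; one possible way to finish the three-column table, assuming the usual conversion 1 m = 100/2.54 ≈ 39.3701 in:
inches = meters * 39.3701
table = np.column_stack((meters, centimeters, inches))   # one row per value: m, cm, in
print(table)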
End of explanation |
10,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From raw data to dSPM on SPM Faces dataset
Runs a full pipeline using MNE-Python
Step1: Load and filter data, set up epochs
Step2: Visualize fields on MEG helmet
Step3: Look at the whitened evoked data
Step4: Compute forward model
Step5: Compute inverse solution | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.datasets import spm_face
from mne.preprocessing import ICA, create_eog_epochs
from mne import io, combine_evoked
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = spm_face.data_path()
subjects_dir = data_path / 'subjects'
spm_path = data_path / 'MEG' / 'spm'
Explanation: From raw data to dSPM on SPM Faces dataset
Runs a full pipeline using MNE-Python:
- artifact removal
- averaging Epochs
- forward model computation
- source reconstruction using dSPM on the contrast : "faces - scrambled"
<div class="alert alert-info"><h4>Note</h4><p>This example does quite a bit of processing, so even on a
fast machine it can take several minutes to complete.</p></div>
End of explanation
raw_fname = spm_path / 'SPM_CTF_MEG_example_faces%d_3D.ds'
raw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run
# Here to save memory and time we'll downsample heavily -- this is not
# advised for real data as it can effectively jitter events!
raw.resample(120., npad='auto')
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
raw.filter(1, 30, method='fir', fir_design='firwin')
events = mne.find_events(raw, stim_channel='UPPT001')
# plot the events to get an idea of the paradigm
mne.viz.plot_events(events, raw.info['sfreq'])
event_ids = {"faces": 1, "scrambled": 2}
tmin, tmax = -0.2, 0.6
baseline = None # no baseline as high-pass is applied
reject = dict(mag=5e-12)
epochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks,
baseline=baseline, preload=True, reject=reject)
# Fit ICA, find and remove major artifacts
ica = ICA(n_components=0.95, max_iter='auto', random_state=0)
ica.fit(raw, decim=1, reject=reject)
# compute correlation scores, get bad indices sorted by score
eog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject)
eog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908')
ica.plot_scores(eog_scores, eog_inds) # see scores the selection is based on
ica.plot_components(eog_inds) # view topographic sensitivity of components
ica.exclude += eog_inds[:1] # we saw the 2nd ECG component looked too dipolar
ica.plot_overlay(eog_epochs.average()) # inspect artifact removal
ica.apply(epochs) # clean data, default in place
evoked = [epochs[k].average() for k in event_ids]
contrast = combine_evoked(evoked, weights=[-1, 1]) # Faces - scrambled
evoked.append(contrast)
for e in evoked:
e.plot(ylim=dict(mag=[-400, 400]))
plt.show()
# estimate noise covariance
noise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk',
rank=None)
Explanation: Load and filter data, set up epochs
End of explanation
# The transformation here was aligned using the dig-montage. It's included in
# the spm_faces dataset and is named SPM_dig_montage.fif.
trans_fname = spm_path / 'SPM_CTF_MEG_example_faces1_3D_raw-trans.fif'
maps = mne.make_field_map(evoked[0], trans_fname, subject='spm',
subjects_dir=subjects_dir, n_jobs=None)
evoked[0].plot_field(maps, time=0.170)
Explanation: Visualize fields on MEG helmet
End of explanation
evoked[0].plot_white(noise_cov)
Explanation: Look at the whitened evoked data
End of explanation
src = subjects_dir / 'spm' / 'bem' / 'spm-oct-6-src.fif'
bem = subjects_dir / 'spm' / 'bem' / 'spm-5120-5120-5120-bem-sol.fif'
forward = mne.make_forward_solution(contrast.info, trans_fname, src, bem)
Explanation: Compute forward model
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = 'dSPM'
inverse_operator = make_inverse_operator(contrast.info, forward, noise_cov,
loose=0.2, depth=0.8)
# Compute inverse solution on contrast
stc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None)
# stc.save('spm_%s_dSPM_inverse' % contrast.comment)
# Plot contrast in 3D with mne.viz.Brain if available
brain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170,
views=['ven'], clim={'kind': 'value', 'lims': [3., 6., 9.]})
# brain.save_image('dSPM_map.png')
Explanation: Compute inverse solution
End of explanation |
10,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Paris Saclay Center for Data Science
Titanic RAMP
Step1: Exploratory data analysis
Loading the data
Step2: The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information
Step3: About two thirds of the passengers perished in the event. A dummy classifier that systematically returns "0" would have an accuracy of 62%, higher than that of a random model.
Some plots
Features densities and co-evolution
A scatterplot matrix allows us to visualize
Step4: Non-linearly transformed data
The Fare variable has a very heavy tail. We can log-transform it.
Step5: Plot the bivariate distributions and marginals of two variables
Another way of visualizing relationships between variables is to plot their bivariate distributions.
Step6: The pipeline
For submitting at the RAMP site, you will have to write two classes, saved in two different files
Step7: Classifier
The classifier follows a classical scikit-learn classifier template. It should be saved in the file submissions/starting_kit/classifier.py. In its simplest form it takes a scikit-learn pipeline, assigns it to self.clf in __init__, then calls its fit and predict_proba functions in the corresponding member functions.
Step8: Local testing (before submission)
It is <b><span style="color
Step9: Submitting to ramp.studio
Once you found a good feature extractor and classifier, you can submit them to ramp.studio. First, if it is your first time using RAMP, sign up, otherwise log in. Then find an open event on the particular problem, for example, the event titanic for this RAMP. Sign up for the event. Both signups are controlled by RAMP administrators, so there can be a delay between asking for signup and being able to submit.
Once your signup request is accepted, you can go to your sandbox and copy-paste (or upload) feature_extractor.py and classifier.py from submissions/starting_kit. Save it, rename it, then submit it. The submission is trained and tested on our backend in the same way as ramp_test_submission does it locally. While your submission is waiting in the queue and being trained, you can find it in the "New submissions (pending training)" table in my submissions. Once it is trained, you get a mail, and your submission shows up on the public leaderboard.
If there is an error (despite having tested your submission locally with ramp_test_submission), it will show up in the "Failed submissions" table in my submissions. You can click on the error to see part of the trace.
After submission, do not forget to give credits to the previous submissions you reused or integrated into your submission.
The data set we use at the backend is usually different from what you find in the starting kit, so the score may be different.
The usual way to work with RAMP is to explore solutions, add feature transformations, select models, perhaps do some AutoML/hyperopt, etc., locally, and check them with ramp_test_submission. The script prints mean cross-validation scores
```
train auc = 0.85 ± 0.005
train acc = 0.81 ± 0.006
train nll = 0.45 ± 0.007
valid auc = 0.87 ± 0.023
valid acc = 0.81 ± 0.02
valid nll = 0.44 ± 0.024
test auc = 0.83 ± 0.006
test acc = 0.76 ± 0.003
test nll = 0.5 ± 0.005
```
The official score in this RAMP (the first score column after "historical contributivity" on the [leaderboard](http
Step10: Get the training data.
Step11: Get the first cv fold, creating training and validation indices.
Step12: Train your starting kit.
Step13: Get the full prediction (train and validation).
Step14: Print the training and validation scores.
Step15: score_function is callable, wrapping scikit-learn's roc_auc_score. It expects a 0/1 vector as ground truth (since our labels are 0 and 1, y_train can be passed as is), and a 1D vector of predicted probabilities of class '1', which means we need the second column of y_pred.
Step16: You can check that it is just a wrapper of roc_auc_score.
Step17: Get the independent test data.
Step18: Test the submission on it.
Step19: If you want to execute training step by step, go to the feature_extractor_classifier, feature_extractor, and classifier workflows and deconstruct them.
First load the submission files and instantiate the feature extractor and regressor objects.
Step20: Select the training folds.
Step21: Fit the feature extractor.
Step22: Transform the training dataframe into numpy array.
Step23: Fit the classifier.
Step24: Transform the whole (training + validation) dataframe into a numpy array and compute the prediction.
Step25: Print the errors. | Python Code:
%matplotlib inline
import os
import glob
import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import pandas as pd
from rampwf.utils.importing import import_module_from_source
Explanation: Paris Saclay Center for Data Science
Titanic RAMP: survival prediction of Titanic passengers
Benoit Playe (Institut Curie/Mines ParisTech), Chloé-Agathe Azencott (Institut Curie/Mines ParisTech), Alex Gramfort (LTCI/Télécom ParisTech), Balázs Kégl (LAL/CNRS)
Introduction
This is an initiation project to introduce RAMP and get you to know how it works.
The goal is to develop prediction models able to identify people who survived from the sinking of the Titanic, based on gender, age, and ticketing information.
The data we will manipulate is from the Titanic kaggle challenge.
Requirements
numpy>=1.10.0
matplotlib>=1.5.0
pandas>=0.19.0
scikit-learn>=0.17 (different syntaxes for v0.17 and v0.18)
seaborn>=0.7.1
End of explanation
train_filename = 'data/train.csv'
data = pd.read_csv(train_filename)
y_df = data['Survived']
X_df = data.drop(['Survived', 'PassengerId'], axis=1)
X_df.head(5)
data.describe()
data.count()
Explanation: Exploratory data analysis
Loading the data
End of explanation
data.groupby('Survived').count()
Explanation: The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:
Predicting survival
The goal is to predict whether a passenger has survived from other known attributes. Let us group the data according to the Survived columns:
End of explanation
from pandas.plotting import scatter_matrix
scatter_matrix(data.get(['Fare', 'Pclass', 'Age']), alpha=0.2,
figsize=(8, 8), diagonal='kde');
Explanation: About two thirds of the passengers perished in the event. A dummy classifier that systematically returns "0" would have an accuracy of 62%, higher than that of a random model.
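A one-line sanity check of that baseline, using the labels loaded above:
print('Baseline "all perished" accuracy: %.2f' % (y_df == 0).mean())  # accuracy of always predicting 0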
Some plots
Features densities and co-evolution
A scatterplot matrix allows us to visualize:
* on the diagonal, the density estimation for each feature
* on each of the off-diagonal plots, a scatterplot between two features. Each dot represents an instance.
End of explanation
data_plot = data.get(['Age', 'Survived'])
data_plot = data.assign(LogFare=lambda x : np.log(x.Fare + 10.))
scatter_matrix(data_plot.get(['Age', 'LogFare']), alpha=0.2, figsize=(8, 8), diagonal='kde');
data_plot.plot(kind='scatter', x='Age', y='LogFare', c='Survived', s=50, cmap=plt.cm.Paired);
Explanation: Non-linearly transformed data
The Fare variable has a very heavy tail. We can log-transform it.
End of explanation
import seaborn as sns
sns.set()
sns.set_style("whitegrid")
sns.jointplot(data_plot.Age[data_plot.Survived == 1],
data_plot.LogFare[data_plot.Survived == 1],
kind="kde", size=7, space=0, color="b");
sns.jointplot(data_plot.Age[data_plot.Survived == 0],
data_plot.LogFare[data_plot.Survived == 0],
kind="kde", size=7, space=0, color="y");
Explanation: Plot the bivariate distributions and marginals of two variables
Another way of visualizing relationships between variables is to plot their bivariate distributions.
End of explanation
%%file submissions/starting_kit/feature_extractor.py
# This file is generated from the notebook, you need to edit it there
import pandas as pd
class FeatureExtractor():
def __init__(self):
pass
def fit(self, X_df, y):
pass
def transform(self, X_df):
X_df_new = pd.concat(
[X_df.get(['Fare', 'Age', 'SibSp', 'Parch']),
pd.get_dummies(X_df.Sex, prefix='Sex', drop_first=True),
pd.get_dummies(X_df.Pclass, prefix='Pclass', drop_first=True),
pd.get_dummies(
X_df.Embarked, prefix='Embarked', drop_first=True)],
axis=1)
X_df_new = X_df_new.fillna(-1)
XX = X_df_new.values
return XX
Explanation: The pipeline
For submitting at the RAMP site, you will have to write two classes, saved in two different files:
* the class FeatureExtractor, which will be used to extract features for classification from the dataset and produce a numpy array of size (number of samples $\times$ number of features).
* a class Classifier to predict survival
Feature extractor
The feature extractor implements a transform member function. It is saved in the file submissions/starting_kit/feature_extractor.py. It receives the pandas dataframe X_df defined at the beginning of the notebook. It should produce a numpy array representing the extracted features, which will then be used for the classification.
Note that the following code cells are not executed in the notebook. The notebook saves their contents in the file specified in the first line of the cell, so you can edit your submission before running the local test below and submitting it at the RAMP site.
End of explanation
%%file submissions/starting_kit/classifier.py
# This file is generated from the notebook, you need to edit it there
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator
from sklearn.ensemble import RandomForestClassifier
class Classifier(BaseEstimator):
def __init__(self):
self.clf = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('classifier', RandomForestClassifier(
n_estimators=10, max_leaf_nodes=10, random_state=61))
])
def fit(self, X, y, prev_classifier=None):
if prev_classifier is not None:
self.clf = prev_classifier.clf
rf = self.clf.steps[1][1]
rf.set_params(n_estimators=2 * rf.n_estimators, warm_start=True)
self.clf.fit(X, y)
def predict_proba(self, X):
return self.clf.predict_proba(X)
Explanation: Classifier
The classifier follows a classical scikit-learn classifier template. It should be saved in the file submissions/starting_kit/classifier.py. In its simplest form it takes a scikit-learn pipeline, assigns it to self.clf in __init__, then calls its fit and predict_proba functions in the corresponding member functions.
End of explanation
#!ramp_test_submission
Explanation: Local testing (before submission)
It is <b><span style="color:red">important that you test your submission files before submitting them</span></b>. For this we provide a unit test. Note that the test runs on your files in submissions/starting_kit, not on the classes defined in the cells of this notebook.
First pip install ramp-workflow or install it from the github repo. Make sure that the python files classifier.py and feature_extractor.py are in the submissions/starting_kit folder, and the data train.csv and test.csv are in data. Then run
ramp_test_submission
If it runs and prints training and test errors on each fold, then you can submit the code.
End of explanation
problem = import_module_from_source('problem.py', 'problem')
Explanation: Submitting to ramp.studio
Once you found a good feature extractor and classifier, you can submit them to ramp.studio. First, if it is your first time using RAMP, sign up, otherwise log in. Then find an open event on the particular problem, for example, the event titanic for this RAMP. Sign up for the event. Both signups are controlled by RAMP administrators, so there can be a delay between asking for signup and being able to submit.
Once your signup request is accepted, you can go to your sandbox and copy-paste (or upload) feature_extractor.py and classifier.py from submissions/starting_kit. Save it, rename it, then submit it. The submission is trained and tested on our backend in the same way as ramp_test_submission does it locally. While your submission is waiting in the queue and being trained, you can find it in the "New submissions (pending training)" table in my submissions. Once it is trained, you get a mail, and your submission shows up on the public leaderboard.
If there is an error (despite having tested your submission locally with ramp_test_submission), it will show up in the "Failed submissions" table in my submissions. You can click on the error to see part of the trace.
After submission, do not forget to give credits to the previous submissions you reused or integrated into your submission.
The data set we use at the backend is usually different from what you find in the starting kit, so the score may be different.
The usual way to work with RAMP is to explore solutions, add feature transformations, select models, perhaps do some AutoML/hyperopt, etc., locally, and check them with ramp_test_submission. The script prints mean cross-validation scores
```
train auc = 0.85 ± 0.005
train acc = 0.81 ± 0.006
train nll = 0.45 ± 0.007
valid auc = 0.87 ± 0.023
valid acc = 0.81 ± 0.02
valid nll = 0.44 ± 0.024
test auc = 0.83 ± 0.006
test acc = 0.76 ± 0.003
test nll = 0.5 ± 0.005
```
The official score in this RAMP (the first score column after "historical contributivity" on the [leaderboard](http://www.ramp.studio/events/titanic/leaderboard)) is area under the roc curve ("auc"), so the line that is relevant in the output of `ramp_test_submission` is `valid auc = 0.87 ± 0.023`. When the score is good enough, you can submit it at the RAMP.
Working in the notebook
When you are developing and debugging your submission, you may want to stay in the notebook and execute the workflow step by step. You can import problem.py and call the ingredients directly, or even deconstruct the code from ramp-workflow.
End of explanation
X_train, y_train = problem.get_train_data()
Explanation: Get the training data.
End of explanation
train_is, test_is = list(problem.get_cv(X_train, y_train))[0]
test_is
Explanation: Get the first cv fold, creating training and validation indices.
End of explanation
fe, clf = problem.workflow.train_submission(
'submissions/starting_kit', X_train, y_train, train_is)
Explanation: Train your starting kit.
End of explanation
y_pred = problem.workflow.test_submission((fe, clf), X_train)
Explanation: Get the full prediction (train and validation).
End of explanation
score_function = problem.score_types[0]
Explanation: Print the training and validation scores.
End of explanation
score_train = score_function(y_train[train_is], y_pred[:, 1][train_is])
print(score_train)
score_valid = score_function(y_train[test_is], y_pred[:, 1][test_is])
print(score_valid)
Explanation: score_function is callable, wrapping scikit-learn's roc_auc_score. It expects a 0/1 vector as ground truth (since our labels are 0 and 1, y_train can be passed as is), and a 1D vector of predicted probabilities of class '1', which means we need the second column of y_pred.
End of explanation
from sklearn.metrics import roc_auc_score
print(roc_auc_score(y_train[train_is], y_pred[:, 1][train_is]))
Explanation: You can check that it is just a wrapper of roc_auc_score.
End of explanation
X_test, y_test = problem.get_test_data()
Explanation: Get the independent test data.
End of explanation
y_test_pred = problem.workflow.test_submission((fe, clf), X_test)
score_test = score_function(y_test, y_test_pred[:, 1])
print(score_test)
Explanation: Test the submission on it.
End of explanation
feature_extractor = import_module_from_source(
'submissions/starting_kit/feature_extractor.py', 'feature_extractor')
fe = feature_extractor.FeatureExtractor()
classifier = import_module_from_source(
'submissions/starting_kit/classifier.py', 'classifier')
clf = classifier.Classifier()
Explanation: If you want to execute training step by step, go to the feature_extractor_classifier, feature_extractor, and classifier workflows and deconstruct them.
First load the submission files and instantiate the feature extractor and regressor objects.
End of explanation
X_train_train_df = X_train.iloc[train_is]
y_train_train = y_train[train_is]
Explanation: Select the training folds.
End of explanation
fe.fit(X_train_train_df, y_train_train)
Explanation: Fit the feature extractor.
End of explanation
X_train_train_array = fe.transform(X_train_train_df)
Explanation: Transform the training dataframe into numpy array.
End of explanation
clf.fit(X_train_train_array, y_train_train)
Explanation: Fit the classifier.
End of explanation
X_train_array = fe.transform(X_train)
y_pred = clf.predict_proba(X_train_array)
Explanation: Transform the whole (training + validation) dataframe into a numpy array and compute the prediction.
End of explanation
score_train = score_function(y_train[train_is], y_pred[:, 1][train_is])
print(score_train)
score_valid = score_function(y_train[test_is], y_pred[:, 1][test_is])
print(score_valid)
Explanation: Print the errors.
End of explanation |
10,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bundle setup
Step1: We will set the pblum mode to dataset-scaled for estimators and optimizers, to avoid having to add pblum to the fitted parameters or adjusting it manually. We will also set distortion_method to 'sphere' to speed up the computation of the light curve.
Step2: Set up and flip some constraints needed for adopting the solutions from the estimators
Step3: Periodograms
Step4: The lc_periodogram with algorithm='bls' seems to find the best period, so we'll keep that one moving forward
Step5: V light curve - lc estimators
EBAI
Step6: RV - rv_geometry
Step7: Plot results
data
Step8: parameter values | Python Code:
lc = np.loadtxt('data/lc.V.data')
rv1 = np.loadtxt('data/rv1.data')
rv2 = np.loadtxt('data/rv2.data')
b = phoebe.default_binary()
b.add_dataset('lc', times = lc[:,0], fluxes=lc[:,1], sigmas=lc[:,2], passband='Johnson:V')
b.add_dataset('rv', passband='Johnson:V')
b['times@rv@primary'], b['rvs@rv@primary'], b['sigmas@rv@primary'] = rv1[:,0], rv1[:,1], rv1[:,2]
b['times@rv@secondary'], b['rvs@rv@secondary'], b['sigmas@rv@secondary'] = rv2[:,0], rv2[:,1], rv2[:,2]
b.plot(x='times', show=True)
Explanation: Bundle setup
End of explanation
b.set_value('pblum_mode', 'dataset-scaled')
b.set_value_all('distortion_method', 'sphere')
Explanation: We will set the pblum mode to dataset-scaled for estimators and optimizers, to avoid having to add pblum to the fitted parameters or adjusting it manually. We will also set distortion_method to 'sphere' to speed up the computation of the light curve.
End of explanation
b.add_constraint('requivsumfrac')
b.add_constraint('requivratio')
b.add_constraint('teffratio')
b.flip_constraint('requivratio', solve_for='requiv@secondary')
b.flip_constraint('requivsumfrac', solve_for='requiv@primary')
b.flip_constraint('teffratio', solve_for='teff@secondary')
b.flip_constraint('esinw', solve_for='ecc')
b.flip_constraint('ecosw', solve_for='per0')
Explanation: Set up and flip some constraints needed for adopting the solutions from the estimators:
End of explanation
b.add_solver('estimator.lc_periodogram', solver='lcperiod_bls',
algorithm='bls', minimum_n_cycles=2, sample_mode='manual',
sample_periods = np.linspace(2.,2.5,1000),
overwrite=True)
b.run_solver('lcperiod_bls', solution='lcperiod_bls_sol', overwrite=True)
print(b['lcperiod_bls_sol'])
b.add_solver('estimator.lc_periodogram', solver='lcperiod_ls',
algorithm='ls',sample_mode='manual',
sample_periods = np.linspace(2.,2.5,1000),
overwrite=True)
b.run_solver('lcperiod_ls', solution='lcperiod_ls_sol', overwrite=True)
print(b['lcperiod_ls_sol'])
b.add_solver('estimator.rv_periodogram', solver='rvperiod', overwrite=True)
b.run_solver('rvperiod', solution='rvperiod_sol',
sample_mode='manual', sample_periods=np.linspace(2.,2.5,1000),
overwrite=True)
print(b['rvperiod_sol'])
np.mean([2.3433433433433435, 2.381881881881882, 2.340840840840841])  # mean of the three period estimates from the periodogram solutions above
b.adopt_solution('lcperiod_bls_sol')
# b['period@binary'] = 2.346
b.plot(x='phase', show=True)
lc_bls_periodogram_results = get_current_values(b, ['period@binary',])
b.adopt_solution('lcperiod_ls_sol')
# b['period@binary'] = 2.346
b.plot(x='phase', show=True)
lc_ls_periodogram_results = get_current_values(b, ['period@binary',])
b.adopt_solution('rvperiod_sol')
# b['period@binary'] = 2.346
b.plot(x='phase', show=True)
rv_periodogram_results = get_current_values(b, ['period@binary',])
Explanation: Periodograms
End of explanation
b.adopt_solution('lcperiod_bls_sol')
Explanation: The lc_periodogram with algorithm='bls' seems to find the best period, so we'll keep that one moving forward:
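As an optional quick comparison (assuming, as the plotting code further below also does, that get_current_values returns a dictionary keyed by parameter twig), the three period estimates can be printed side by side:
for label, res in [('lc bls', lc_bls_periodogram_results),
                   ('lc ls', lc_ls_periodogram_results),
                   ('rv', rv_periodogram_results)]:
    print(label, res['period@binary'])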
End of explanation
b.add_solver('estimator.ebai', solver='lc_est_ebai_mlp', ebai_method='mlp', phase_bin = False, overwrite=True)
b.run_solver('lc_est_ebai_mlp', solution='lc_soln_ebai_mlp', overwrite=True)
b.adopt_solution('lc_soln_ebai_mlp')
b.run_compute(model='ebai_mlp_model')
b.plot(x='phase',show=True)
ebai_mlp_results = get_current_values(b, ['incl@binary', 'teffratio','requivsumfrac','esinw','ecosw'])
b.add_solver('estimator.ebai', solver='lc_est_ebai_knn', ebai_method='knn', phase_bin = False, overwrite=True)
b.run_solver('lc_est_ebai_knn', solution='lc_soln_ebai_knn', overwrite=True)
b.adopt_solution('lc_soln_ebai_knn')
b.run_compute(model='ebai_knn_model', overwrite=True)
b.plot(x='phase',show=True)
ebai_knn_results = get_current_values(b, ['incl@binary', 'teffratio','requivsumfrac','esinw','ecosw'])
b.add_solver('estimator.lc_geometry', solver='lc_est_lcgeom', phase_bin = False)
b.run_solver('lc_est_lcgeom', solution='lc_soln_lcgeom')
b.flip_constraint('ecc', solve_for='esinw')
b.flip_constraint('per0', solve_for='ecosw')
b.adopt_solution('lc_soln_lcgeom')
b.run_compute(model='lc_geometry_model')
b.plot(x='phase', legend=True, save='testcase_estimators.png', show=True)
b.flip_constraint('esinw', solve_for='ecc')
b.flip_constraint('ecosw', solve_for='per0')
lc_geometry_results = get_current_values(b, ['incl@binary', 'teffratio','requivsumfrac','esinw','ecosw'])
Explanation: V light curve - lc estimators
EBAI
End of explanation
b.add_solver('estimator.rv_geometry', solver='rvgeom')
b.run_solver('rvgeom', solution='rvgeom_sol')
b['adopt_parameters@rvgeom_sol'] = ['q', 'asini@binary', 'vgamma']
b.flip_constraint('asini@binary', solve_for='sma@binary')
# b.flip_constraint('ecc', solve_for='esinw')
# b.flip_constraint('per0', solve_for='ecosw')
b.adopt_solution('rvgeom_sol')
# b['period@binary'] = 2.345678901
b.run_compute(model='rvgeom_model')
b.plot(x='phase', model='rvgeom_model', legend=True, show=True)
rv_geometry_results = get_current_values(b, ['q', 'asini@binary','vgamma'])
b.save('bundles/after_estimators.bundle')
Explanation: RV - rv_geometry
End of explanation
times = b.get_value('times', context='dataset', dataset='lc01')
phases = b.to_phase(times)
fluxes_true = b.get_value('fluxes', context='dataset', dataset='lc01')
sigmas_true = b.get_value('sigmas', context='dataset', dataset='lc01')
times_rv = b.get_value('times', context='dataset', component='primary', dataset='rv01')
phases_rv = b.to_phase(times_rv)
rvs1 = b.get_value('rvs', context='dataset', component='primary', dataset='rv01')
rvs2 = b.get_value('rvs', context='dataset', component='secondary', dataset='rv01')
sigmas1 = b.get_value('sigmas', context='dataset', component='primary', dataset='rv01')
sigmas2 = b.get_value('sigmas', context='dataset', component='secondary', dataset='rv01')
lc_ebai_mlp = get_model(b, model='ebai_mlp_model', dataset='lc01', phase_order=True)
lc_ebai_knn = get_model(b, model='ebai_knn_model', dataset='lc01', phase_order=True)
lc_geom = get_model(b, model='lc_geometry_model', dataset='lc01', phase_order=True)
rv_geom1, rv_geom2 = get_model(b, model='rvgeom_model', dataset='rv01', model_type='rv', phase_order=True)
fig, ((ax1, ax1b, ax2, ax3), (ax4, ax4b, ax5, ax6)) = plt.subplots(nrows = 2, ncols = 4, figsize=(7.25,2),
gridspec_kw={'height_ratios': [2, 1]})
fig.subplots_adjust(hspace=0, wspace=0.3)
lc_datapoints = {'fmt': '.', 'ms': 1, 'c': '0.5', 'zorder':0,}
rv1_datapoints = {'fmt': ',', 'c': '0.0', 'zorder':0}
rv2_datapoints = {'fmt': ',', 'c': '0.5', 'zorder':0}
model_kwargs = {'lw': 1, 'zorder': 1}
res_kwargs = {'s': 0.5}
res_rv_kwargs = {'s': 3}
for ax in [ax1, ax1b, ax2]:
ax.errorbar(x=phases, y=fluxes_true, yerr=sigmas_true, rasterized=True, **lc_datapoints)
ax3.errorbar(x=phases_rv, y=rvs1, yerr=sigmas1, rasterized=True, **rv1_datapoints)
ax3.errorbar(x=phases_rv, y=rvs2, yerr=sigmas2, rasterized=True, **rv2_datapoints)
ax1.plot(lc_ebai_mlp[:,1], lc_ebai_mlp[:,2], c=phoebe_c['orange'], **model_kwargs)
ax4.scatter(lc_ebai_mlp[:,1], lc_ebai_mlp[:,3], c=phoebe_c['orange'], **res_kwargs)
ax1b.plot(lc_ebai_knn[:,1], lc_ebai_knn[:,2], c=phoebe_c['blue'], **model_kwargs)
ax4b.scatter(lc_ebai_knn[:,1], lc_ebai_knn[:,3], c=phoebe_c['blue'], **res_kwargs)
ax2.plot(lc_geom[:,1], lc_geom[:,2],c=phoebe_c['green'], **model_kwargs)
ax5.scatter(lc_geom[:,1], lc_geom[:,3], c=phoebe_c['green'], **res_kwargs)
ax3.plot(rv_geom1[:,1], rv_geom1[:,2], c=phoebe_c['blue'], label='primary', **model_kwargs)
ax3.plot(rv_geom2[:,1], rv_geom2[:,2], c=phoebe_c['red'], label='secondary', **model_kwargs)
ax6.scatter(rv_geom1[:,1], rv_geom1[:,3], c=phoebe_c['blue'], **res_rv_kwargs)
ax6.scatter(rv_geom2[:,1], rv_geom2[:,3], c=phoebe_c['red'],**res_rv_kwargs)
# ax3.legend()
for ax in [ax1, ax2, ax3]:
ax.set_xticks([])
# for ax in [ax4, ax5, ax6]:
# ax.set_xlabel('Phase')
# ax1.set_ylabel('Flux [W/m$^2$]')
# ax2.set_ylabel('Flux [W/m$^2$]', labelpad=12)
# ax3.set_ylabel('RV [km/s]')
# ax4.set_ylabel('Residuals [W/m$^2$]')
# ax5.set_ylabel('Residuals [W/m$^2$]')
# ax6.set_ylabel('Residuals [km/s]')
ax1.set_title('EBAI - MLP', pad=14)
ax1b.set_title('EBAI - kNN', pad=14)
ax2.set_title('lc\_geometry', pad=14)
ax3.set_title('rv\_geometry', pad=14)
# fig.tight_layout()
fig.savefig('figs/2_estimators_data.pdf', dpi=300)
Explanation: Plot results
data
End of explanation
truths, twigs, labels = get_truths_labels()
true_vals = {}
for twig, value in zip(twigs, truths):
true_vals[twig] = value
twigs = [
'period@binary',
'incl@binary',
'teffratio',
'requivsumfrac',
'esinw',
'ecosw',
'q',
'asini@binary',
'vgamma',
]
labels = [r'$P$',
r'$i$',
r'$T_{\mathrm{eff},2}/T_{\mathrm{eff},1}$',
r'$r_1+r_2$',
r'$e\sin\omega$',
r'$e\cos\omega$',
r'$q$',
r'$a\sin i$',
r'$v_{\gamma}$'
]
fig, axes = plt.subplots(nrows = 1, ncols = len(labels), figsize=(10,1.5))
fig.subplots_adjust(hspace=0, wspace=0.1)
models = [lc_bls_periodogram_results, lc_ls_periodogram_results, rv_periodogram_results, ebai_mlp_results, ebai_knn_results, lc_geometry_results, rv_geometry_results]
model_labels = [r'lc_periodogram (BLS)', r'lc_periodogram (LS)', r'rv_periodogram', r'EBAI_mlp', r'EBAI_knn', r'lc_geometry', r'rv_geometry']
colors = [phoebe_c['black'], phoebe_c['black'], phoebe_c['black'], phoebe_c['orange'], phoebe_c['blue'], phoebe_c['green'], phoebe_c['purple']]
# import cmasher as cmr
# colors = cmr.take_cmap_colors('cmr.rainforest', len(models), cmap_range=(0.05, 0.85), return_fmt='hex')
print('model, twig, current_solution, previous_lc, previous_rv')
for ax, label, twig in zip(axes, labels, twigs):
ax.set_title(label)
# ax.set_ylabel(ylabel)
# ax.set_yticks([])
ax.margins(0.25)
for i, model in enumerate(models):
ax.axhline(i, linestyle='--', lw=0.5, color='gray')
if twig in model.keys():
# print(model_labels[i], twig, model[twig], b_prev_lc.get_value(twig), b_prev_rv.get_value(twig))
ax.scatter(model[twig], i, c=colors[i], s=50, marker='x', zorder=1)
# ax.scatter(b_prev_lc.get_value(twig), i, marker='o', fc='gray', ec='none')
# ax.scatter(b_prev_rv.get_value(twig), i, marker='o', fc='none', ec='gray')
else:
pass
ax.axvline(x=true_vals[twig], ls=':', lw=1.5, c=phoebe_c['red'], zorder=0)
ax.set_ylim(-0.5, len(models)-1+0.5)
for i,ax in enumerate(axes):
# ax.grid(visible=True, which='major', axis='y', linestyle='--')
if i==0:
ax.set_yticks(np.arange(0,len(model_labels),1),model_labels)
else:
ax.yaxis.set_ticklabels([])
# fig.tight_layout()
fig.savefig('figs/3_estimators_vals.pdf', dpi=300)
Explanation: parameter values
End of explanation |
10,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Burgers' equation
Step1: In this chapter, we study a simple scalar nonlinear conservation law
Step2: Notice that at first $q$ remains single-valued for every $x$. However, after some time the crest of the wave overtakes the leading edge. After this time, we obtain a triple-valued solution for certain values of $x$. The first time this overtaking happens is referred to as the breaking time -- a reference to waves breaking on the beach. It is also the point where the conservation law, in its differential form, breaks down and where the characteristics cross each other for the first time. When characteristics cross, a shock wave, or discontinuity, forms. Mathematically, replacing the triple-valued region with a discontinuity will avoid the problem of the solution being multivalued. Where should the shock be located?
If we replace part of the multivalued solution interval with a shock, some mass will be removed (area $A_1$ in the figure below) and some mass will be added (area $A_2$ in the figure below). In the live notebook, the figure below shows possible locations for the shock at a given time; in order to maintain conservation of the integral of $q$, the shock must be placed so that these two areas are equal.
Step3: This geometric reasoning provides a nice intuition for the shock location, but is cumbersome in practice. To determine the location of the shock we can use the Rankine-Hugoniot jump condition, which we will derive in Traffic_flow, and which requires that the jump in flux across a propagating shock must be related to the jump in $q$ by
$$
s(q_r-q_\ell) = f(q_r) - f(q_\ell),
$$
at each instant in time, where $s$ is the shock speed at this time.
Since the flux for Burgers' equation is $f(q) = q^2/2$, this gives
\begin{align}
s(q_r-q_\ell) & = \frac{1}{2} (q_r^2 - q_\ell^2) \
\Rightarrow \ \ \ \ s &= \frac{1}{2}(q_\ell + q_r).
\end{align}
In general (as in the image above) the states $q_\ell$ and $q_r$ just to the left and right of the shock will not be constant and the speed of a shock will change in time.
Shock solution
For Burgers' equation, (or any scalar hyperbolic conservation law with a convex flux function, such as the traffic flow problem considered in Traffic_flow), the solution of the Riemann problem consists of a single wave, which may be either a shock or a rarefaction, separating regions of constant value $q_\ell$ and $q_r$. Here convexity of the flux $f(q)$ means that $f'(q)$ is either monotonically increasing or monotonically decreasing as $q$ varies. If $f(q)$ is not convex then the solution can be more complicated; see the section on Convexity below and in Nonconvex_scalar.
The analysis above already yields the solution to the Riemann problem in the case that the resulting wave is a shock. The entropy condition, as we will explain below, indicates that a shock will form when $f'(q_\ell) > f'(q_r)$; i.e. when $q_\ell>q_r$. In this case, the solution is
\begin{align}
q(x,t) =
\begin{cases}
q_\ell \quad \text{if} \ \ x/t<s \\
q_r \quad \text{if} \ \ x/t>s.
\end{cases}
\end{align}
Below, we plot the solution of Burgers' equation for an initial condition that leads to a shock (i.e. with $q_\ell>q_r$).
Step4: Rarefaction wave
In the previous figure with the hump as the initial condition, we observed that a shock formed on the right side of the hump. However, on the left side, the characteristics spread out and will never cross. This part of the solution is called a rarefaction wave. This is the kind of behavior we will observe in the solution of the Riemann problem when $q_\ell<q_r$.
In the next figure, we consider such a Riemann problem. Although the initial data is discontinuous at $x=0$, we can think of smearing it out slightly to a continuous function so that all values between $q_\ell$ and $q_r$ are taken over a very narrow region around $x=0$. As time evolves, each value of $q$ propagates according to the advection speed $q$ given by the quasi-linear equation. Therefore, after time $t$, each value $q$ must propagate a distance $x=qt$, so the solution along the rarefaction is then $q = x/t$. As $q_\ell<q_r$, the smallest and largest displacements are given by $q_\ell t$ and $q_r t$, respectively.
Step5: The full rarefaction solution for the Burgers' Riemann problem is then simply given by
\begin{align}
q(x,t) =
\begin{cases}
q_\ell, \quad \text{for} \ \ x<q_\ell t \\
x/t, \quad \text{for} \ \ q_\ell t \le x \le q_r t \\
q_r, \quad \text{for} \ \ x>q_r t.
\end{cases}
\end{align}
As we will see in Traffic_flow, the rarefaction solution is always a self-similar solution. This means that it can be expressed as a function of the ratio between position and time $q(x,t) = \tilde{q}(x/t)$, so it remains the same when rescaling both $x$ and $t$ by the same factor. In Burgers' equation, the form of the rarefaction is particularly simple since the advection speed is simply $q$ and so the rarefaction wave is linear in $x$ with slope $1/t$ at time $t$.
Rarefaction solution
In the figure below, we plot a solution of the Riemann problem with $q_\ell<q_r$, exhibiting a rarefaction.
Step6: Weak solutions
As we mentioned before, the differential form of the equation breaks down in the presence of shocks/discontinuities. However, the integral form of the conservation law remains valid.
Let's integrate the general conservation law $q_t+f(q)_x=0$ from $x=x_1$ to $x=x_2$ and $t=t_1$ to $t=t_2$
Step7: Note the behavior of the characteristics with respect to the shock. The fact that the characteristics are spreading away from the shock rather than converging on it indicates that the correct solution should instead be a rarefaction. In order to be able to specify which of the weak solutions is physically correct, we need to derive a mathematical condition from our physical intuition gained from observing the behavior of the characteristics. This condition is referred to as the entropy condition.
The entropy condition
A given initial value problem for a hyperbolic PDE may have many weak solutions.
A condition that selects a unique physically correct solution out of these weak solutions is called an admissibility condition or more often an entropy condition. This name comes from gas dynamics, where the physical entropy must increase in the gas passing through a shock, according to the second law of thermodynamics. A discontinuity that violates this is non-physical. A mathematical "entropy function" with a similar property can often be defined for other conservation laws. More discussion of this and several other formulations of admissibility conditions, with more detailed explanations, are available in the literature, for example in many of the books cited in the Preface.
In the context of the Riemann problem, the entropy condition allows us to determine if the physical solution should involve a shock or a rarefaction. In the case of scalar conservation laws, the entropy condition is quite simple and can be formulated in terms of the flux function of the conservation law as follows
Step8: Animations of more complex solutions
We can now explore a few more examples that are representative of phenomena we will observe in more complicated systems, with more interesting initial data than the single jump discontinuity of a Riemann problem. For the animations shown below, the solution is computed numerically using PyClaw.
Bump initial conditions
The animation below in the notebook, or on this webpage, shows the solution of Burgers' equation for an initial Gaussian hump, as discussed at the beginning of this chapter. A shock forms on the leading edge and the trailing edge spreads out as a rarefaction.
Step9: Three-state data with merging shocks
For an initial condition that is piecewise-constant with three values, one can solve Burgers' equation by solving a Riemann problem locally for each discontinuity and then considering how the resulting waves interact. For instance, consider the initial condition
\begin{align}
q_0(x) =
\begin{cases}
q_\ell \quad \text{if} \ \ x < -1 \
q_m \quad \text{if} \ \ -1\le x \le 1 \
q_r \quad \text{if} \ \ x > 1.
\end{cases}
\end{align}
We can decompose this into two Riemann problems
Step10: Three-state data with interacting shock and rarefaction
As one would expect, this can become more complicated when one or both of the waves generated are given by rarefactions. The animation below, or on this webpage, shows how a right-propagating shock can overtake a rarefaction. The shock speed changes as it passes through the rarefaction, since the state just to the left of the shock is changing. This is seen clearly in the $x-t$ plane, where the slope of the shock is changed after interacting with the rarefaction. Note the shock is again marked as a wider line.
Step11: Analogously, one can also have a rarefaction overtaking a shock. As seen in the animation below, or this webpage, this can even change the direction in which the shock is propagating. This happens because the velocity of the shock depends on the slopes of the impinging characteristics. When a shock interacts with a rarefaction, the varying slopes of the characteristic curves produce a constant change of the shock propagation velocity.
In the notebook, try modifying the values of $q_\ell$, $q_m$ and $q_r$ in order to see what other behavior can be observed. Is it possible to observe two interacting rarefactions? | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from ipywidgets import interact
from ipywidgets import widgets
from ipywidgets import FloatSlider, fixed
from exact_solvers import burgers
from exact_solvers import burgers_demos
from IPython.display import HTML
Explanation: Burgers' equation
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from ipywidgets import interact
from ipywidgets import widgets
from ipywidgets import FloatSlider, fixed
from exact_solvers import burgers
from exact_solvers import burgers_demos
interact(burgers_demos.multivalued_solution,
t=FloatSlider(min=0.,max=9.,value=7.75), fig=fixed(0));
Explanation: In this chapter, we study a simple scalar nonlinear conservation law: Burgers' equation. Burgers' equation models momentum transport in a fluid of uniform density and pressure, and it is the simplest equation that captures some key features of gas dynamics or water waves.
To examine the Python code for this chapter, see:
exact_solvers/burgers.py ...
on github,
exact_solvers/burgers_demos.py...
on github.
Burgers' equation has been used extensively for developing both theory and numerical methods, and it will allow us to explore the Riemann problem for a nonlinear conservation law. Burgers' equation is a scalar conservation law with flux $f(q)=q^2/2$:
\begin{align}
q_t + \left(\frac{1}{2}q^2\right)_x = 0.
\label{burgers0}
\end{align}
The quasilinear form is obtained by applying the chain rule to the flux term:
\begin{align}
q_t + qq_x = 0.
\end{align}
This equation looks very similar to the advection equation, with the difference that the advection speed at each point is given by the solution $q$. Burgers' equation is often viewed as a simplified version of equations in fluid dynamics or water waves that also have nonlinear fluxes. Our study of the dynamics of equation (\ref{burgers0}) is a step toward understanding these more complex nonlinear systems, in Shallow_water and Euler. In Traffic_flow we consider another scalar nonlinear conservation law that has a similar structure to Burgers' equation and a simpler physical interpretation.
The characteristic speed for Burgers' equation is $f'(q) = q$. As long as the solution is smooth, the solution is constant along characteristic curves $X(t)$ satisfying $X'(t) = f'(q(X(t),t)$ since then
$$
\frac{d}{dt} q(X(t),t) = q_x(X(t),t)X'(t) + q_t(X(t),t) = 0.
$$
Because of this, the characteristics are straight lines.
However, since the characteristic speed depends on the solution, these lines are not parallel and characteristics may converge or spread out.
Shock formation
In the figure below we consider Burgers' equation with a Gaussian hump as the initial data. Since the characteristic speed in Burgers' equation is given by $q$ itself, the peak of the hump travels faster than the rest, and characteristics are converging at the front of the traveling wave (where $f'(q)$ decreases with $x$) while they are spreading out behind the peak (where $f'(q)$ increases with $x$). The dashed line shows the initial condition while the solid lines show the solution at later times.
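A side note with a small hedged sketch: for smooth initial data $q_0(x)$ the first crossing of characteristics happens at $t_B = -1/\min_x q_0'(x)$ (a standard result for Burgers' equation, not derived in this chapter), and this is easy to estimate numerically. The hump below is illustrative only and not necessarily the one used by burgers_demos.
import numpy as np
x_demo = np.linspace(-10, 10, 2001)
q0 = np.exp(-x_demo**2)              # example Gaussian hump
dq0 = np.gradient(q0, x_demo)
print("estimated breaking time:", -1.0 / dq0.min())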
End of explanation
interact(burgers_demos.shock_location,
xshock = FloatSlider(min=6,max=10,step=0.25,value=7.75),
fig=fixed(0));
Explanation: Notice that at first $q$ remains single-valued for every $x$. However, after some time the crest of the wave overtakes the leading edge. After this time, we obtain a triple-valued solution for certain values of $x$. The first time this overtaking happens is referred to as the breaking time -- a reference to waves breaking on the beach. It is also the point where the conservation law, in its differential form, breaks down and where the characteristics cross each other for the first time. When characteristics cross, a shock wave, or discontinuity, forms. Mathematically, replacing the triple-valued region with a discontinuity will avoid the problem of the solution being multivalued. Where should the shock be located?
If we replace part of the multivalued solution interval with a shock, some mass will be removed (area $A_1$ in the figure below) and some mass will be added (area $A_2$ in the figure below). In the live notebook, the figure below shows possible locations for the shock at a given time; in order to maintain conservation of the integral of $q$, the shock must be placed so that these two areas are equal.
End of explanation
interact(burgers_demos.shock(),
t=widgets.FloatSlider(min=0,max=1.0,value=0.5),
which_char=widgets.Checkbox(value=True,
description='Show characteristics'));
Explanation: This geometric reasoning provides a nice intuition for the shock location, but is cumbersome in practice. To determine the location of the shock we can use the Rankine-Hugoniot jump condition, which we will derive in Traffic_flow, and which requires that the jump in flux across a propagating shock must be related to the jump in $q$ by
$$
s(q_r-q_\ell) = f(q_r) - f(q_\ell),
$$
at each instant in time, where $s$ is the shock speed at this time.
Since the flux for Burgers' equation is $f(q) = q^2/2$, this gives
\begin{align}
s(q_r-q_\ell) & = \frac{1}{2} (q_r^2 - q_\ell^2) \
\Rightarrow \ \ \ \ s &= \frac{1}{2}(q_\ell + q_r).
\end{align}
In general (as in the image above) the states $q_\ell$ and $q_r$ just to the left and right of the shock will not be constant and the speed of a shock will change in time.
Shock solution
For Burgers' equation, (or any scalar hyperbolic conservation law with a convex flux function, such as the traffic flow problem considered in Traffic_flow), the solution of the Riemann problem consists of a single wave, which may be either a shock or a rarefaction, separating regions of constant value $q_\ell$ and $q_r$. Here convexity of the flux $f(q)$ means that $f'(q)$ is either monotonically increasing or monotonically decreasing as $q$ varies. If $f(q)$ is not convex then the solution can be more complicated; see the section on Convexity below and in Nonconvex_scalar.
The analysis above already yields the solution to the Riemann problem in the case that the resulting wave is a shock. The entropy condition, as we will explain below, indicates that a shock will form when $f'(q_\ell) > f'(q_r)$; i.e. when $q_\ell>q_r$. In this case, the solution is
\begin{align}
q(x,t) =
\begin{cases}
q_\ell \quad \text{if} \ \ x/t<s \\
q_r \quad \text{if} \ \ x/t>s.
\end{cases}
\end{align}
Below, we plot the solution of Burgers' equation for an initial condition that leads to a shock (i.e. with $q_\ell>q_r$).
End of explanation
interact(burgers_demos.rarefaction_figure,
t = FloatSlider(min=0,max=9,value=5));
Explanation: Rarefaction wave
In the previous figure with the hump as the initial condition, we observed that a shock formed on the right side of the hump. However, on the left side, the characteristics spread out and will never cross. This part of the solution is called a rarefaction wave. This is the kind of behavior we will observe in the solution of the Riemann problem when $q_\ell<q_r$.
In the next figure, we consider such a Riemann problem. Although the initial data is discontinuous at $x=0$, we can think of smearing it out slightly to a continuous function so that all values between $q_\ell$ and $q_r$ are taken over a very narrow region around $x=0$. As time evolves, each value of $q$ propagates according to the advection speed $q$ given by the quasi-linear equation. Therefore, after time $t$, each value $q$ must propagate a distance $x=qt$, so the solution along the rarefaction is then $q = x/t$. As $q_\ell<q_r$, the smallest and largest displacements are given by $q_\ell t$ and $q_r t$, respectively.
End of explanation
interact(burgers_demos.rarefaction(),
t=widgets.FloatSlider(min=0,max=1.0,value=0.5),
which_char=widgets.Checkbox(value=True,
description='Show characteristics'));
Explanation: The full rarefaction solution for the Burgers' Riemann problem is then simply given by
\begin{align}
q(x,t) =
\begin{cases}
q_\ell, \quad \text{for} \ \ x<q_\ell t \\
x/t, \quad \text{for} \ \ q_\ell t \le x \le q_r t \\
q_r, \quad \text{for} \ \ x>q_r t.
\end{cases}
\end{align}
As we will see in Traffic_flow, the rarefaction solution is always a self-similar solution. This means that it can be expressed as a function of the ratio between position and time $q(x,t) = \tilde{q}(x/t)$, so it remains the same when rescaling both $x$ and $t$ by the same factor. In Burgers' equation, the form of the rarefaction is particularly simple since the advection speed is simply $q$ and so the rarefaction wave is linear in $x$ with slope $1/t$ at time $t$.
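As a small illustration (a sketch added here, assuming NumPy; this is not the notebook's own code), the piecewise rarefaction formula above can be evaluated directly for $q_\ell < q_r$ and $t>0$:
import numpy as np

def burgers_rarefaction(x, t, q_l, q_r):
    # self-similar solution q(x, t) = q_tilde(x / t) for q_l < q_r and t > 0
    xi = np.asarray(x) / t
    return np.where(xi < q_l, q_l, np.where(xi > q_r, q_r, xi))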
Rarefaction solution
In the figure below, we plot a solution of the Riemann problem with $q_\ell<q_r$, exhibiting a rarefaction.
End of explanation
interact(burgers_demos.unphysical(),
t=widgets.FloatSlider(min=0,max=1.0,value=0.5),
which_char=widgets.Checkbox(value=True,
description='Show characteristics'));
Explanation: Weak solutions
As we mentioned before, the differential form of the equation breaks down in the presence of shocks/discontinuities. However, the integral form of the conservation law remains valid.
Let's integrate the general conservation law $q_t+f(q)_x=0$ from $x=x_1$ to $x=x_2$ and $t=t_1$ to $t=t_2$:
\begin{align}
\int_{t_1}^{t_2}\int_{x_1}^{x_2} [q_t+f(q)_x] dx dt = 0.
\end{align}
This integral can be rewritten in terms of an indicator function $\phi(x,t)$ in $[x_1,x_2]\times[t_1,t_2]$ (defined to be 1 in this region, zero elsewhere):
\begin{align}
\int_{0}^{\infty}\int_{-\infty}^{\infty} [q_t+f(q)_x]\phi(x,t) dx dt = 0.
\label{eq:Burgersintclaw2}
\end{align}
We can further replace $\phi(x,t)$ by a smooth function with compact support on some region of the $x-t$ plane (zero outside a closed and bounded region). Assuming $t=0$ is in the support of $\phi(x,t)$, integration by parts yields
\begin{align}
\int_{0}^{\infty}\int_{-\infty}^{\infty} [q\phi_t+f(q)\phi_x] dx dt = -\int_{-\infty}^{\infty}q(x,0)\phi(x,0)dx,
\label{eq:Burgersintclaw3}
\end{align}
where now the derivatives are on $\phi(x,t)$ and not on $q$ or $f(q)$, so the equation still makes sense for discontinuous $q$. Note that we only obtain one boundary term along $t=0$ since $\phi(x,t)$ vanishes at infinity. The function $q(x,t)$ is called a weak solution of the conservation law if (\ref{eq:Burgersintclaw3}) holds for all functions $\phi(x,t)$ that are continuously differentiable and have compact support (bump functions). However, the function $\phi(x,t)$ chosen in (\ref{eq:Burgersintclaw2}) does not satisfy these conditions since it is not smooth. Nonetheless, we can approximate this function arbitrarily well by a smooth function. Note that any weak solution is a solution of the integral conservation law and vice versa.
Weak solutions are thus allowed to include discontinuities. The shock solution presented above, for instance, is a weak solution of Burgers' equation. After characteristics cross, there is no strong solution of the conservation law, and we must resort to weak solutions.
An unphysical weak solution
A potential problem with the notion of weak solutions is that in some cases the weak solution is not unique. For example, consider the Riemann problem for Burgers' equation with $q_\ell < q_r$. As mentioned already we expect a rarefaction to be the correct solution. This is because if we smeared out the initial data just slightly then there would be smoothly varying characteristics emerging from each point $x$ and these characteristics would spread out. However, for exactly discontinuous initial data, there exists another weak solution in which the initial discontinuity propagates as a shock wave with the Rankine-Hugoniot speed $(q_\ell + q_r)/2$:
\begin{align}
q(x,t) =
\begin{cases}
q_\ell \quad \text{if} \ \ x/t<s \\
q_r \quad \text{if} \ \ x/t>s.
\end{cases}
\end{align}
This unphysical solution is also referred to as an expansion shock, and it is also a weak solution to Burgers' equation. In the next figure, we plot this weak solution.
End of explanation
burgers.riemann_solution_interact()
Explanation: Note the behavior of the characteristics with respect to the shock. The fact that the characteristics are spreading away from the shock rather than converging on it indicates that the correct solution should instead be a rarefaction. In order to be able to specify which of the weak solutions is physically correct, we need to derive a mathematical condition from our physical intuition gained from observing the behavior of the characteristics. This condition is referred to as the entropy condition.
The entropy condition
A given initial value problem for a hyperbolic PDE may have many weak solutions.
A condition that selects a unique physically correct solution out of these weak solutions is called an admissibility condition or more often an entropy condition. This name comes from gas dynamics, where the physical entropy must increase in the gas passing through a shock, according to the second law of thermodynamics. A discontinuity that violates this is non-physical. A mathematical "entropy function" with a similar property can often be defined for other conservation laws. More discussion of this and several other formulations of admissibility conditions, with more detailed explanations, are available in the literature, for example in many of the books cited in the Preface.
In the context of the Riemann problem, the entropy condition allows us to determine if the physical solution should involve a shock or a rarefaction. In the case of scalar conservation laws, the entropy condition is quite simple and can be formulated in terms of the flux function of the conservation law as follows: the solution of a scalar Riemann problem will consist of a shock only if
$$
f'(q_\ell) > f'(q_r).
$$
In other words, the solution is a shock only if nearby characteristics from the left and right approach the shock as time progresses. This is often called the Lax Entropy Condition. If nearby characteristics are spreading out, as in the last example, the correct solution is instead a rarefaction. In the case of Burgers' equation, the flux function is $f(q)=q^2/2$, so the correct solution is a shock only if
$$
q_\ell > q_r,
$$
which can be clearly observed in the interactive solution below (in the notebook). In later chapters, we will see how this condition generalizes to systems of conservation laws.
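Combining the shock and rarefaction cases (a hedged sketch, assuming NumPy; this is not the notebook's own solver), the exact Riemann solution for Burgers' equation can be written as:
import numpy as np

def burgers_riemann(x, t, q_l, q_r):
    xi = np.asarray(x) / t
    if q_l > q_r:                         # entropy-satisfying shock
        s = 0.5 * (q_l + q_r)             # Rankine-Hugoniot speed
        return np.where(xi < s, q_l, q_r)
    # otherwise a rarefaction fan connects the two states
    return np.where(xi < q_l, q_l, np.where(xi > q_r, q_r, xi))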
Convexity
The flux $f(q) = q^2/2$ for Burgers' equation is a convex function, since $f''(q) = 1 > 0$ for all values of $q$. This means that as $q$ varies between $q_\ell$ and $q_r$ the characteristic speed $f'(q) = q$ is either monotonically increasing (if $q_\ell < q_r$) or monotonically decreasing (if $q_\ell > q_r$). Hence for any Riemann problem the characteristic speeds for intermediate states are either purely converging or purely diverging, and the Riemann solution is always either a single rarefaction wave or a single shock wave.
The flux $f(\rho) = \rho(1-\rho)$ of the simple traffic flow model that we will consider in Traffic_flow also has the property that $f'(\rho)$ varies monotonically with $\rho$ (in the context of hyperbolic PDEs, a flux function is referred to as convex if either $f''(q)\ge0$ for all $q$ or $f''(q)\le 0$ for all $q$). Since $f''$ is always negative for the traffic flow model, the correct Riemann solution is a shock if $\rho_\ell < \rho_r$, or a rarefaction wave if $\rho_\ell > \rho_r$.
The solution to the Riemann problem can be much more complicated if $f'(q)$ is not monotonically varying between $q_\ell$ and $q_r$, i.e. if $f''(q)$ changes sign. In this case the Riemann solution can consist of multiple shock and rarefaction waves. The nonconvex case is explored further in Nonconvex_scalar.
Interactive solution and examples
In the live notebook, the figure below is an interactive solution of the Riemann problem for Burgers' equation. The values of the initial conditions and the time can be modified to observe their effect on the characteristic structure and the solution.
End of explanation
anim = burgers_demos.bump_animation(numframes = 50)
HTML(anim)
Explanation: Animations of more complex solutions
We can now explore a few more examples that are representative of phenomena we will observe in more complicated systems, with more interesting initial data than the single jump discontinuity of a Riemann problem. For the animations shown below, the solution is computed numerically using PyClaw.
Bump initial conditions
The animation below in the notebook, or on this webpage, shows the solution of Burgers' equation for an initial Gaussian hump, as discussed at the beginning of this chapter. A shock forms on the leading edge and the trailing edge spreads out as a rarefaction.
End of explanation
anim = burgers_demos.triplestate_animation(ql = 4.0, qm = 2.0,
qr = 0.0, numframes = 50)
HTML(anim)
Explanation: Three-state data with merging shocks
For an initial condition that is piecewise-constant with three values, one can solve Burgers' equation by solving a Riemann problem locally for each discontinuity and then considering how the resulting waves interact. For instance, consider the initial condition
\begin{align}
q_0(x) =
\begin{cases}
q_\ell \quad \text{if} \ \ x < -1 \\
q_m \quad \text{if} \ \ -1\le x \le 1 \\
q_r \quad \text{if} \ \ x > 1.
\end{cases}
\end{align}
We can decompose this into two Riemann problems: one at $x=-1$ and another at $x=1$. Each of these two Riemann problems will yield either a shock or a rarefaction. The interesting part is what happens when the generated shocks or rarefactions interact. In the scenario shown below, let us assume that both Riemann problems produce a right-propagating shock. However, the shock generated at $x=-1$ propagates faster, so it will eventually reach the shock originally generated at $x=1$. At the point in time when the shocks collide, one can simply restate the problem as a new Riemann problem and solve the whole problem analytically. In this animation, the plot on the left shows the solution $q(x)$ evolving through time, while the one on the right shows the characteristic structure in the $x-t$ plane, with the shocks shown as wide lines, the characteristics as thin lines, and time marked by a horizontal dashed line. One can clearly see the two shocks merge to become a single shock at later times. This animation can be viewed in the live notebook or on this webpage.
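A rough check of when the two shocks meet, using the same states as the merging-shock animation in this example ($q_\ell=4$, $q_m=2$, $q_r=0$); this is an added sketch, not part of the original notebook:
q_l, q_m, q_r = 4.0, 2.0, 0.0
s_left  = 0.5 * (q_l + q_m)          # shock from the jump at x = -1
s_right = 0.5 * (q_m + q_r)          # shock from the jump at x = +1
t_merge = 2.0 / (s_left - s_right)   # solve -1 + s_left * t = 1 + s_right * t
print(t_merge)                       # 1.0: with these states the shocks merge at x = 2, t = 1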
End of explanation
anim = burgers_demos.triplestate_animation(ql = 4.0, qm = -1.5,
qr = 0.5, numframes = 50)
HTML(anim)
Explanation: Three-state data with interacting shock and rarefaction
As one would expect, this can become more complicated when one or both of the waves generated are given by rarefactions. The animation below, or on this webpage, shows how a right-propagating shock can overtake a rarefaction. The shock speed changes as it passes through the rarefaction, since the state just to the left of the shock is changing. This is seen clearly in the $x-t$ plane, where the slope of the shock is changed after interacting with the rarefaction. Note the shock is again marked as a wider line.
End of explanation
anim = burgers_demos.triplestate_animation(ql = -1, qm = 3.0,
qr = -2, numframes = 50)
HTML(anim)
Explanation: Analogously, one can also have a rarefaction overtaking a shock. As seen in the animation below, or on this webpage, this can even change the direction in which the shock is propagating. This happens because the velocity of the shock depends on the slopes of the impinging characteristics. When a shock interacts with a rarefaction, the varying slopes of the characteristic curves produce a continuous change in the shock propagation velocity.
In the notebook, try modifying the values of $q_\ell$, $q_m$ and $q_r$ in order to see what other behavior can be observed. Is it possible to observe two interacting rarefactions?
End of explanation |
10,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing 2 Stacks of Catalogues
Environment
Step1: Need more packages?
Step2: Main()
Run legacy-zeropoints-qa.py like this "python legacy-zeropoints-qa.py" to analyze everything.
See below to walk through it step by step.
Step3: MZLS v2, v3 tractor cats
Step4: Cosmos comparison | Python Code:
import sys
print sys.executable
# Hack!, this avoids messing with NERSC's config file for jupyter hub
sys.path.append('/global/homes/k/kaylanb/repos/astrometry.net')
sys.path.append('/global/homes/k/kaylanb/repos/tractor')
sys.path
print sys.path
Explanation: Comparing 2 Stacks of Catalogues
Environment: anaconda-2.7
End of explanation
# Easy if pip, conda installable
#!/anaconda2/bin/pip install ...
#!/anaconda2/bin/conda install ...
Explanation: Need more packages?
End of explanation
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import fitsio
import glob
import os
import matplotlib.patches as mpatches
from matplotlib.collections import PatchCollection
from astropy import units
from astropy.coordinates import SkyCoord
from astrometry.util.fits import fits_table, merge_tables
from tractor.brightness import NanoMaggies
import catalogues as cat
Explanation: Main()
Run legacy-zeropoints-qa.py like this "python legacy-zeropoints-qa.py" to analyze everything.
See below to walk through it step by step.
End of explanation
cut= np.all((np.all(v2.get('decam_nobs')[:,[4]] == 1,axis=1),\
),axis=0)
v2.get('decam_nobs')[:,4][cut]
np.where(v2.get('decam_nobs')[:,4] == 1)
fits_funcs= cat.CatalogueFuncs()
v2=fits_funcs.stack('v2_cats.txt')
v3=fits_funcs.stack('v3_cats.txt')
cut= np.all((np.all(v2.get('decam_nobs')[:,[4]] == 1,axis=1),\
),axis=0)
v2.cut(cut)
cut= np.all((np.all(v3.get('decam_nobs')[:,[4]] == 1,axis=1),\
),axis=0)
v3.cut(cut)
mat=cat.Matcher()
imatch,imiss,d2d= mat.match_within(v2,v3) #,dist=1./3600)
fits_funcs.set_mags(v2)
fits_funcs.set_mags(v3)
import plots
k=plots.Kaylans(v2,v3,imatch,\
ref_name='v2',obs_name='v3',savefig=True)
d=plots.Dustins(v2,v3,imatch,d2d,\
ref_name='v2',obs_name='v3',savefig=True)
import plots
d=plots.Dustins(v2,v3,imatch,d2d,plot_all=False)
# signature reference: mat.match_within(ref, obs, dist=1./3600)
# Take v2 and shift everything by 1/3 pix in dec
fits_funcs= cat.CatalogueFuncs()
v4=fits_funcs.stack('v2_cats.txt')
third_pix= (1./3)*0.262/3600
v4.set('dec',v4.get('dec')+third_pix)
fits_funcs.set_extra_data(v4)
mat=cat.Matcher()
imatch,imiss,d2d= mat.match_within(v2,v4) #,dist=1./3600)
a=plt.hist(d2d * 3600., 100,range=(0.01,0.2))
import plots
k=plots.Kaylans(v2,v3,imatch,plot_all=False)
for mytype in ['PSF','SIMP','DEV','COMP']:
k.stacked_confusion_matrix(v2[ imatch['ref'] ],v3[ imatch['obs'] ],\
ref_name='v2',obs_name='v3',savefig=True,\
band='z',mytype=mytype)
import plots
k=plots.Kaylans(v2,v3,imatch,plot_all=False)
# k.barchart(v2,v3,\
# ref_name='v2',obs_name='v3',savefig=True,prefix='alldata')
# k.barchart(v2[ imatch['ref']],v3[imatch['obs']],\
# ref_name='v2',obs_name='v3',savefig=True,prefix='matchedonly')
k.delta_mag_vs_mag(v2[ imatch['ref']],v3[imatch['obs']],\
ref_name='v2',obs_name='v3',savefig=True,ylim=[-0.2,0.2])
import plots
d=plots.Dustins(v2,v3,imatch,d2d,plot_all=False)
d.match_distance(d2d,range=(0,0.2),prefix='',savefig=True)
# imatch_2,imiss_2,d2d_2= mat.match_within(v3,v2)
# d.match_distance(d2d_2,range=(0,0.2),savefig=True,prefix='2_')
import catalogues
mat=catalogues.Matcher()
d={}
for key in ['v2_auto','v3_auto','v2v3_cross']:
d[key]={}
if key == 'v2v3_cross':
tup= mat.nearest_neighbors_within(v2,v3,within=1./3600,min_nn=1,max_nn=3)
elif key == 'v2_auto':
tup= mat.nearest_neighbors_within(v2,v2,within=1./3600,min_nn=2,max_nn=4)
elif key == 'v3_auto':
tup= mat.nearest_neighbors_within(v3,v3,within=1./3600,min_nn=2,max_nn=4)
d[key]['ref'],d[key]['obs'],d[key]['dist']= tup
d[key]['dist']
key='v2v3_cross'
dists={}
for cnt,nn in enumerate(np.sort(d[key]['ref'].keys())):
if cnt == 0:
dists[nn]= d[key]['dist'][nn]
else:
dists[nn]= np.concatenate((d[key]['dist'][nn], dists[str(int(nn)-1)]))
print('nn=%s, len(dists[nn])=%d' % (nn,len(dists[nn])))
# if str(int(nn)-1) in d[key]['ref'].keys():
# distsplt.hist(d[key]['dist'][nn]*3600,20,range=(0.01,0.1),normed=True)
print dists['1']
np.histogram(dists['1']*3600,bins=bins,normed=True)
key='v2v3_cross'
bins=np.linspace(0.01,0.1,num=20)
print bins
for nn in np.sort(d[key]['ref'].keys())[::-1]:
print nn
hist,bj= np.histogram(dists[nn]*3600,bins=bins,normed=True)
binc= (bins[1:]+bins[:-1])/2
plt.step(binc,hist, where='mid')
print binc,hist
plt.xlim((bins[0],bins[-1]))
plt.ylim()
# plt.hist(dists[nn]*3600,20,range=(0.01,0.1),normed=True)
fig,ax=plt.subplots(1,3) #,figsize=(4,8),sharex=True)
plt.subplots_adjust(hspace=0)
# for cnt,key in zip(range(3),['v2_auto','v3_auto','v2v3_cross']):
cnt=0
key='v2_auto'
for nn in np.sort(d[key]['ref'].keys()):
ax[cnt].hist(d[key]['dist'][nn]*3600,100,range=(0,0.1),label='nn='+nn,\
normed=True)
ax[cnt].legend(loc='upper right')
# hist[band],bins= np.histogram(chi[band][imag],\
# range=(low,hi),bins=50,normed=True)
# db= (bins[1:]-bins[:-1])/2
# binc[band]= bins[:-1]+db
# ax[cnt].step(binc[band],hist[band], where='mid',c='b',lw=2) #label="%.1f < mag < %.1f" % (b_low,b_hi))
d.match_distance(d2d,range=(0,0.15),savefig=False)
k.
Explanation: MZLS v2, v3 tractor cats
End of explanation
fits_funcs= cat.CatalogueFuncs()
c40=fits_funcs.stack('cosmos_40_tractor_list.txt')
c41=fits_funcs.stack('cosmos_41_tractor_list.txt')
mat=cat.Matcher()
cmatch,imiss,c_d2d= mat.match_within(c40,c41) #,dist=1./3600)
fits_funcs.set_extra_data(c40)
fits_funcs.set_extra_data(c41)
import plots
e=plots.EnriqueCosmos(c40,c41,cmatch,ref_name='40',obs_name='41',savefig=False)
import plots
Explanation: Cosmos comparison
End of explanation |
10,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprising great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose.
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprising great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
10,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Intro to Keras Functional API
Preamble
Step1: Step 2 | Python Code:
# let's load MNIST data as we did in the exercise on MNIST with FC Nets
# %load ../solutions/sol_52.py
Explanation: Quick Intro to Keras Functional API
Preamble: All models (layers) are callables
```python
from keras.layers import Input, Dense
from keras.models import Model
this returns a tensor
inputs = Input(shape=(784,))
a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
this creates a model that includes
the Input layer and three Dense layers
model = Model(input=inputs, output=predictions)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels) # starts training
```
Multi-Input Networks
Keras Merge Layer
Here's a good use case for the functional API: models with multiple inputs and outputs.
The functional API makes it easy to manipulate a large number of intertwined datastreams.
Let's consider the following model.
```python
from keras.layers import Dense, Input
from keras.models import Model
from keras.layers.merge import concatenate
left_input = Input(shape=(784, ), name='left_input')
left_branch = Dense(32, input_dim=784, name='left_branch')(left_input)
right_input = Input(shape=(784,), name='right_input')
right_branch = Dense(32, input_dim=784, name='right_branch')(right_input)
x = concatenate([left_branch, right_branch])
predictions = Dense(10, activation='softmax', name='main_output')(x)
model = Model(inputs=[left_input, right_input], outputs=predictions)
```
Resulting Model will look like the following network:
<img src="../imgs/multi_input_model.png" />
Such a two-branch model can then be trained via e.g.:
python
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([input_data_1, input_data_2], targets) # we pass one data array per model input
Try yourself
Step 1: Get Data - MNIST
End of explanation
## try yourself
## `evaluate` the model on test data
Explanation: Step 2: Create the Multi-Input Network
End of explanation |
10,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This script checks if all the ROI atlases in functional space have all the ROIs
Step1: As seen from histogram most of the corrupted ROIs are in cerebellum.
Therefore I have chosen to consider only the ROIs that are in brainetome.
Moreover in some of the subjects, ROIs belonging to Brainnetome atlas are also missing.
Step2: Missing ROIs(Brainnetome)
ROI number 94 is missing in 12 subjects
Check if these 12 subjects are included in analysis, if they are then ignore the ROI
Some others were also missing but in very few subjects -- Check
ROI number 255 (of Cerebellum) is missing in all 1102 subjects
Step3: Results
Step4: Results
Step5: Major realization | Python Code:
import numpy as np
import nibabel as nib
atlas_path = '/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/atlas_paths/atlas_file_list.npy'
atlas_files = np.load(atlas_path)
atlas_files[41]
in_file = atlas_files[40]
atlas_values_list = nib.load(in_file).get_data().ravel()
universe = set(np.arange(274))
universe - set(atlas_values_list)
set(atlas_values_list);
# Now the real code:
import re
universe = set(np.arange(274))
missing_ROIs = []
for index,in_file in enumerate(atlas_files):
sub_id_extracted = re.search('.+_subject_id_(\d+)', in_file).group(1)
print(sub_id_extracted)
atlas_values = nib.load(in_file).get_data()
atlas_values_list = atlas_values.ravel()
atlas_values_max = max(atlas_values_list)
missing_ROIs.append((index,sub_id_extracted,list(universe - set(atlas_values_list)),atlas_values_max))
missing_ROIs
miss_refined = []
for miss in missing_ROIs:
if miss[3] == 274.0 and len(miss[2]) == 3:
miss_refined.append((miss[1],miss))
miss_refined
bad_subjects = []
for i in miss_refined:
bad_subjects.append(i[0])
bad_subjects = list(map(int, bad_subjects))
bad_subjects
bad_rois = []
for sub in missing_ROIs:
bad_rois.extend(sub[2])
# (bad_rois)
%matplotlib inline
import matplotlib.pyplot as plt
bins = np.arange(275)
count_roi = plt.hist(bad_rois, bins = bins)
# plt.ylim([0, 10])
Explanation: This script checks if all the ROI atlases in functional space have all the ROIs
End of explanation
for i in zip(list(count_roi[0]),list(count_roi[1])):
print(i)
# (Number of Subjects that doesnot have the ROI, ROI)
Explanation: As seen from histogram most of the corrupted ROIs are in cerebellum.
Therefore I have chosen to consider only the ROIs that are in brainetome.
Moreover in some of the subjects, ROIs belonging to Brainnetome atlas are also missing.
End of explanation
# Runall:
import pandas as pd
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df = df.sort_values(['SUB_ID'])
# df = df.sort_values(['SUB+AF8-ID'])
bugs = ['51232','51233','51242','51243','51244','51245','51246','51247','51270','51310', '50045']
# sub 50045 has fewer ROIs coz of more shift of head
# '0051242' in bugs
# df
# selecting Autistic males(DSM IV) of age <= 18 years
df_aut_lt18_m = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1)]
df_aut_lt18_m.shape
df_aut_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_aut_lt18_m_eyesopen;
df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen;
df_aut_lt18_m_eyesopen_subid = df_aut_lt18_m_eyesopen.as_matrix(['SUB_ID']).squeeze()
df_td_lt18_m_eyesopen_subid = df_td_lt18_m_eyesopen.as_matrix(['SUB_ID']).squeeze()
# Sanity checks
set(df_aut_lt18_m_eyesopen_subid) - (set(df_aut_lt18_m_eyesopen_subid) - set(df_td_lt18_m_eyesopen_subid))
df_aut_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 2)]
df_aut_lt18_m_eyesclosed;
df_td_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 2)]
df_td_lt18_m_eyesclosed;
df_aut_lt18_m_eyesclosed_subid = df_aut_lt18_m_eyesclosed.as_matrix(['SUB_ID']).squeeze()
df_td_lt18_m_eyesclosed_subid = df_td_lt18_m_eyesclosed.as_matrix(['SUB_ID']).squeeze()
# Sanity checks
set(df_aut_lt18_m_eyesclosed_subid) - (set(df_aut_lt18_m_eyesclosed_subid) - set(df_td_lt18_m_eyesclosed_subid))
df_td_lt18_m_eyesclosed_subid
# df_aut_lt18_m_eyesopen_subid
# set(bad_subjects) - (set(bad_subjects) - set(df_aut_lt18_m_eyesopen_subid) )
# - set(bad_subjects);
sub_lt_246 = []
for sub in missing_ROIs:
# missing_flag = False
# for roi in sub[2]: # get the roi list for a subject
if any(roi < 246 for roi in sub[2]): # missing ROI is < 246 i.e belongs to brainettome
sub_lt_246.append(sub)
len(sub_lt_246)
sub_lt_246
# Now to find the Subjects that have corrupted ROIs < 246 with modified list
corrupted_rois_lt_246 = []
subj = []
for sub in sub_lt_246:
sub_2 = np.array(sub[2])
modified_list = np.sort(sub_2[np.where(sub_2 < 246)[0]])
corrupted_rois_lt_246.append((sub[1],modified_list))
subj.append(int(sub[1]))
corrupted_rois_lt_246, subj
subj = set(subj)
# Subjects that are in the problematic set that are in the AUTISTIC SET
set(subj) - (set(subj) - set(df_aut_lt18_m_eyesopen_subid))
# Subjects that are in the problematic set that are in the TD SET
set(subj) - (set(subj) - set(df_td_lt18_m_eyesopen_subid))
Explanation: Missing ROIs(Brainnetome)
ROI number 94 is missing in 12 subjects
Check if these 12 subjects are included in analysis, if they are then ignore the ROI
Some others were also missing but in very few subjects -- Check
ROI number 255 (of Cerebellum) is missing in all 1102 subjects
End of explanation
# Subjects that are in the problematic set that are in the AUTISTIC SET
set(subj) - (set(subj) - set(df_aut_lt18_m_eyesclosed_subid))
# Subjects that are in the problematic set that are in the TD SET
set(subj) - (set(subj) - set(df_td_lt18_m_eyesclosed_subid))
Explanation: Results: Remove & Ignore (Eyes Open):
Remove sub-51276 from the TD as it contains a lot of Corrupted ROIs
Ignore ROI 94 as sub-50279 belonging to Autistic group has corrupted ROI
End of explanation
# SUB - ROI
# 0050279 - 94
# 0050286 - 94
# 0050643 - 69,70,94,109,110,117,118
# 0050648 - 93,94
# 0050651 - 94,109,110,117,118
# 0050652 - 110,118,94
# 0050653 - 93,94,118
# 0050655 - 93,94,109,110,112,118
# 0050658 - 94,110,118
Explanation: Results: Remove & Ignore (Eyes Closed)
Remove sub - 50746 as it contains a lot of corrupted ROIs
Remove sub-50727 as it contains a lot of corrupted ROIs
Ignore ROI 126 as it is corrupted in sub-51472
Final Result:
ROIs to ignore in Brainnetome - 126, 94
Subjects to ignore 51276, 50746, 50727
End of explanation
sub_gt_255 = []
for sub in missing_ROIs:
# missing_flag = False
# for roi in sub[2]: # get the roi list for a subject
if any(roi > 255 for roi in sub[2]): # missing ROI is < 246 i.e belongs to brainettome
sub_gt_255.append(sub)
sub_gt_255
# Now to find the Subjects that have corrupted ROIs > 254 with modified list
corrupted_rois_gt_255 = []
subj = []
for sub in sub_gt_255:
sub_2 = np.array(sub[2])
modified_list = np.sort(sub_2[np.where(np.logical_and((sub_2 > 255),(sub_2 != 261)))[0]])
if len(modified_list != 0):
corrupted_rois_gt_255.append((sub[1],modified_list))
subj.append(int(sub[1]))
subj, corrupted_rois_gt_255
# Subjects that are in the problematic set that are in the AUTISTIC SET
set(subj) - (set(subj) - set(df_aut_lt18_m_eyesopen_subid))
# Subjects that are in the problematic set that are in the TD SET
set(subj) - (set(subj) - set(df_td_lt18_m_eyesopen_subid))
Explanation: Major realization:
As many of the cerebellum ROIs were missing how could I compute the brain maps of cerebellum ?
I think I sould look at the cerebellum ROIs once again to see which are the major brain areas that are missing in the cerebellum
Cerebellum regions reported in OHBM 2018:
Right Crus I
Left Crus I
Right VI
Left V
Left VI
Lets see which were wrong.
I have - df_aut_lt18_m_eyesopen_subid that denotes the sub ids of the bin that I have considered.
Cerebellum ROIs are corrrupted from ROI number 254 onwards.
I have to find the set diff between
End of explanation |
10,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whoosh
Step1: Means the sound made by something that is moving quickly
Whoosh, so fast and easy that even a lawyer could manage it
What is Whoosh?
Whoosh is a library of classes and functions for indexing text and then searching the index. It allows you to develop custom search engines for your content.
Whoosh is fast, but uses only pure Python, so it will run anywhere Python runs, without requiring a compiler.
It’s a programmer library for creating a search engine
Allows indexing, choose the level of information stored for each term in each field, parsing search queries, choose scoring algorithms, etc.
but...
All indexed text in Whoosh must be unicode.
Only runs in 2.7 NOT in python 3
Why Whoosh instead of Elastic Search?
Why I personally choose Whoosh instead of other high-performance solutions
Step2: TAGS
Document
Step3: Other ideas
Use the search engine as tagger
e.g. all products with the word "kids" will be tagged as "child" ("niños" o "infantil")
Use the database as tagger
e.g. all smartphones below 150€ tagged as "cheap"
* Teamwork is always better *
*I had to collaborate with other departments, SEO and Cataloging *
Schema
Types of fields
Step4: Index
Whoosh allows you to
Step5: 12k documents approx. stored in 10 MB, index created in less than 15 seconds, depending on the computer
Step6: Search
Parsing
Scoring
Step7: Sorting
We can sort by any field that is previously marked as sortable in the schema.
python
PVP=NUMERIC(sortable=True) | Python Code:
from IPython.display import Image
Image(filename='files/screenshot.png')
from IPython.display import Image
Image(filename='files/whoosh.jpg')
Explanation: Whoosh: a fast pure-Python search engine library
Pydata Madrid
2016.04.10
Who am I?
Claudia Guirao Fernández
@claudiaguirao
Background: Double degree in Law and Business Administration
Data Scientist at PcComponentes.com
Professional learning enthusiast
End of explanation
import csv
catalog = csv.DictReader(open('files/catalogo_head.csv'))
print list(catalog)[0].keys()
catalog = csv.DictReader(open('files/catalogo_head.csv'))
for product in catalog:
print product["Codigo"] + ' - ' + product["Articulo"] + ' - ' + product["Categoria"]
Explanation: Means the sound made by something that is moving quickly
Whoosh, so fast and easy that even a lawyer could manage it
What is Whoosh?
Whoosh is a library of classes and functions for indexing text and then searching the index. It allows you to develop custom search engines for your content.
Whoosh is fast, but uses only pure Python, so it will run anywhere Python runs, without requiring a compiler.
It’s a programmer library for creating a search engine
Allows indexing, choose the level of information stored for each term in each field, parsing search queries, choose scoring algorithms, etc.
but...
All indexed text in Whoosh must be unicode.
Only runs in 2.7 NOT in python 3
Why Whoosh instead Elastic Search?
Why I personally choose whoosh instead other high performance solutions:
I was mainly focused on index / search definition
12k documents aprox, mb instead gb, "small data"
Fast development
No compilers, no java
If your are a begginer, you have no team, you need a fast solution, you need to work isolated or you have a small project this is your solution otherwise Elastic Search might be your tech.
Development stages
Data treatment
Schema
Index
Search
Other stuff
Data Treatment
Data set is available in csv format at www.pccomponentes.com > mi panel de cliente > descargar tarifa
It is in latin, it has special characters and missing values
No tags, emphasis and laboured phrasing, lots of irrelevant information mixed with the relevant information.
TONS OF FUN!
End of explanation
Image(filename='files/tf.png')
Image(filename='files/idf.png')
Image(filename='files/tfidf.png')
from nltk.corpus import stopwords
import csv
stop_words_spa = stopwords.words("spanish")
stop_words_eng = stopwords.words("english")
with open('files/adjetivos.csv', 'rb') as f:
reader = csv.reader(f)
adjetivos=[]
for row in reader:
for word in row:
adjetivos.append(word)
import math
#tf-idf functions:
def tf(word, blob):
return float(blob.words.count(word))/float(len(blob.words))
def idf(word, bloblist):
return (float(math.log(len(bloblist)))/float(1 + n_containing(word, bloblist)))
def n_containing(word, bloblist):
return float(sum(1 for blob in bloblist if word in blob))
def tfidf(word, blob, bloblist):
return float(tf(word, blob)) * float(idf(word, bloblist))
import csv
from textblob import TextBlob as tb
catalog = csv.DictReader(open('files/catalogo_head.csv'))
bloblist = []
for product in catalog:
text =unicode(product["Articulo"], encoding="utf-8", errors="ignore").lower()
text = ' '.join([word for word in text.split() if word not in stop_words_spa]) # remove Spanish stopwords
text = ' '.join([word for word in text.split() if word not in stop_words_eng]) #remove English stopwords
text = ' '.join([word for word in text.split() if word not in adjetivos]) # remove meaningless adjectives
value = tb(text) # bag of words
bloblist.append(value)
tags = []
for blob in bloblist:
scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
terms = ''
for word, score in sorted_words[:3]:
terms = terms+word+' '
tags.append(terms)
for t in tags:
print unicode(t)
Explanation: TAGS
Document: each product
Corpus: catalog
TF-IDF
term frequency–inverse document frequency, reflects how important a word is to a document in a collection or corpus
End of explanation
from whoosh.lang.porter import stem
print "stemming: "+stem("analyse")
from whoosh.lang.morph_en import variations
print "variations: "
print list(variations("analyse"))[0:5]
import csv
catalog = csv.DictReader(open('files/catalogo_contags.csv'))
print list(catalog)[0].keys()
from whoosh.index import create_in
from whoosh.analysis import StemmingAnalyzer
from whoosh.fields import *
catalog = csv.DictReader(open('files/catalogo_contags.csv'))
data_set = []
for row in catalog:
row["Categoria"] = unicode(row["Categoria"], encoding="utf-8", errors="ignore").lower()
row["Articulo"] =unicode(row["Articulo"], encoding="utf-8", errors="ignore").lower()
row["Articulo"] = ' '.join([word for word in row["Articulo"].split() if word not in stop_words_spa])
row["Articulo"] = ' '.join([word for word in row["Articulo"].split() if word not in stop_words_eng])
row["Articulo"] = ' '.join([word for word in row["Articulo"].split() if word not in adjetivos])
row["tags"] = unicode(row["tags"], encoding="utf-8", errors="ignore")
row["Ean"] = unicode(row["Ean"], encoding="utf-8", errors="ignore")
row["Codigo"] = unicode(row["Codigo"], encoding="utf-8", errors="ignore")
row["PVP"] = float(row["PVP"])
row["Plazo"] = unicode(row["Plazo"], encoding="utf-8", errors="ignore")
data_set.append(row)
print str(len(data_set)) + ' products'
schema = Schema(Codigo=ID(stored=True),
Ean=TEXT(stored=True),
Categoria=TEXT(analyzer=StemmingAnalyzer(minsize=3),
stored=True),
Articulo=TEXT(analyzer=StemmingAnalyzer(minsize=3),
field_boost=2.0, stored=True),
Tags=KEYWORD(field_boost=1.0, stored=True),
PVP=NUMERIC(sortable = True),
Plazo = TEXT(stored=True))
Explanation: Other ideas
Use the search engine as tagger
e.g. all products with the word "kids" will be tagged as "child" ("niños" o "infantil")
Use the database as tagger
e.g. all smartphones below 150€ tagged as "cheap"
* Teamwork is always better *
*I had to collaborate with other departments, SEO and Cataloging *
Schema
Types of fields:
TEXT : for body text, allows phrase searching.
KEYWORD: space- or comma-separated keywords, tags
ID: single unit, e.g. prod
NUMERIC: int, long, or float, sortable format
DATETIME: sortable
BOOLEAN: users to search for yes, no, true, false, 1, 0, t or f.
Field boosting. * Is a multiplier applied to the score of any term found in the field.*
Form diversity
Stemming (great if you are working English)
Removes suffixes
Variation (great if you are working English)
Encodes the words in the index in a base form
End of explanation
from whoosh import index
from datetime import datetime
start = datetime.now()
ix = create_in("indexdir", schema) #clears the index
#on a directory with an existing index will clear the current contents of the index
writer = ix.writer()
for product in data_set:
writer.add_document(Codigo=unicode(product["Codigo"]),
Ean=unicode(product["Ean"]),
Categoria=unicode(product["Categoria"]),
Articulo=unicode(product["Articulo"]),
Tags=unicode(product["tags"]),
PVP=float(product["PVP"]))
writer.commit()
finish = datetime.now()
time = finish-start
print time
Explanation: Index
Whoosh allows you to:
- Create an index object in accordance with the schema
- Merge segments: an efficient way to add documents
- Delete documents in index: writer.delete_document(docnum)
- Update documents: writer.update_document
- Incremental index
End of explanation
Image(filename='files/screenshot_files.png')
Explanation: 12k documents aprox stored in 10mb, index created in less than 15 seconds, depending on the computer
End of explanation
from whoosh.qparser import MultifieldParser, OrGroup
qp = MultifieldParser(["Categoria",
"Articulo",
"Tags",
"Ean",
"Codigo",
"Tags"], # all selected fields
schema=ix.schema, # with my schema
group=OrGroup) # OR instead AND
user_query = 'Cargador de coche USB'
user_query = unicode(user_query, encoding="utf-8", errors="ignore")
user_query = user_query.lower()
user_query = ' '.join([word for word in user_query.split() if word not in stop_words_spa])
user_query = ' '.join([word for word in user_query.split() if word not in stop_words_eng])
print "this is our query: " + user_query
q = qp.parse(user_query)
print "this is our parsed query: " + str(q)
with ix.searcher() as searcher:
results = searcher.search(q)
print str(len(results))+' hits'
print results[0]["Codigo"]+' - '+results[0]["Articulo"]+' - '+results[0]["Categoria"]
Explanation: Search
Parsing
Scoring: The default is BM25F, but you can change it. myindex.searcher(weighting=scoring.TF_IDF())
Sorting: by scoring, by relevance, custom metrics
Filtering: e.g. by category
Paging: let you set up number of articles by page and ask for a specific page number
Parsing
Convert a query string submitted by a user into query objects
Default parser:
QueryParser("content", schema=myindex.schema)
MultifieldParser: Returns a QueryParser configured to search in multiple fields
Whoosh also allows you to customize your parser.
End of explanation
with ix.searcher() as searcher:
print '''
----------- word-scoring sorting ------------
'''
results = searcher.search(q)
for hit in results:
print hit["Articulo"]+' - '+str(hit["PVP"])+' eur'
print '''
--------------- PVP sorting -----------------
'''
results = searcher.search(q, sortedby="PVP")
for hit in results:
print hit["Articulo"]+' - '+str(hit["PVP"])+' eur'
Explanation: Sorting
We can sort by any field that is previously marked as sortable in the schema.
python
PVP=NUMERIC(sortable=True)
End of explanation |
10,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 7 Regularization for Deep Learning
the best fitting model is a large model that has been regularized appropriately.
7.1 Parameter Norm Penalties
\begin{equation}
\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha \Omega(\theta)
\end{equation}
where $\Omega(\theta)$ is a parameter norm penalty.
typically, penalizes only the weights of the affine transformation at each layer and leaves the biases unregularized.
7.1.1 $L^2$ Parameter Regularization
7.1.2 $L^1$ Regularization
The sparsity property induced by $L^1$ regularization => feature selection
7.2 Norm Penalties as Constrained Optimization
constrain $\Omega(\theta)$ to be less than some constant $k$
Step1: 7.8 Early Stopping
run it until the ValidationSetError has not improved for some amount of time.
Use the parameters of the lowest ValidationSetError during the whole train.
Step2: 7.9 Parameter Tying and Parameter Sharing
regularize the parameters of one model (supervised) to be close to those of another model (unsupervised)
to force sets of parameters to be equal | Python Code:
show_image("fig7_2.png")
Explanation: Chapter 7 Regularization for Deep Learning
the best fitting model is a large model that has been regularized appropriately.
7.1 Parameter Norm Penalties
\begin{equation}
\tilde{J}(\theta; X, y) = J(\theta; X, y) + \alpha \Omega(\theta)
\end{equation}
where $\Omega(\theta)$ is a parameter norm penalty.
typically, penalizes only the weights of the affine transformation at each layer and leaves the biases unregularized.
7.1.1 $L^2$ Parameter Regularization
7.1.2 $L^1$ Regularization
The sparsity property induced by $L^1$ regularization => feature selection
7.2 Norm Penalties as Constrained Optimization
constrain $\Omega(\theta)$ to be less than some constant $k$:
\begin{equation}
\mathcal{L}(\theta, \alpha; X, y) = J(\theta; X, y) + \alpha(\Omega(\theta) - k)
\end{equation}
In practice, column norm limitation is always implemented as an explicit constraint with reprojection.
7.3 Regularization and Under-Constrained Problems
the regularized matrix is guaranteed to be invertible.
7.4 Dataset Augmentation
create fake data:
transform
inject noise
7.5 Noise Robustness
add noise to data
add noise to the weights (a Bayesian view: the weights have a distribution):
this is equivalent to an additional regularization term.
add noise to the output targets: label smoothing
7.6 Semi-Supervised Learning
Goal: learn a representation so that examples from the same class have similar representations.
7.7 Multi-Task Learning
Task-specific parameters
Generic parameters
End of explanation
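A tiny NumPy sketch (an addition, not from the original notes) of the penalized objective $\tilde{J} = J + \alpha \Omega(\theta)$ above, with the $L^2$ and $L^1$ choices of $\Omega$ applied to the weights only:
import numpy as np

def penalized_loss(data_loss, weights, alpha=1e-2, norm="l2"):
    # weights: list of weight arrays; biases are deliberately left out of the penalty
    if norm == "l2":
        omega = sum((w ** 2).sum() for w in weights) / 2.0
    else:                                        # "l1", which encourages sparsity
        omega = sum(np.abs(w).sum() for w in weights)
    return data_loss + alpha * omega

W = [np.random.randn(4, 3), np.random.randn(3, 1)]
print(penalized_loss(0.37, W, norm="l1"))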
show_image("fig7_3.png", figsize=[10, 8])
Explanation: 7.8 Early Stopping
run it until the ValidationSetError has not improved for some amount of time.
Use the parameters with the lowest ValidationSetError seen during the whole training run.
End of explanation
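A self-contained toy sketch of the early-stopping rule described above (an added example; the validation curve is simulated rather than produced by a real model):
import numpy as np

rng = np.random.RandomState(0)
epochs = np.arange(200)
val_err = 1.0 / (epochs + 1) + 0.002 * epochs + 0.01 * rng.rand(200)   # toy validation error

best_err, best_epoch, patience, waited = np.inf, 0, 10, 0
for epoch, err in enumerate(val_err):       # stands in for "train one epoch, then evaluate"
    if err < best_err:
        best_err, best_epoch, waited = err, epoch, 0
    else:
        waited += 1
        if waited >= patience:              # no improvement for `patience` epochs: stop
            break
print(best_epoch, best_err)                 # the weights saved at best_epoch would be kept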
show_image("fig7_8.png", figsize=[10, 8])
Explanation: 7.9 Parameter Tying and Parameter Sharing
regularize the parameters of one model (supervised) to be close to those of another model (unsupervised)
to force sets of parameters to be equal: parameter sharing => convolutional neural networks.
7.10 Sparse Representations
place a penalty on the activations of the units in a neural network, encouraging their activations to be sparse.
7.11 Bagging and Other Ensemble Methods
7.12 Dropout
increase the size of the model when using dropout.
with small training samples, dropout is less effective.
7.13 Adversarial Training
End of explanation |
10,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tables on age and sex
Output
Step1: Tables on economically active status
Output
Step2: Tables on employed status
Quarterly unemployment rate in the ILO sense in France (excluding Mayotte)
Output
Step3: Tables on retirement pensions
Output
Step4: Tables on student status
Output
Step5: Tables on disability
Output
Step6: Tables on marital status
Output
Step7: Tables on having a child
Output
Step8: Tables on the number of children
Output | Python Code:
pd.read_csv("data/demographie/pop_age_sexe_2016.csv").head()
Explanation: Tables on age and sex
Output: pop_age_sexe_2016.csv
Input:
Table generated from pop-1janvier-fe.xls (https://www.insee.fr/fr/statistiques/1892086)
Source: Insee, population estimates (provisional results as of end 2015).
Scope: France including Mayotte.
Warning: age = 100 corresponds to 100 and over
End of explanation
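A small added sanity-check cell (not in the original notebook) that works regardless of the exact column layout of the file:
import pandas as pd
pop = pd.read_csv("data/demographie/pop_age_sexe_2016.csv")
print(pop.shape)
print(pop.dtypes)        # check that ages and counts were parsed as numbers
print(pop.describe())    # quick check of value ranges (age should stop at 100, i.e. "100 and over")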
pd.read_csv("data/travail/activite_2015.csv")
Explanation: Tables on economically active status
Output: activite_2015.csv
Input: Table generated from activite.xls, https://www.insee.fr/fr/statistiques/2045153?sommaire=2045174
Scope: metropolitan France, household population, economically active people aged 15 or over.
Source: Insee, Employment survey (enquête Emploi).
2015
End of explanation
pd.read_csv("data/travail/chomage.csv")
Explanation: Tables on employed status
Quarterly unemployment rate in the ILO sense in France (excluding Mayotte)
Output: chomage.csv
Input: Table generated from sl_chomage.xls, https://www.insee.fr/fr/statistiques/2045144?sommaire=2045174
Seasonally adjusted data, quarterly averages, in %
Scope: France (excluding Mayotte), household population, people aged 15 or over
Source: Insee, Employment survey (enquête Emploi), 2016 (Q1)
End of explanation
pd.read_csv("data/travail/retraite_2012.csv")
Explanation: Tables on retirement pensions
Output: retraite_2012.csv
Input: Table generated from NATCCF04564.xls.
Average monthly amount of the overall retirement pension
- Scope: direct-entitlement retirees aged 65 or over,
born in France or abroad, living in France or abroad.
Retirees receiving only a survivor's pension are excluded.
- Source: Drees, 2012 inter-scheme sample of retirees.
End of explanation
pd.read_csv("data/demographie/etudes.csv")
Explanation: Tables on student status
Output: etudes.csv
Input: Table generated from NATTEF07116.xls
Scope: France (excluding Mayotte), public and private education, including apprenticeship schooling.
Source: Depp.
2013
End of explanation
pd.read_csv("data/demographie/handicap_pop.csv")
Explanation: Tables on disability
Output: handicap_pop.csv
Input: Table generated from handicap.ods
Scope: population aged 15 to 64 in metropolitan France living in ordinary households (institutions excluded).
Source: Dares, complementary survey to the 2007 Employment survey.
People with an administrative recognition of disability.
http://www.insee.fr/fr/themes/document.asp?ref_id=T11F037
End of explanation
reference_marital = dict()
for sexe in ['homme', 'femme']:
reference_marital[sexe] = pd.read_csv("data/menages/statut_marital_{0}.csv".format(sexe))
reference_marital['femme'].head()
reference_marital['homme'].head()
Explanation: Tables on marital status
Output: statut_marital_femme.csv and statut_marital_homme.csv
Input: Tables generated from irsocsd2014_fe_t6.xls
N.B. The breakdown by marital status is provisional; it is not available from age 90 onwards.
Scope: France including Mayotte
Source: Insee, population estimates
1 January 2015
End of explanation
pd.read_csv('data/menages/enfants/type_famille.csv')
Explanation: Tables on having a child
Output: type_famille.csv
Input: AMFd3.xls for the structure of families with children in 2012
- Scope: France, household population, families with at least one child.
- Source: Insee, 1990 census (1/4 sample), 1999 to 2012 censuses, complementary tabulations.
irsocsd2012_t6_fe.xls to retrieve the reference population counts
- Scope: France, territory as of 31 December 2010
- Source: Insee, population estimates
- 2012
End of explanation
pd.read_csv('data/menages/enfants/nbr_enfant.csv')
Explanation: Tables on the number of children
Output: nbr_enfant.csv
Input: Table generated from rp2013_td_fam1.xls
Source: Insee
2013 census (RP2013), complementary tabulation.
End of explanation |
10,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Although the original images consisted of 92 x 112 pixel images, the version available
through scikit-learn contains images downscaled to 64 x 64 pixels.
To get a sense of the dataset, we can plot some example images. Let's pick eight indices
from the dataset in a random order
Step2: We can plot these example images using Matplotlib, but we need to make sure we reshape
the column vectors to 64 x 64 pixel images before plotting
Step3: You can see how all the faces are taken against a dark background and are upright. The
facial expression varies drastically from image to image, making this an interesting
classification problem. Try not to laugh at some of them!
Preprocessing the dataset
Before we can pass the dataset to the classifier, we need to preprocess it following the best
practices from Chapter 4, Representing Data and Engineering Features.
Specifically, we want to make sure that all example images have the same mean grayscale
level
Step4: We repeat this procedure for every image to make sure the feature values of every data
point (that is, a row in X) are centered around zero
Step5: The preprocessed data can be visualized using the preceding code
Step6: Training and testing the random forest
We continue to follow our best practice to split the data into training and test sets
Step7: Then we are ready to apply a random forest to the data
Step8: Here we want to create an ensemble with 50 decision trees
Step9: Because we have a large number of categories (that is, 40), we want to make sure the
random forest is set up to handle them accordingly
Step10: We can play with other optional arguments, such as the number of data points required in a
node before it can be split
Step11: However, we might not want to limit the depth of each tree. This is again, a parameter we
will have to experiment with in the end. But for now, let's set it to a large integer value,
making the depth effectively unconstrained
Step12: Then we can fit the classifier to the training data
Step13: We can check the resulting depth of the tree using the following function
Step14: This means that although we allowed the tree to go up to depth 1000, in the end only 25
layers were needed.
The evaluation of the classifier is done once again by predicting the labels first (y_hat) and
then passing them to the accuracy_score function
Step15: We find 87% accuracy, which turns out to be much better than with a single decision tree
Step16: Not bad! We can play with the optional parameters to see if we get better. The most
important one seems to be the number of trees in the forest. We can repeat the experiment
with a forest made from 100 trees | Python Code:
from sklearn.datasets import fetch_olivetti_faces
dataset = fetch_olivetti_faces()
X = dataset.data
y = dataset.target
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Combining Decision Trees Into a Random Forest | Contents | Implementing AdaBoost >
Using Random Forests for Face Recognition
A popular dataset that we haven't talked much about yet is the Olivetti face dataset.
The Olivetti face dataset was collected in 1990 by AT&T Laboratories Cambridge. The
dataset comprises facial images of 40 distinct subjects, taken at different times and under
different lighting conditions. In addition, subjects varied their facial expression
(open/closed eyes, smiling/not smiling) and their facial details (glasses/no glasses).
Images were then quantized to 256 grayscale levels and stored as unsigned 8-bit integers.
Because there are 40 distinct subjects, the dataset comes with 40 distinct target labels.
Recognizing faces thus constitutes an example of a multiclass classification task.
Loading the dataset
Like many other classic datasets, the Olivetti face dataset can be loaded using scikit-learn:
End of explanation
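A quick added sanity check (not in the book excerpt): the loaded matrix should hold 400 images of 64 x 64 = 4096 pixels each, with one of 40 subject labels per row:
print(X.shape, y.shape)      # expected: (400, 4096) (400,)
print(len(set(y)))           # expected: 40 distinct subjects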
import numpy as np
np.random.seed(21)
idx_rand = np.random.randint(len(X), size=8)
Explanation: Although the original images consisted of 92 x 112 pixel images, the version available
through scikit-learn contains images downscaled to 64 x 64 pixels.
To get a sense of the dataset, we can plot some example images. Let's pick eight indices
from the dataset in a random order:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(14, 8))
for p, i in enumerate(idx_rand):
plt.subplot(2, 4, p + 1)
plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')
plt.axis('off')
Explanation: We can plot these example images using Matplotlib, but we need to make sure we reshape
the column vectors to 64 x 64 pixel images before plotting:
End of explanation
n_samples, n_features = X.shape
X -= X.mean(axis=0)
Explanation: You can see how all the faces are taken against a dark background and are upright. The
facial expression varies drastically from image to image, making this an interesting
classification problem. Try not to laugh at some of them!
Preprocessing the dataset
Before we can pass the dataset to the classifier, we need to preprocess it following the best
practices from Chapter 4, Representing Data and Engineering Features.
Specifically, we want to make sure that all example images have the same mean grayscale
level:
End of explanation
X -= X.mean(axis=1).reshape(n_samples, -1)
Explanation: We repeat this procedure for every image to make sure the feature values of every data
point (that is, a row in X) are centered around zero:
End of explanation
plt.figure(figsize=(14, 8))
for p, i in enumerate(idx_rand):
plt.subplot(2, 4, p + 1)
plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')
plt.axis('off')
plt.savefig('olivetti-pre.png')
Explanation: The preprocessed data can be visualized using the preceding code:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=21
)
Explanation: Training and testing the random forest
We continue to follow our best practice to split the data into training and test sets:
End of explanation
import cv2
rtree = cv2.ml.RTrees_create()
Explanation: Then we are ready to apply a random forest to the data:
End of explanation
num_trees = 50
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
num_trees, eps)
rtree.setTermCriteria(criteria)
Explanation: Here we want to create an ensemble with 50 decision trees:
End of explanation
rtree.setMaxCategories(len(np.unique(y)))
Explanation: Because we have a large number of categories (that is, 40), we want to make sure the
random forest is set up to handle them accordingly:
End of explanation
rtree.setMinSampleCount(2)
Explanation: We can play with other optional arguments, such as the number of data points required in a
node before it can be split:
End of explanation
rtree.setMaxDepth(1000)
Explanation: However, we might not want to limit the depth of each tree. This is again, a parameter we
will have to experiment with in the end. But for now, let's set it to a large integer value,
making the depth effectively unconstrained:
End of explanation
rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
Explanation: Then we can fit the classifier to the training data:
End of explanation
rtree.getMaxDepth()
Explanation: We can check the resulting depth of the tree using the following function:
End of explanation
_, y_hat = rtree.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
Explanation: This means that although we allowed the tree to go up to depth 1000, in the end only 25
layers were needed.
The evaluation of the classifier is done once again by predicting the labels first (y_hat) and
then passing them to the accuracy_score function:
End of explanation
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=21, max_depth=25)
tree.fit(X_train, y_train)
tree.score(X_test, y_test)
Explanation: We find 87% accuracy, which turns out to be much better than with a single decision tree:
End of explanation
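For a further comparison (an addition, not part of the book excerpt), scikit-learn's own random forest can be dropped in on the same split, mirroring the 50-tree OpenCV ensemble above:
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=50, random_state=21)
rf.fit(X_train, y_train)
print(rf.score(X_test, y_test))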
num_trees = 100
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
num_trees, eps)
rtree.setTermCriteria(criteria)
rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
_, y_hat = rtree.predict(X_test)
accuracy_score(y_test, y_hat)
Explanation: Not bad! We can play with the optional parameters to see if we get better. The most
important one seems to be the number of trees in the forest. We can repeat the experiment
with a forest made from 100 trees:
End of explanation |
10,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape
Step7: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step10: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
courses | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# Denis Engemannn <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50, npad='auto')
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
End of explanation
# Read the source space we are morphing to (just left hemisphere)
src = mne.read_source_spaces(src_fname)
fsave_vertices = [src[0]['vertno'], []]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat
morph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately, but here since all estimates are on
'sample' we can use one morph matrix for all the heavy lifting.
End of explanation
X = np.transpose(X, [2, 1, 0, 3])  # reorder to (subjects, times, vertices, conditions)
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples
(subjects) x time x space.
First we permute dimensions, then split the array into a list of conditions
and discard the empty dimension resulting from the split using numpy squeeze.
End of explanation
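A short added sanity check (not in the original example) that makes the expected layout explicit: X should now be a list with one (subjects x times x vertices) array per condition:
assert len(X) == 4
for x_cond in X:
    assert x_cond.shape == (n_subjects, n_times, n_vertices_fsave)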
factor_levels = [2, 2]
Explanation: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
End of explanation
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
Explanation: Finally we will pick the interaction effect by passing 'A:B'.
(this notation is borrowed from the R formula language). Without this also
the main effects will be returned.
End of explanation
def stat_fun(*args):
# get f-values only.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and the
second dimension, and finally calls ANOVA.
<div class="alert alert-info"><h4>Note</h4><p>For further details on this ANOVA function consider the
corresponding
`time-frequency tutorial <tut-timefreq-twoway-anova>`.</p></div>
End of explanation
# as we only have one hemisphere we only need half the connectivity
print('Computing connectivity.')
connectivity = mne.spatial_src_connectivity(src[:1])
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat',
time_label='Duration significant (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
brain.save_image('cluster-lh.png')
brain.show_view('medial')
Explanation: Visualize the clusters
End of explanation
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
Explanation: Finally, let's investigate interaction effect by reconstructing the time
courses
End of explanation |
10,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Classification 2
Step2: Exercise 7.2
Show that a classifier $\hat y_i = \text{sign}(\beta^\top x_i)$ is defined by a separating hyperplane. Assume that $\beta \in \mathbb R^{p+1}$ and $x_{i0} = 1$. Specifically, convince yourself that there is a hyperplane such that $\hat y_i = 1$ is on one side of the hyperplane and $\hat y_i = -1$ is on the other side. What is a normal vector to this hyperplane?
Answer to Exercise 7.2
A hyperplane is an equation of the form,
$$ b^\top x = a $$
and denoting $x',\beta'$ to be the coordinates $1,\ldots,p$ of the vectors,
$$ \beta^\top x_i = \beta_0 + \beta'^\top x'_i.$$
So the equation $\beta^\top x_i = 0$ is equivalent to
$$-\beta_0 = \beta'^\top x'_i $$
And to be on one side of the hyperplane means that
$$-\beta_0 < \beta'^\top x'_i$$
and $>$ for the other side.
Also, $\beta'$ is the normal vector.
Surrogate losses
$$
\min_{\beta \in \mathbb R^{p+1}}.\frac{1}{n_0} \sum_{i=1}^{n_0} 1 { \textrm{sign}(\beta^\top x_i) \ne y_i }.
\tag{0-1 min}
$$
Rewrite the 0-1 loss for a linear classifier,
$$
\ell_{0/1}(\beta,x_i,y_i) = 1 { y_i \beta^\top x_i < 0 }.
$$
The logistic loss and the hinge loss are also functions of $y_i \beta^\top x_i$,
$$
\ell_{L} (\beta, x_i, y_i) = \log(1 + \exp(-y_i \beta^\top x_i))
\tag{logistic}
$$
$$
\ell_{H} (\beta, x_i, y_i) = (1 - y_i \beta^\top x_i)_+
\tag{hinge}
$$
where $a_+ = a \, 1\{ a > 0 \}$ is the positive part of the real number $a$.
Step3: Square error loss?
If we are free to select training loss functions, then why not square error loss?
$$
\ell_{S} (\beta, x_i, y_i) = (y_i - \beta^\top x_i)^2 = (1 - y_i \beta^\top x_i)^2.
\tag{square error}
$$
Step4: Ridge terms
Ridge regularization can be added to these programs,
$$
\min_{\beta \in \mathbb R^{p+1}}. \frac 1n \sum_{i=1}^n \ell(\beta,x_i,y_i) + \lambda \sum_{j=1}^p \beta_j^2.
$$
SVMs have non-unique solutions without the ridge terms, it always appears
Logistic regression often runs without ridge penalty
Step6: Other narratives
Step7: How to choose the separator line?
The margin is a pair of lines,
$$
x^\top \beta = \pm 1
$$
for a linearly separable dataset there is a margin such that
$$
y = +1 \Rightarrow x^\top \beta \ge 1
$$
$$
y = -1 \Rightarrow x^\top \beta \le -1
$$
Can write separation of data as
$$
y_i x_i^\top \beta \ge 1
$$
Step8: Large-margin classifier
The margin width is smallest distance between these points,
$$
x_\alpha = \alpha \frac{\beta}{\| \beta\|},
$$
then on one side of margin
$$
x_\alpha^\top \beta = \alpha \| \beta \| = 1
$$
and on other side
$$
x_\alpha^\top \beta = \alpha \| \beta \| = -1.
$$
So $\alpha = \pm 1/ \| \beta \|$, and margin width is $2 / \| \beta \|$.
Step9: Large-margin classifier
Maximizing margin is equivalent to minimizing
$$
\min \| \beta \|\quad \textrm{s.t.}\quad y_i x_i^\top \beta \ge 1
$$
Suppose the data are not actually separable; then introduce slack variables
$$
\xi_i = \left\{ \begin{array}{ll} 1 - y_i x_i^\top \beta, & y_i x_i^\top \beta \le 1 \\
0, & \textrm{otherwise} \end{array}\right.
$$
Then also add sum of slacks to objective
$$
\min \| \beta \|^2 + C \sum_i \xi_i \equiv \min \sum_i \xi_i + \lambda \| \beta \|^2
$$ | Python Code:
def lm_sim(N = 100):
    """simulate a binary response and two predictors"""
X1 = (np.random.randn(N*2)).reshape((N,2)) + np.array([2,3])
X0 = (np.random.randn(N*2)).reshape((N,2)) + np.array([.5,1.5])
y = - np.ones(N*2)
y[:N]=1
X = np.vstack((X1,X0))
return X, y, X0, X1
X_sim,y_sim,X0,X1 = lm_sim()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Two dimensional classification simulation")
_ = plt.legend(loc=2)
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_sim,y_sim)
beta1 = lr_sim.coef_[0,0]
beta2 = lr_sim.coef_[0,1]
beta0 = lr_sim.intercept_
mults=0.8
T = np.linspace(-1,4,100)
x2hat = -(beta0 + beta1*T) / beta2
line1 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
line2 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
line3 = -(beta0 + np.random.randn(1)*2 +
(beta1 + np.random.randn(1)*mults) *T) / (beta2 + np.random.randn(1)*mults)
def plt_tmp():
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,line3,c='k')
plt.plot(T,line1,c='k')
plt.plot(T,line2,c='k')
plt.ylim([-1,7])
plt.title("Three possible separator lines")
_ = plt.legend(loc=2)
plt_tmp()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
N = 100
y_hat = lr_sim.predict(X_sim)
plt.scatter(X0[y_hat[N:] == 1,0],X0[y_hat[N:] == 1,1],c='b',label='neg')
plt.scatter(X1[y_hat[:N] == -1,0],X1[y_hat[:N] == -1,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.title("Points classified incorrectly")
_ = plt.legend(loc=2)
Explanation: Classification 2: Discriminative methods
StatML: Lecture 7
Prof. James Sharpnack
Some content and images are from "The Elements of Statistical Learning" by Hastie, Tibshirani, Friedman
Reading ESL Chapter 4
Binary Classification
Given $x \in \mathbb R^p$, predict $y \in {-1,1}$.
For notational convenience, we will re-encode ${0,1}$ as ${-1,1}$.
Could simply train a regression $\hat f$ which predicts $y$ as if it was continuous, $\hat f(x) \in \mathbb R$, then use
$$ \hat y = \text{sign}(\hat f(x)). $$
Exercise 7.1
Why might it not be a good idea to simply threshold a regression for binary classification?
Linear discriminative classifiers
Output is a linear function of $x$ (with intercept):
$$\hat f(x) = \hat \beta_0 + \hat \beta^\top x$$
Predicts:
$$\hat y(x) = \textrm{sign} (\hat f(x)) = \left{ \begin{array}{ll} 1, &\hat f(x) \ge 0\ -1, &\textrm{otherwise} \end{array}\right.$$
at $0$ you may flip a coin as a tie breaker, this happens with KNN with even K
fit method will fit $\hat \beta, \hat \beta_0$
typically, I will let $x_0 = 1$ and $\hat \beta \in \mathbb R^{p+1}$
Loss function measures the success of predict
Recall, loss function determines how we evaluate
$$
\ell_{0/1} (\hat y_{i}, y_i) = \left{ \begin{array}{ll}
1,& \textrm{ if } \hat y_i \ne y_i\
0,& \textrm{ if } \hat y_i = y_i
\end{array} \right.
$$
Correct or not
Test error is then misclassification rate
What we want to do
Suppose that we wanted to minimize training error with a 0-1 training loss. Then this could be written as the optimization program,
$$
\min_{\beta \in \mathbb R^{p+1}}.\frac{1}{n_0} \sum_{i=1}^{n_0} 1 { \textrm{sign}(\beta^\top x_i) \ne y_i }.
\tag{0-1 min}
$$
- discontinuous because it is the sum of discontinuous functions
- hard to optimize in most situations, often NP-hard
End of explanation
def plt_tmp():
z_range = np.linspace(-5,5,200)
zoloss = z_range < 0
hingeloss = (1 - z_range) * (z_range < 1)
logisticloss = np.log(1 + np.exp(-z_range))
plt.plot(z_range, logisticloss + 1 - np.log(2.),label='logistic')
plt.plot(z_range, zoloss,label='0-1')
plt.plot(z_range, hingeloss,label='hinge')
plt.ylim([-.2,5])
plt.xlabel(r'$y_i \beta^\top x_i$')
plt.ylabel('loss')
plt.title('A comparison of classification loss functions')
_ = plt.legend()
plt_tmp()
z_log = y_sim*lr_sim.decision_function(X_sim)
def plt_tmp():
logisticloss = np.log(1 + np.exp(-z_log))
plt.scatter(X0[:,0],X0[:,1],s=logisticloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=logisticloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by logistic loss")
_ = plt.legend(loc=2)
plt_tmp()
def plt_tmp():
hingeloss = (1-z_log)*(z_log < 1)
plt.scatter(X0[:,0],X0[:,1],s=hingeloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=hingeloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by hinge loss")
_ = plt.legend(loc=2)
plt_tmp()
def plt_tmp():
l2loss = (1-z_log)**2.
plt.scatter(X0[:,0],X0[:,1],s=l2loss[N:]*10.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=l2loss[:N]*10.,c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.xlim([-1,3])
plt.ylim([0,4])
plt.title("Points weighted by sqr. loss")
_ = plt.legend(loc=2)
plt_tmp()
Explanation: Exercise 7.2
Show that a classifier $\hat y_i = \text{sign}(\beta^\top x_i)$ is defined by a separating hyperplane. Assume that $\beta \in \mathbb R^{p+1}$ and $x_{i0} = 1$. Specifically, convince yourself that there is a hyperplane such that $\hat y_i = 1$ is on one side of the hyperplane and $\hat y_i = -1$ is on the other side. What is a normal vector to this hyperplane?
Answer to Exercise 7.2
A hyperplane is an equation of the form,
$$ b^\top x = a $$
and denoting $x',\beta'$ to be the coordinates $1,\ldots,p$ of the vectors,
$$ \beta^\top x_i = \beta_0 + \beta'^\top x'_i.$$
So the equation $\beta^\top x_i = 0$ is equivalent to
$$-\beta_0 = \beta'^\top x'_i $$
And to be on one side of the hyperplane means that
$$-\beta_0 < \beta'^\top x'_i$$
and $>$ for the other side.
Also, $\beta'$ is the normal vector.
Surrogate losses
$$
\min_{\beta \in \mathbb R^{p+1}}.\frac{1}{n_0} \sum_{i=1}^{n_0} 1 { \textrm{sign}(\beta^\top x_i) \ne y_i }.
\tag{0-1 min}
$$
Rewrite the 0-1 loss for a linear classifier,
$$
\ell_{0/1}(\beta,x_i,y_i) = 1 { y_i \beta^\top x_i < 0 }.
$$
The logistic loss and the hinge loss are also functions of $y_i \beta^\top x_i$,
$$
\ell_{L} (\beta, x_i, y_i) = \log(1 + \exp(-y_i \beta^\top x_i))
\tag{logistic}
$$
$$
\ell_{H} (\beta, x_i, y_i) = (1 - y_i \beta^\top x_i)_+
\tag{hinge}
$$
where $a_+ = a \, 1\{ a > 0 \}$ is the positive part of the real number $a$.
End of explanation
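A short numerical check of the Exercise 7.2 answer (an added sketch, not part of the lecture code; the coefficients are arbitrary): every point with $\beta'^\top x' > -\beta_0$ gets $\hat y = +1$ and every point on the other side gets $\hat y = -1$:
import numpy as np
beta_demo = np.array([-1.0, 2.0, 0.5])            # (beta_0, beta_1, beta_2)
pts = np.random.randn(200, 2)
design = np.hstack([np.ones((200, 1)), pts])      # prepend x_0 = 1
pred = np.sign(design @ beta_demo)
upper = pts @ beta_demo[1:] > -beta_demo[0]       # one side of the separating hyperplane
print(np.all(pred[upper] == 1), np.all(pred[~upper] == -1))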
def plt_tmp():
z_range = np.linspace(-5,5,200)
zoloss = z_range < 0
l2loss = (1-z_range)**2.
hingeloss = (1 - z_range) * (z_range < 1)
logisticloss = np.log(1 + np.exp(-z_range))
plt.plot(z_range, logisticloss + 1 - np.log(2.),label='logistic')
plt.plot(z_range, zoloss,label='0-1')
plt.plot(z_range, hingeloss,label='hinge')
plt.plot(z_range, l2loss,label='sq error')
plt.ylim([-.2,5])
plt.xlabel(r'$y_i \beta^\top x_i$')
plt.ylabel('loss')
plt.title('A comparison of classification loss functions')
_ = plt.legend()
plt_tmp()
Explanation: Square error loss?
If we are free to select training loss functions, then why not square error loss?
$$
\ell_{S} (\beta, x_i, y_i) = (y_i - \beta^\top x_i)^2 = (1 - y_i \beta^\top x_i)^2.
\tag{square error}
$$
End of explanation
lamb = 1.
lr = linear_model.LogisticRegression(penalty='l2', C = 1/lamb)
lr.fit(X_tr,y_tr)
yhat = lr.predict(X_te)
(yhat != y_te).mean()
score_lr = X_te @ lr.coef_[0,:]
fpr_lr, tpr_lr, threshs = metrics.roc_curve(y_te,score_lr)
prec_lr, rec_lr, threshs = metrics.precision_recall_curve(y_te,score_lr)
lamb = 1.
svc = svm.SVC(C = 1/lamb,kernel='linear')
svc.fit(X_tr,y_tr)
yhat = svc.predict(X_te)
score_svc = X_te @ svc.coef_[0,:]
fpr_svc, tpr_svc, threshs = metrics.roc_curve(y_te,score_svc)
prec_svc, rec_svc, threshs = metrics.precision_recall_curve(y_te,score_svc)
(yhat != y_te).mean()
plt.figure(figsize=(6,6))
plt.plot(fpr_lr,tpr_lr,label='logistic')
plt.plot(fpr_svc,tpr_svc,label='svm')
plt.plot(fpr_dur,tpr_dur,label='duration')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.title("ROC curve comparison")
plt.figure(figsize=(6,6))
plt.plot(rec_lr,prec_lr,label='logistic')
plt.plot(rec_svc,prec_svc,label='svm')
plt.plot(rec_dur,prec_dur,label='duration')
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.title("PR curve comparison")
Explanation: Ridge terms
Ridge regularization can be added to these programs,
$$
\min_{\beta \in \mathbb R^{p+1}}. \frac 1n \sum_{i=1}^n \ell(\beta,x_i,y_i) + \lambda \sum_{j=1}^p \beta_j^2.
$$
SVMs have non-unique solutions without the ridge terms, it always appears
Logistic regression often runs without ridge penalty
End of explanation
def lm_sim(N = 100,sig=.25):
    """simulate a binary response and two predictors"""
X1 = 2*sig*(np.random.randn(N*2)).reshape((N,2)) + np.array([2,3])
X0 = sig*(np.random.randn(N*2)).reshape((N,2)) + np.array([.5,1.5])
y = - np.ones(N*2)
y[:N]=1
X = np.vstack((X1,X0))
return X, y, X0, X1
X_sim,y_sim,X0,X1 = lm_sim()
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Two dimensional classification simulation")
_ = plt.legend(loc=2)
lr_sim = linear_model.LogisticRegression()
lr_sim.fit(X_sim,y_sim)
beta1 = lr_sim.coef_[0,0]
beta2 = lr_sim.coef_[0,1]
beta0 = lr_sim.intercept_
x2hat = -(beta0 + beta1*T) / beta2
T = np.linspace(-.2,3,100)
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x2hat,c='k')
plt.ylim([.9,5])
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
Explanation: Other narratives: Logistic model
Logistic loss is -log-likelihood for logistic model
$$
\log \frac{\mathbb P{Y = 1 | X = x }}{\mathbb P{ Y = -1 | X = x }} = x^\top \beta
$$
for some $\beta \in \mathbb R^{p+1}$.
$$
-\log \mathbb P{ Y = y | X = x } = \ell_{L} (\beta, x, y) = \log(1 + \exp(-y \beta^\top x))
$$
To verify,
$$
\log \frac{\exp (- \ell_{L} (\beta, x, 1))}{\exp(- \ell_{L} (\beta, x, -1))} = -\log(1 + \exp(-\beta^\top x)) + \log(1 + \exp(\beta^\top x))
$$
$$
= \log\left( \frac{1 + \exp(\beta^\top x)}{1 + \exp(-\beta^\top x)}\right) = \log \exp (x^\top \beta) = x^\top \beta
$$
Other narratives: Support vector machines
Suppose that our data was linearly separable:
End of explanation
svm_sim = svm.SVC(kernel="linear")
svm_sim.fit(X_sim,y_sim)
T = np.linspace(-.2,3,100)
beta1 = svm_sim.coef_[0,0]
beta2 = svm_sim.coef_[0,1]
beta0 = np.array([-3.975, -4.425, -4.875])
x2hat = np.zeros((100,3))
for j, b0 in enumerate(beta0):
x2hat[:,j] = -(b0 + beta1*T) / beta2
def plt_tmp():
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.plot(T,x2hat[:,2],'k:')
plt.plot(T,x2hat[:,1],'k')
plt.plot(T,x2hat[:,0],'k:')
plt.ylim([.9,5])
plt.title("Logistic regression separator line")
_ = plt.legend(loc=2)
plt_tmp()
Explanation: How to choose the separator line?
The margin is a pair of lines,
$$
x^\top \beta = \pm 1
$$
for a linearly separable dataset there is a margin such that
$$
y = +1 \Rightarrow x^\top \beta \ge 1
$$
$$
y = -1 \Rightarrow x^\top \beta \le -1
$$
Can write separation of data as
$$
y_i x_i^\top \beta \ge 1
$$
End of explanation
X_sim,y_sim,X0,X1 = lm_sim(sig=.8)
plt.scatter(X0[:,0],X0[:,1],c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],c='r',label='pos')
plt.title("Two dimensional classification simulation")
_ = plt.legend(loc=2)
Explanation: Large-margin classifier
The margin width is smallest distance between these points,
$$
x_\alpha = \alpha \frac{\beta}{\| \beta\|},
$$
then on one side of margin
$$
x_\alpha^\top \beta = \alpha \| \beta \| = 1
$$
and on other side
$$
x_\alpha^\top \beta = \alpha \| \beta \| = -1.
$$
So $\alpha = \pm 1/ \| \beta \|$, and margin width is $2 / \| \beta \|$.
End of explanation
z_log = y_sim*svm_sim.decision_function(X_sim)
def plt_tmp():
hingeloss = (1-z_log)*(z_log < 1)
plt.scatter(X0[:,0],X0[:,1],s=hingeloss[N:]*30.,c='b',label='neg')
plt.scatter(X1[:,0],X1[:,1],s=hingeloss[:N]*30.,c='r',label='pos')
plt.plot(T,x2hat[:,[0,2]],'k:')
plt.plot(T,x2hat[:,1],c='k')
plt.xlim([-2,4])
plt.ylim([-1,5])
plt.title("Points weighted by hinge loss")
_ = plt.legend(loc=2)
T = np.linspace(-.2,3,100)
beta1 = svm_sim.coef_[0,0]
beta2 = svm_sim.coef_[0,1]
mid, mar = -4.6,1.
beta0 = np.array([mid - mar, mid, mid + mar])
x2hat = np.zeros((100,3))
for j, b0 in enumerate(beta0):
x2hat[:,j] = -(b0 + beta1*T) / beta2
plt_tmp()
Explanation: Large-margin classifier
Maximizing margin is equivalent to minimizing
$$
\min \| \beta \|\quad \textrm{s.t.}\quad y_i x_i^\top \beta \ge 1
$$
Suppose the data are not actually separable; then introduce slack variables
$$
\xi_i = \left\{ \begin{array}{ll} 1 - y_i x_i^\top \beta, & y_i x_i^\top \beta \le 1 \\
0, & \textrm{otherwise} \end{array}\right.
$$
Then also add sum of slacks to objective
$$
\min \| \beta \|^2 + C \sum_i \xi_i \equiv \min \sum_i \xi_i + \lambda \| \beta \|^2
$$
End of explanation |
10,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started - a single particle
In this tutorial, we'll simulate the stochastic dynamics of a single nanoparticle. We model clusters of nanoparticles using the magpy.Model class. In this case we only have a single particle in our cluster. The first step is to import magpy.
Step1: To create our model, we need to specify the geometry and material properties of the system. The units and purpose of each property is defined below.
| Name | Description | Units |
|------|-------------|-------|
| Radius | The radius of the spherical particle | m |
| Anisotropy | Magnitude of the anisotropy | J/m$^3$ |
| Anisotropy axis | Unit vector indicating the direction of the anisotropy | - |
| Magnetisation | Saturation magnetisation of every particle in the cluster | A/m |
| Magnetisation direction | Unit vector indicating the initial direction of the magnetisation | - |
| Location | Location of the particle within the cluster | m |
| Damping | The damping constant of every particle in the cluster | - |
| Temperature | The ambient temperature of the cluster (fixed) | K |
Note
Step2: Simulate
A simulation in magpy consists of simulating the magnetisation vector of the particle in time. In the model above we specified the initial magnetisation vector along the $x$-axis and the anisotropy along the $z$-axis. Since it is energetically favourable for the magnetisation to align with its anisotropy axis, we should expect the magnetisation to move toward the $z$-axis. With some random fluctuations.
The simulate function is called with the following parameters
Step3: The x,y,z components of the magnetisation can be visualised with the .plot() function.
Step4: We can also access this data directly and plot it however we like! In this example, we normalise the magnetisation and plot it in 3d space. | Python Code:
import magpy as mp
Explanation: Getting started - a single particle
In this tutorial, we'll simulate the stochastic dynamics of a single nanoparticle. We model clusters of nanoparticles using the magpy.Model class. In this case we only have a single particle in our cluster. The first step is to import magpy.
End of explanation
single_particle = mp.Model(
radius = [12e-9],
anisotropy = [4e4],
anisotropy_axis = [[0., 0., 1.]],
magnetisation_direction = [[1., 0., 0.]],
location = [[0., 0., 0.]],
damping = 0.1,
temperature = 300.,
magnetisation = 400e3
)
Explanation: To create our model, we need to specify the geometry and material properties of the system. The units and purpose of each property is defined below.
| Name | Description | Units |
|------|-------------|-------|
| Radius | The radius of the spherical particle | m |
| Anisotropy | Magnitude of the anisotropy | J/m$^3$ |
| Anisotropy axis | Unit vector indicating the direction of the anisotropy | - |
| Magnetisation | Saturation magnetisation of every particle in the cluster | A/m |
| Magnetisation direction | Unit vector indicating the initial direction of the magnetisation | - |
| Location | Location of the particle within the cluster | m |
| Damping | The damping constant of every particle in the cluster | - |
| Temperature | The ambient temperature of the cluster (fixed) | K |
Note: radius, anisotropy, anisotropy_axis, magnetisation_direction, and location vary for each particle and must be specified as a list.
End of explanation
results = single_particle.simulate(
end_time = 5e-9,
time_step = 1e-14,
max_samples=1000,
seed = 1001
)
Explanation: Simulate
A simulation in magpy consists of simulating the magnetisation vector of the particle in time. In the model above we specified the initial magnetisation vector along the $x$-axis and the anisotropy along the $z$-axis. Since it is energetically favourable for the magnetisation to align with its anisotropy axis, we should expect the magnetisation to move toward the $z$-axis. With some random fluctuations.
The simulate function is called with the following parameters:
- end_time the length of the simulation in seconds
- time_step the time step of the integrator in seconds
- max_samples in order to save memory, the output is down/upsampled as required. So if you simulate a billion steps, you can only save the state at 1000 regularly spaced intervals.
- seed for reproducible simulations you should always choose the seed.
End of explanation
results.plot()
Explanation: The x,y,z components of the magnetisation can be visualised with the .plot() function.
End of explanation
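Because the saturation magnetisation is fixed, the length of the magnetisation vector should stay approximately constant; here is a short added check using only the results attributes introduced above (Ms = 400e3 as set in the model):
import numpy as np
Ms = 400e3
m_norm = np.sqrt(results.x[0]**2 + results.y[0]**2 + results.z[0]**2) / Ms
print(m_norm.min(), m_norm.max())   # both values should be close to 1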
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
Ms = 400e3
time = results.time
mx = results.x[0] / Ms # particle 0
my = results.y[0] / Ms # particle 0
mz = results.z[0] / Ms # particle 0
fg = plt.figure()
ax = fg.add_subplot(111, projection='3d')
ax.plot3D(mx, my, mz)
Explanation: We can also access this data directly and plot it however we like! In this example, we normalise the magnetisation and plot it in 3d space.
End of explanation |
10,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: The plt interface is what we will use most often, as we shall see throughout this chapter.
Setting Styles
We will use the plt.style directive to choose appropriate aesthetic styles for our figures.
Here we will set the classic style, which ensures that the plots we create use the classic Matplotlib style
Step2: Throughout this section, we will adjust this style as needed.
Note that the stylesheets used here are supported as of Matplotlib version 1.5; if you are using an earlier version of Matplotlib, only the default style is available.
For more information on stylesheets, see Customizing Matplotlib
Step3: After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic
Step4: Saving Figures to File
One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
Saving a figure can be done using the savefig() command.
For example, to save the previous figure as a PNG file, you can run this
Step5: We now have a file called my_figure.png in the current working directory
Step6: To confirm that it contains what we think it contains, let's use the IPython Image object to display the contents of this file
Step7: In savefig(), the file format is inferred from the extension of the given filename.
Depending on what backends you have installed, many different file formats are available.
The list of supported file types can be found for your system by using the following method of the figure canvas object
Step8: Note that when saving your figure, it's not necessary to use plt.show() or related commands discussed earlier.
Two Interfaces for the Price of One
A potentially confusing feature of Matplotlib is its dual interfaces
Step9: It is important to note that this interface is stateful | Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Further Resources | Contents | Simple Line Plots >
Visualization with Matplotlib
We'll now take an in-depth look at the Matplotlib package for visualization in Python.
Matplotlib is a multi-platform data visualization library built on NumPy arrays, and designed to work with the broader SciPy stack.
It was conceived by John Hunter in 2002, originally as a patch to IPython for enabling interactive MATLAB-style plotting via gnuplot from the IPython command line.
IPython's creator, Fernando Perez, was at the time scrambling to finish his PhD, and let John know he wouldn’t have time to review the patch for several months.
John took this as a cue to set out on his own, and the Matplotlib package was born, with version 0.1 released in 2003.
It received an early boost when it was adopted as the plotting package of choice of the Space Telescope Science Institute (the folks behind the Hubble Telescope), which financially supported Matplotlib’s development and greatly expanded its capabilities.
One of Matplotlib’s most important features is its ability to play well with many operating systems and graphics backends.
Matplotlib supports dozens of backends and output types, which means you can count on it to work regardless of which operating system you are using or which output format you wish.
This cross-platform, everything-to-everyone approach has been one of the great strengths of Matplotlib.
It has led to a large user base, which in turn has led to an active developer base and Matplotlib’s powerful tools and ubiquity within the scientific Python world.
In recent years, however, the interface and style of Matplotlib have begun to show their age.
Newer tools like ggplot and ggvis in the R language, along with web visualization toolkits based on D3js and HTML5 canvas, often make Matplotlib feel clunky and old-fashioned.
Still, I'm of the opinion that we cannot ignore Matplotlib's strength as a well-tested, cross-platform graphics engine.
Recent Matplotlib versions make it relatively easy to set new global plotting styles (see Customizing Matplotlib: Configurations and Style Sheets), and people have been developing new packages that build on its powerful internals to drive Matplotlib via cleaner, more modern APIs—for example, Seaborn (discussed in Visualization With Seaborn), ggpy, HoloViews, Altair, and even Pandas itself can be used as wrappers around Matplotlib's API.
Even with wrappers like these, it is still often useful to dive into Matplotlib's syntax to adjust the final plot output.
For this reason, I believe that Matplotlib itself will remain a vital piece of the data visualization stack, even if new tools mean the community gradually moves away from using the Matplotlib API directly.
General Matplotlib Tips
Before we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package.
Importing Matplotlib
Just as we use the np shorthand for NumPy and the pd shorthand for Pandas, we will use some standard shorthands for Matplotlib imports:
End of explanation
plt.style.use('classic')
Explanation: The plt interface is what we will use most often, as we shall see throughout this chapter.
Setting Styles
We will use the plt.style directive to choose appropriate aesthetic styles for our figures.
Here we will set the classic style, which ensures that the plots we create use the classic Matplotlib style:
End of explanation
%matplotlib inline
Explanation: Throughout this section, we will adjust this style as needed.
Note that the stylesheets used here are supported as of Matplotlib version 1.5; if you are using an earlier version of Matplotlib, only the default style is available.
For more information on stylesheets, see Customizing Matplotlib: Configurations and Style Sheets.
show() or No show()? How to Display Your Plots
A visualization you can't see won't be of much use, but just how you view your Matplotlib plots depends on the context.
The best use of Matplotlib differs depending on how you are using it; roughly, the three applicable contexts are using Matplotlib in a script, in an IPython terminal, or in an IPython notebook.
Plotting from a script
If you are using Matplotlib from within a script, the function plt.show() is your friend.
plt.show() starts an event loop, looks for all currently active figure objects, and opens one or more interactive windows that display your figure or figures.
So, for example, you may have a file called myplot.py containing the following:
```python
------- file: myplot.py ------
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x))
plt.show()
```
You can then run this script from the command-line prompt, which will result in a window opening with your figure displayed:
$ python myplot.py
The plt.show() command does a lot under the hood, as it must interact with your system's interactive graphical backend.
The details of this operation can vary greatly from system to system and even installation to installation, but matplotlib does its best to hide all these details from you.
One thing to be aware of: the plt.show() command should be used only once per Python session, and is most often seen at the very end of the script.
Multiple show() commands can lead to unpredictable backend-dependent behavior, and should mostly be avoided.
Plotting from an IPython shell
It can be very convenient to use Matplotlib interactively within an IPython shell (see IPython: Beyond Normal Python).
IPython is built to work well with Matplotlib if you specify Matplotlib mode.
To enable this mode, you can use the %matplotlib magic command after starting ipython:
```ipython
In [1]: %matplotlib
Using matplotlib backend: TkAgg
In [2]: import matplotlib.pyplot as plt
```
At this point, any plt plot command will cause a figure window to open, and further commands can be run to update the plot.
Some changes (such as modifying properties of lines that are already drawn) will not draw automatically: to force an update, use plt.draw().
Using plt.show() in Matplotlib mode is not required.
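A minimal sketch of that interactive workflow (assuming %matplotlib has already been run in the IPython shell as shown above):
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
line, = plt.plot(x, np.sin(x))   # a figure window opens in %matplotlib mode

# Changing a property of an already-drawn line does not refresh the window...
line.set_color('red')

# ...so force an update of the current figure.
plt.draw()
```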
Plotting from an IPython notebook
The IPython notebook is a browser-based interactive data analysis tool that can combine narrative, code, graphics, HTML elements, and much more into a single executable document (see IPython: Beyond Normal Python).
Plotting interactively within an IPython notebook can be done with the %matplotlib command, and works in a similar way to the IPython shell.
In the IPython notebook, you also have the option of embedding graphics directly in the notebook, with two possible options:
%matplotlib notebook will lead to interactive plots embedded within the notebook
%matplotlib inline will lead to static images of your plot embedded in the notebook
For this book, we will generally opt for %matplotlib inline:
End of explanation
import numpy as np
x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '--');
Explanation: After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic:
End of explanation
fig.savefig('my_figure.png')
Explanation: Saving Figures to File
One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
Saving a figure can be done using the savefig() command.
For example, to save the previous figure as a PNG file, you can run this:
End of explanation
!ls -lh my_figure.png
Explanation: We now have a file called my_figure.png in the current working directory:
End of explanation
from IPython.display import Image
Image('my_figure.png')
Explanation: To confirm that it contains what we think it contains, let's use the IPython Image object to display the contents of this file:
End of explanation
fig.canvas.get_supported_filetypes()
Explanation: In savefig(), the file format is inferred from the extension of the given filename.
Depending on what backends you have installed, many different file formats are available.
The list of supported file types can be found for your system by using the following method of the figure canvas object:
End of explanation
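Because the output format is inferred from the filename extension, writing the same figure in another of the listed formats is just a matter of changing the suffix; a small sketch (assuming your backend supports PDF and SVG output):
```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x))

# The extension selects the output type.
fig.savefig('my_figure.pdf')
fig.savefig('my_figure.svg')
```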
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, np.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x));
Explanation: Note that when saving your figure, it's not necessary to use plt.show() or related commands discussed earlier.
Two Interfaces for the Price of One
A potentially confusing feature of Matplotlib is its dual interfaces: a convenient MATLAB-style state-based interface, and a more powerful object-oriented interface. We'll quickly highlight the differences between the two here.
MATLAB-style Interface
Matplotlib was originally written as a Python alternative for MATLAB users, and much of its syntax reflects that fact.
The MATLAB-style tools are contained in the pyplot (plt) interface.
For example, the following code will probably look quite familiar to MATLAB users:
End of explanation
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x));
Explanation: It is important to note that this interface is stateful: it keeps track of the "current" figure and axes, which are where all plt commands are applied.
You can get a reference to these using the plt.gcf() (get current figure) and plt.gca() (get current axes) routines.
While this stateful interface is fast and convenient for simple plots, it is easy to run into problems.
For example, once the second panel is created, how can we go back and add something to the first?
This is possible within the MATLAB-style interface, but a bit clunky.
Fortunately, there is a better way.
Object-oriented interface
The object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.
Rather than depending on some notion of an "active" figure or axes, in the object-oriented interface the plotting functions are methods of explicit Figure and Axes objects.
To re-create the previous plot using this style of plotting, you might do the following:
End of explanation |
10,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Life is short, I use Python
Python, Lesson 4
Course outline
1. numpy
2. pandas
3. matplotlib
numpy
Arrays vs. lists: a list can hold data of any type, while an array can only hold data of a single type
Step1: Convert an existing list to an array
Step2: Creating arrays
Step3: random
Step4: Values over a range
Step5: | Data type | Description |
Step6: Array attributes
Step7: Arithmetic
Step8: | Operator | Equivalent ufunc | Description |
|---------------|---------------------|---------------------------------------|
|+ |np.add |Addition (e.g., 1 + 1 = 2) |
|- |np.subtract |Subtraction (e.g., 3 - 2 = 1) |
|- |np.negative |Unary negation (e.g., -2) |
|* |np.multiply |Multiplication (e.g., 2 * 3 = 6) |
|/ |np.divide |Division (e.g., 3 / 2 = 1.5) |
|// |np.floor_divide |Floor division (e.g., 3 // 2 = 1) |
|** |np.power |Exponentiation (e.g., 2 ** 3 = 8) |
|% |np.mod |Modulus/remainder (e.g., 9 % 4 = 1)|
Step9: Aggregation
Step10: A notebook tip
%timeit code ; use this to measure how efficiently a piece of code runs
Step11: The timings above show that np.sum is much faster, so it is the recommended choice
Comparison
Step12: | Operator | Equivalent ufunc || Operator | Equivalent ufunc |
|---------------|---------------------||---------------|---------------------|
|== |np.equal ||!= |np.not_equal |
|< |np.less ||<= |np.less_equal |
|> |np.greater ||>= |np.greater_equal |
Step13: Reshaping
Step14: Sorting
Step15: Concatenation | Python Code:
import array
a = array.array('i', range(10))
# every element must have the same type; assigning a string raises a TypeError on purpose
a[1] = 's'
a
import numpy as np
Explanation: Life is short, I use Python
Python, Lesson 4
Course outline
1. numpy
2. pandas
3. matplotlib
numpy
Arrays vs. lists: a list can hold data of any type, while an array can only hold data of a single type
End of explanation
a_list = list(range(10))
b = np.array(a_list)
type(b)
Explanation: Convert an existing list to an array
End of explanation
a = np.zeros(10, dtype=int)
print(type(a))
# check the array's element type (dtype)
a.dtype
a = np.zeros((4,4), dtype=int)
print(type(a))
# check the array's element type (dtype)
print(a.dtype)
a
np.ones((4,4), dtype=float)
np.full((3,3), 3.14)
a
np.zeros_like(a)
np.ones_like(a)
np.full_like(a, 4.12, dtype=float)
Explanation: Creating arrays
End of explanation
import random  # needed for the two calls below; it was not imported earlier in this notebook
print(random.randint(5,10))
print(random.random())
np.random.random((3,3))
# used very often in practice
np.random.randint(0,10, (5,5))
Explanation: random
End of explanation
list(range(0,10,2))
np.arange(0,3,2)
# used often
np.linspace(0, 3, 10)
# n-by-n identity matrix
np.eye(5)
Explanation: Values over a range
End of explanation
# element access in a nested list
var = [[1,2,3], [3,4,5], [5,6,7]]
var[0][0]
# element access in an array
a = np.array(var)
a[-1][0]
a
# these two access styles are equivalent
a[2, 0], a[2][0]
# array slicing
a[:2, :2]
# NOT equivalent to the slicing above
a[:2][:2]
Explanation: | Data type | Description |
|:---------------|:-------------|
| bool_ | Boolean (True or False) stored as a byte |
| int_ | Default integer type (same as C long; normally either int64 or int32)|
| intc | Identical to C int (normally int32 or int64)|
| intp | Integer used for indexing (same as C ssize_t; normally either int32 or int64)|
| int8 | Byte (-128 to 127)|
| int16 | Integer (-32768 to 32767)|
| int32 | Integer (-2147483648 to 2147483647)|
| int64 | Integer (-9223372036854775808 to 9223372036854775807)|
| uint8 | Unsigned integer (0 to 255)|
| uint16 | Unsigned integer (0 to 65535)|
| uint32 | Unsigned integer (0 to 4294967295)|
| uint64 | Unsigned integer (0 to 18446744073709551615)|
| float_ | Shorthand for float64.|
| float16 | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa|
| float32 | Single precision float: sign bit, 8 bits exponent, 23 bits mantissa|
| float64 | Double precision float: sign bit, 11 bits exponent, 52 bits mantissa|
| complex_ | Shorthand for complex128.|
| complex64 | Complex number, represented by two 32-bit floats|
| complex128| Complex number, represented by two 64-bit floats|
Accessing array elements
End of explanation
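One behaviour the indexing cells above do not spell out (added here as a side note, not part of the original lesson): a NumPy slice is a view onto the same memory, so writing through the slice changes the parent array.
```python
import numpy as np

a = np.arange(10)
s = a[:3]          # slicing returns a *view*, not a copy
s[0] = 99
print(a)           # the original array now starts with 99

b = a[:3].copy()   # take an explicit copy when an independent array is needed
b[0] = 0
print(a)           # unchanged this time
```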
a
# number of dimensions
print(a.ndim)
# shape
print(a.shape)
# size
print(a.size)
# dtype
print(a.dtype)
# a.itemsize
print(a.itemsize)
# nbytes
print(a.nbytes)
Explanation: Array attributes
End of explanation
a = np.array(list(range(10)))
a
print(a + 10)
print(a - 10)
print(a * 100)
a = np.full((3,3), 1.0, dtype=float)
a + 10 # equivalent to np.add(a, 10)
Explanation: Arithmetic
End of explanation
a = np.linspace(0, np.pi, 5)
b = np.sin(a)
print(a)
print(b)
Explanation: | Operator | Equivalent ufunc | Description |
|---------------|---------------------|---------------------------------------|
|+ |np.add |Addition (e.g., 1 + 1 = 2) |
|- |np.subtract |Subtraction (e.g., 3 - 2 = 1) |
|- |np.negative |Unary negation (e.g., -2) |
|* |np.multiply |Multiplication (e.g., 2 * 3 = 6) |
|/ |np.divide |Division (e.g., 3 / 2 = 1.5) |
|// |np.floor_divide |Floor division (e.g., 3 // 2 = 1) |
|** |np.power |Exponentiation (e.g., 2 ** 3 = 8) |
|% |np.mod |Modulus/remainder (e.g., 9 % 4 = 1)|
End of explanation
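A quick illustrative check (my addition) that a few of the operators in the table really do map onto the listed ufuncs:
```python
import numpy as np

a = np.array([9, 4, 2])
print(a % 4, np.mod(a, 4))       # % and np.mod give the same result
print(a ** 2, np.power(a, 2))    # ** and np.power give the same result
print(-a, np.negative(a))        # unary - and np.negative give the same result
```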
# summation
print(sum([1,2,3,4,5,6]))
# summing a 1-D array
a = np.full(10, 2.3)
print(sum(a))
# summing a multi-dimensional array
a = np.array([[1,2],[3,4]])
print(sum(a))
# summation with np.sum
np.sum(a)
np.sum(a, axis=1)
np.max(a, axis=1)
n = np.random.rand(10000)
Explanation: Aggregation
End of explanation
%timeit sum(n)
%timeit np.sum(n)
Explanation: A notebook tip
%timeit code ; use this to measure how efficiently a piece of code runs
End of explanation
a = np.array(range(10))
a
a > 3
a != 3
a == a
Explanation: The timings above show that np.sum is far more efficient, so it is the recommended choice
Comparison
End of explanation
np.all(a>-1)
np.any(a>-1)
Explanation: | Operator | Equivalent ufunc || Operator | Equivalent ufunc |
|---------------|---------------------||---------------|---------------------|
|== |np.equal ||!= |np.not_equal |
|< |np.less ||<= |np.less_equal |
|> |np.greater ||>= |np.greater_equal |
End of explanation
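Comparisons return boolean arrays, which can also be used to count and select elements; a short extra illustration (not in the original notebook):
```python
import numpy as np

a = np.arange(10)
mask = a > 3
print(np.sum(mask))   # how many elements are greater than 3 (True counts as 1)
print(a[mask])        # boolean masking selects just those elements
```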
a = np.full((2,10), 1, dtype=float)
a
a.reshape(4, 5)
Explanation: Reshaping
End of explanation
l = [
[1,2,3],
[34,12,4],
[32,2,33]
]
a = np.array(l)
a
np.sort(a)
a.sort(axis=0)
a
Explanation: Sorting
End of explanation
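Besides np.sort, np.argsort is often useful: it returns the indices that would sort the array, which lets you reorder one array by another. A small sketch (my addition):
```python
import numpy as np

values = np.array([34, 12, 4])
order = np.argsort(values)   # indices that would sort `values`
print(order)                 # [2 1 0]
print(values[order])         # [ 4 12 34]
```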
a = np.array([1, 2, 3])
b = np.array([[0, 2, 4], [1, 3, 5]])
# concatenate along axis 0 (stack the rows)
np.concatenate([b,b,b], axis=0)
# concatenate along axis 1 (stack the columns side by side)
np.concatenate([b,b,b], axis=1)
Explanation: Concatenation
End of explanation |
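For 2-D arrays the same result is often written with np.vstack / np.hstack, which are shorthand for concatenating along axis 0 / axis 1; an extra illustration using an array like the one above:
```python
import numpy as np

b = np.array([[0, 2, 4], [1, 3, 5]])
print(np.vstack([b, b]))   # same as np.concatenate([b, b], axis=0)
print(np.hstack([b, b]))   # same as np.concatenate([b, b], axis=1)
```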
10,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
Step1: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step2: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here
Step3: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results. | Python Code:
import pandas as pd
%matplotlib inline
from sklearn import datasets
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import tree
iris = datasets.load_iris()
x = iris.data[:,2:]
y = iris.target
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:, 0], x[:, 1], c=y, cmap=plt.cm.CMRmap)
plt.xlabel('Petal length (cm)')
plt.ylabel('Petal width (cm)')
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.5f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt)
Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
End of explanation
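The exercise says the tree visual is optional; if you do want to look at the fitted tree, one possible way (a sketch, assuming the `dt` classifier trained above and that Graphviz's `dot` tool is available for rendering) is scikit-learn's export_graphviz:
```python
from sklearn import tree

# Write the tree structure to a .dot file; the two feature names match the
# petal length / petal width columns used to fit `dt` above.
with open('iris_tree.dot', 'w') as f:
    tree.export_graphviz(dt, out_file=f,
                         feature_names=['petal length (cm)', 'petal width (cm)'])
# Then render it from a shell, e.g.:  dot -Tpng iris_tree.dot -o iris_tree.png
```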
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.5f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt)
Explanation: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
End of explanation
breast = datasets.load_breast_cancer()
breast.data
x = breast.data[:,:] # the attributes
y = breast.target
plt.figure(2, figsize=(8, 6))
plt.scatter(x[:, 0], x[:, 1], c=y, cmap=plt.cm.CMRmap)
Explanation: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
End of explanation
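A few quick, illustrative ways to answer those questions directly from the loaded Bunch object (one possible exploration, not the only one):
```python
from sklearn import datasets
import pandas as pd

breast = datasets.load_breast_cancer()
print(breast.feature_names)                      # the 30 measured attributes
print(breast.target_names)                       # ['malignant' 'benign'] is what we predict
print(breast.data.shape)                         # (569, 30)
print(pd.Series(breast.target).value_counts())   # how balanced the two classes are
```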
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.5f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.5f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(x_test,y_test,dt)
Explanation: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
End of explanation |
10,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# Note: keep the random initialization above; overriding every weight with 1 would prevent proper training
self.lr = learning_rate
# Set self.activation_function to sigmoid function
self.activation_function = lambda x : 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
### Forward pass ###
print("X:", X)
print("y:", y)
hidden_inputs = np.dot(X, self.weights_input_to_hidden)
print("hidden_inputs:", hidden_inputs)
hidden_outputs = self.activation_function(hidden_inputs)
print("hidden_outputs:", hidden_outputs)
# Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
print("final_inputs:", final_inputs)
final_outputs = final_inputs # f(x) = y
print("final_outputs:", final_outputs)
### Backward pass ###
error = y - final_outputs
print("error:", error)
output_error_term = error * 1 # f'(x) = 1
print("output_error_term:", output_error_term)
# hidden layer
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
## Weight step ##
# hidden to output: outer product of the hidden activations and the output error term
delta_weights_h_o += hidden_outputs[:, None] * output_error_term
print("delta_weights_h_o:", delta_weights_h_o)
# input to hidden: outer product of the inputs and the hidden error term
delta_weights_i_h += X[:, None] * hidden_error_term
print("delta_weights_i_h:", delta_weights_i_h)
# Update the weights with the accumulated steps, averaged over the records
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
print("weights_hidden_to_output:", self.weights_hidden_to_output)
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
print("weights_input_to_hidden:", self.weights_input_to_hidden)
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# hidden layer
hidden_inputs = np.dot(features, self.weights_input_to_hidden)
hidden_outputs = self.activation_function(hidden_inputs)
# Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
final_outputs = final_inputs # f(x) = y
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
network = NeuralNetwork(3, 2, 1, 0.5)
feature_batch = train_features.values[0:1, 0:3]
target_batch = train_targets.values[0:1, 0:1]
network.train(feature_batch, target_batch)
network.run(feature_batch)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
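To make the hint about the identity output activation concrete, here is a minimal hand-worked sketch of a single forward pass (the numbers are toy values chosen only for illustration; they are not project data):
```python
import numpy as np

x = np.array([0.5, -0.2, 0.1])             # one input record
W_ih = np.array([[0.1, -0.2],
                 [0.4,  0.5],
                 [-0.3,  0.2]])             # input -> hidden weights
W_ho = np.array([[0.3], [-0.1]])            # hidden -> output weights

sigmoid = lambda z: 1 / (1 + np.exp(-z))

hidden_out = sigmoid(x @ W_ih)              # hidden layer uses the sigmoid
y_hat = hidden_out @ W_ho                   # output layer is f(x) = x: no activation applied
print(y_hat)

# Since f(x) = x has derivative 1, the output error term is simply (y - y_hat) * 1.
```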
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
print("this one)")
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
print("that one")
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
10,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Overfitting and underfitting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the IMDB dataset
Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set; it was chosen to demonstrate when overfitting occurs and how to fight it.
Multi-hot encoding turns the integer sequences into vectors of 0s and 1s. Concretely, the sequence [3, 5] becomes a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones.
Step3: Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so as the plot shows, there are more 1-values near index zero
Step4: Demonstrate overfitting
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters (determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often called the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can easily learn a perfect, dictionary-like mapping between training samples and their targets, a mapping with no generalization power that is useless for predictions on previously unseen data.
Always keep this in mind: deep learning models tend to fit the training set well, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization capacity, it cannot learn such a mapping easily. To minimize its loss it has to learn compressed representations with more predictive power. At the same time, a model that is too small will have difficulty fitting the training data. There is a balance to strike between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula for deciding the right size or architecture of a model (in terms of the number of layers or the number of units). You have to experiment with a series of different architectures.
To find an appropriate model size, it is best to start with relatively few layers and parameters, and then add layers or make them larger until you see the validation loss stop improving. Let's try this with the movie review classification network.
We'll build a simple baseline model that uses only Dense layers, then create smaller and larger versions and compare them.
Create a baseline model
Step5: Create a smaller model
Let's build a model with fewer hidden units to compare against the baseline model we just created
Step6: Train this model using the same data
Step7: Create a bigger model
We can build a very large model to see how quickly it starts to overfit. Let's add a network with far more capacity than this problem warrants and compare it
Step8: Again, train the model using the same data
Step9: Plot the training and validation loss
Step10: The larger network begins overfitting almost immediately, after just the first epoch, and it overfits much more severely. The more capacity a network has, the more quickly it can model the training set (giving a low training loss), but the more susceptible it is to overfitting (giving a large gap between the training and validation loss).
Strategies
Add weight regularization
You may have heard of Occam's Razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. The same applies to the models learned by neural networks: given some training data and a network architecture, there are many combinations of weight values (that is, many possible models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" here is one whose distribution of parameter values has less entropy (or, as we saw in the previous section, one with fewer parameters altogether). A common way to mitigate overfitting is therefore to constrain the network's complexity by forcing its weights to take only small values, which makes the distribution of weight values more regular. This is called "weight regularization": a cost associated with having large weights is added to the network's loss function. This cost comes in two flavors
Step11: l2(0.001) means that every value in the layer's weight matrix adds 0.001 * weight_coefficient_value**2 to the total loss of the network. This penalty is only added at training time, so the network's loss will be much higher during training than during testing.
Let's check the effect of the L2 regularization
Step12: As the results show, the L2-regularized model resists overfitting much better than the baseline model, even though both models have the same number of parameters.
Add dropout
Dropout is one of the most effective and most widely used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, randomly "drops out" (sets to zero) some of the layer's output features during training. Suppose a layer would normally return the vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout a few entries become zero at random, for example [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of features that are zeroed out, usually between 0.2 and 0.5. At test time no units are dropped out; instead, the layer's output values are scaled down by the dropout rate to balance the fact that more units are active than at training time.
In tf.keras you add dropout to a network with the Dropout layer, which applies dropout to the output of the layer right before it.
Let's add two Dropout layers to the IMDB network and see how much they reduce overfitting | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow.compat.v1 as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
Explanation: Overfitting and underfitting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Community translations are best-effort, so despite the effort to keep them accurate and up to date,
they may not exactly match the contents of the official English documentation.
If you see something in this translation that could be improved,
please send a pull request to the tensorflow/docs GitHub repository.
To volunteer to write or review translations,
please email
[email protected].
As always, the code in this example uses the tf.keras API; you can learn more about it in the TensorFlow Keras guide.
In the two previous examples, classifying movie reviews and predicting housing prices, we saw that the model's performance on the validation set peaks after training for a number of epochs and then starts to decrease.
In other words, the model overfits the training set. Learning how to handle overfitting is essential. Although it is often possible to reach high accuracy on the training set, what we really want is a model that generalizes well to a test set (or to data it has never seen before).
The opposite of overfitting is underfitting. Underfitting occurs when there is still room for improvement on the test data. It can happen for several reasons: the model is too simple, it is over-regularized, or it simply has not been trained long enough. In other words, the network has not learned the relevant patterns in the training data.
If you train the model too long, it starts to overfit and learns patterns from the training data that do not generalize to the test data. We have to strike a balance between overfitting and underfitting, which is why we will learn how to train for an appropriate number of epochs.
The best way to prevent overfitting is to use more training data. A model trained on more data naturally generalizes better. When that is not possible, the next best option is to use techniques such as regularization, which constrain the quantity and type of information the model can store. If the network can only memorize a small number of patterns, the optimization process forces it to focus on the most prominent patterns, which have a better chance of generalizing.
In this notebook we will explore two common regularization techniques, weight regularization and dropout, and use them to improve our IMDB movie review classification model.
End of explanation
NUM_WORDS = 10000
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)
def multi_hot_sequences(sequences, dimension):
# create an all-zeros matrix of shape (len(sequences), dimension)
results = np.zeros((len(sequences), dimension))
for i, word_indices in enumerate(sequences):
results[i, word_indices] = 1.0 # set only the listed indices of results[i] to 1
return results
train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
Explanation: Download the IMDB dataset
Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit to the training set; it was chosen to demonstrate when overfitting occurs and how to fight it.
Multi-hot encoding turns the integer sequences into vectors of 0s and 1s. Concretely, the sequence [3, 5] becomes a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones.
End of explanation
plt.plot(train_data[0])
Explanation: Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so as the plot shows, there are more 1-values near index zero:
End of explanation
baseline_model = keras.Sequential([
# `input_shape` is given here so that the `.summary` method can report the layer shapes
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
baseline_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
baseline_model.summary()
baseline_history = baseline_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
Explanation: Demonstrate overfitting
The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters (determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often called the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can easily learn a perfect, dictionary-like mapping between training samples and their targets, a mapping with no generalization power that is useless for predictions on previously unseen data.
Always keep this in mind: deep learning models tend to fit the training set well, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization capacity, it cannot learn such a mapping easily. To minimize its loss it has to learn compressed representations with more predictive power. At the same time, a model that is too small will have difficulty fitting the training data. There is a balance to strike between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula for deciding the right size or architecture of a model (in terms of the number of layers or the number of units). You have to experiment with a series of different architectures.
To find an appropriate model size, it is best to start with relatively few layers and parameters, and then add layers or make them larger until you see the validation loss stop improving. Let's try this with the movie review classification network.
We'll build a simple baseline model that uses only Dense layers, then create smaller and larger versions and compare them.
Create a baseline model
End of explanation
smaller_model = keras.Sequential([
keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(4, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
smaller_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
smaller_model.summary()
Explanation: Create a smaller model
Let's build a model with fewer hidden units to compare against the baseline model we just created:
End of explanation
smaller_history = smaller_model.fit(train_data,
train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
Explanation: Train this model using the same data:
End of explanation
bigger_model = keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(512, activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
bigger_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
bigger_model.summary()
Explanation: Create a bigger model
We can build a very large model to see how quickly it starts to overfit. Let's add a network with far more capacity than this problem warrants and compare it:
End of explanation
bigger_history = bigger_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
Explanation: Again, train the model using the same data:
End of explanation
def plot_history(histories, key='binary_crossentropy'):
plt.figure(figsize=(16,10))
for name, history in histories:
val = plt.plot(history.epoch, history.history['val_'+key],
'--', label=name.title()+' Val')
plt.plot(history.epoch, history.history[key], color=val[0].get_color(),
label=name.title()+' Train')
plt.xlabel('Epochs')
plt.ylabel(key.replace('_',' ').title())
plt.legend()
plt.xlim([0,max(history.epoch)])
plot_history([('baseline', baseline_history),
('smaller', smaller_history),
('bigger', bigger_history)])
Explanation: Plot the training and validation loss
<!--TODO(markdaoust): This should be a one-liner with tensorboard -->
The solid lines show the training loss and the dashed lines show the validation loss (a lower validation loss indicates a better model). Here the smaller network starts overfitting later than the baseline model (after epoch 6 rather than epoch 4), and once it does start overfitting its performance degrades much more slowly.
End of explanation
l2_model = keras.models.Sequential([
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
activation=tf.nn.relu),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
l2_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', 'binary_crossentropy'])
l2_model_history = l2_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
Explanation: The larger network begins overfitting almost immediately, after just the first epoch, and it overfits much more severely. The more capacity a network has, the more quickly it can model the training set (giving a low training loss), but the more susceptible it is to overfitting (giving a large gap between the training and validation loss).
Strategies
Add weight regularization
You may have heard of Occam's Razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. The same applies to the models learned by neural networks: given some training data and a network architecture, there are many combinations of weight values (that is, many possible models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" here is one whose distribution of parameter values has less entropy (or, as we saw in the previous section, one with fewer parameters altogether). A common way to mitigate overfitting is therefore to constrain the network's complexity by forcing its weights to take only small values, which makes the distribution of weight values more regular. This is called "weight regularization": a cost associated with having large weights is added to the network's loss function. This cost comes in two flavors:
L1 regularization, where the added cost is proportional to the absolute value of the weight coefficients (that is, the "L1 norm" of the weights).
L2 regularization, where the added cost is proportional to the square of the weight coefficients (that is, the squared "L2 norm" of the weights). In neural networks, L2 regularization is also called weight decay. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization.
In tf.keras, weight regularization is added by passing regularizer objects to layers as keyword arguments. Let's add L2 weight regularization now.
End of explanation
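The explanation above describes both penalties, while the model in this notebook only uses the L2 flavor. For comparison, an L1-regularized layer would look like the following sketch (illustrative only; it reuses keras, tf and NUM_WORDS from the cells above):
```python
# One Dense layer with an L1 penalty instead of L2.
l1_layer = keras.layers.Dense(16,
                              kernel_regularizer=keras.regularizers.l1(0.001),
                              activation=tf.nn.relu,
                              input_shape=(NUM_WORDS,))

# keras.regularizers.l1_l2(l1=0.001, l2=0.001) applies both penalties at once.
```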
plot_history([('baseline', baseline_history),
('l2', l2_model_history)])
Explanation: l2(0.001) means that every value in the layer's weight matrix adds 0.001 * weight_coefficient_value**2 to the total loss of the network. This penalty is only added at training time, so the network's loss will be much higher during training than during testing.
Let's check the effect of the L2 regularization:
End of explanation
dpt_model = keras.models.Sequential([
keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
keras.layers.Dropout(0.5),
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
dpt_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy','binary_crossentropy'])
dpt_model_history = dpt_model.fit(train_data, train_labels,
epochs=20,
batch_size=512,
validation_data=(test_data, test_labels),
verbose=2)
plot_history([('baseline', baseline_history),
('dropout', dpt_model_history)])
Explanation: As the results show, the L2-regularized model resists overfitting much better than the baseline model, even though both models have the same number of parameters.
Add dropout
Dropout is one of the most effective and most widely used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, randomly "drops out" (sets to zero) some of the layer's output features during training. Suppose a layer would normally return the vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout a few entries become zero at random, for example [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of features that are zeroed out, usually between 0.2 and 0.5. At test time no units are dropped out; instead, the layer's output values are scaled down by the dropout rate to balance the fact that more units are active than at training time.
In tf.keras you add dropout to a network with the Dropout layer, which applies dropout to the output of the layer right before it.
Let's add two Dropout layers to the IMDB network and see how much they reduce overfitting:
End of explanation |
10,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
W6 Lab Assignment
Deep dive into Histogram and boxplot.
Step1: Histogram
Let's revisit the table from the class
| Hours | Frequency |
|-------|-----------|
| 0-1 | 4,300 |
| 1-3 | 6,900 |
| 3-5 | 4,900 |
| 5-10 | 2,000 |
| 10-24 | 2,100 |
You can draw a histogram by just providing bins and counts instead of a list of numbers. So, let's do that for convenience.
Step2: Draw histogram using this data. Useful query
Step3: As you can see, the default histogram does not normalize with binwidth and simply shows the counts! This can be very misleading if you are working with variable bin width. One simple way to fix this is using the option normed.
Step4: IMDB data
How does matplotlib decide the bin width? Let's try with the IMDb data.
Step5: Plot the histogram of movie ratings using the plt.hist() function.
Step6: Have you noticed that this function returns three objects? Take a look at the documentation here to figure out what they are.
To get the returned three objects
Step7: Actually, n_raw contains the values of histograms, i.e., the number of movies in each of the 10 bins. Thus, the sum of the elements in n_raw should be equal to the total number of movies
Step8: The second returned object (bins_raw) is a list containing the edges of the 10 bins
Step9: The above for loop can be conveniently rewritten as the following, using list comprehension and the zip() function. Can you explain what's going on inside the zip?
Step10: Noticed that the width of each bin is the same? This is equal-width binning. We can calculate the width as
Step11: Now, let's plot the histogram where the y axis is normed.
Step12: In this case, the edges of the 10 bins do not change. But now n represents the heights of the bins. Can you verify that matplotlib has correctly normed the heights of the bins?
Hint
Step13: Selecting binsize
A nice way to explore this is using "small multiples" with a set of sample bin sizes. In other words, pick some binsizes that you want to see and draw many plots within a single "figure". Read about subplot. For instance, you can do something like
Step14: What does the argument in plt.subplot(1,2,1) mean?
http
Step15: Do you notice weird patterns that emerge from bins=40? Can you guess why you see such patterns? What are the peaks and what are the empty bars? What do they tell you about choosing the binsize in histograms?
Now, let's try to apply several algorithms for finding the number of bins.
Step16: Investigating the anomalies in the histogram
Let's investigate the anomalies in the histogram.
Step17: We can locate where the empty bins are, by checking whether the value in the n is zero or not.
Step18: One way to identify the peak is comparing the number to the next bin and see whether it is much higher than the next bin.
Step19: Ok. They don't necessarily cover the integer values. Let's see the minimum number of votes.
Step20: Ok, the minimum number of votes is 5 not 1. IMDB may only keep the rating information for movies with at least 5 votes. This may explain why the most frequent ratings are like 6.4 and 6.6. Let's plot the histogram with only the rows with 5 votes. Set the binsize 30.
Step21: Then, print out what are the most frequent rating values. Use value_counts() function for dataframe.
Step22: So, the most frequent values are not "x.0". Let's see the CDF.
Step23: What's going on? The number of votes is heavily skewed and most datapoints are at the left end.
Step24: Draw a histogram focused on the range [0, 10] to just see how many datapoints are there.
Step25: Let's assume that most 5 ratings are from 5 to 8 and see what we'll get. You can use itertools.product function to generate the fake ratings.
Step26: Boxplot
Let's look at the example data that we looked at during the class.
Step27: The numpy.percentile() function provides a way to calculate the percentiles. Note that using the option interpolation, you can specify which value to take when the percentile value lies in between numbers. The default is linear.
Step28: Can you explain why you get those first and third quartile values? The first quartile value is not 4, not 15, and not 9.5. Why?
Let's draw a boxplot with matplotlib. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
sns.set_style('white')
%matplotlib inline
Explanation: W6 Lab Assignment
Deep dive into Histogram and boxplot.
End of explanation
bins = [0, 1, 3, 5, 10, 24]
data = {0.5: 4300, 2: 6900, 4: 4900, 7: 2000, 15: 2100}
Explanation: Histogram
Let's revisit the table from the class
| Hours | Frequency |
|-------|-----------|
| 0-1 | 4,300 |
| 1-3 | 6,900 |
| 3-5 | 4,900 |
| 5-10 | 2,000 |
| 10-24 | 2,100 |
You can draw a histogram by just providing bins and counts instead of a list of numbers. So, let's do that for convenience.
End of explanation
# TODO: draw a histogram with pre-counted data.
#plt.xlabel("Hours")
val, weight = zip(*[(k, v) for k,v in data.items()])
plt.hist(val, weights=weight, bins = bins)
plt.xlabel("Hours")
Explanation: Draw histogram using this data. Useful query: Google search: matplotlib histogram pre-counted
End of explanation
# TODO: fix it with normed option.
plt.hist(val, weights=weight, bins = bins, normed = True)
Explanation: As you can see, the default histogram does not normalize with binwidth and simply shows the counts! This can be very misleading if you are working with variable bin width. One simple way to fix this is using the option normed.
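Note that in recent matplotlib releases the normed keyword has been removed; the equivalent call uses density=True:
plt.hist(val, weights=weight, bins=bins, density=True)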
End of explanation
# TODO: Load IMDB data into movie_df using pandas
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
movie_df.head()
Explanation: IMDB data
How does matplotlib decide the bin width? Let's try with the IMDb data.
End of explanation
plt.hist(movie_df['Rating'])
Explanation: Plot the histogram of movie ratings using the plt.hist() function.
End of explanation
n_raw, bins_raw, patches = plt.hist(movie_df['Rating'])
print(n_raw)
print(bins_raw)
Explanation: Have you noticed that this function returns three objects? Take a look at the documentation here to figure out what they are.
To get the returned three objects:
End of explanation
# TODO: test whether the sum of the numbers in n_raw is equal to the number of movies.
sum(n_raw)==len(movie_df)
Explanation: Actually, n_raw contains the values of histograms, i.e., the number of movies in each of the 10 bins. Thus, the sum of the elements in n_raw should be equal to the total number of movies:
End of explanation
# TODO: calculate the width of each bin and print them.
for i in range(len(bins_raw)-1):
print (bins_raw[i+1] - bins_raw[i])
Explanation: The second returned object (bins_raw) is a list containing the edges of the 10 bins: the first bin is [1.0,1.89], the second [1.89,2.78], and so on. We can calculate the width of each bin.
End of explanation
[ j-i for i,j in zip(bins_raw[:-1],bins_raw[1:]) ]
Explanation: The above for loop can be conveniently rewritten as the following, using list comprehension and the zip() function. Can you explain what's going on inside the zip?
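To see concretely what the zip() is doing, you can print the first few pairs; each tuple is one bin's (left edge, right edge):
list(zip(bins_raw[:-1], bins_raw[1:]))[:3]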
End of explanation
min_rating = min(movie_df['Rating'])
max_rating = max(movie_df['Rating'])
print(min_rating, max_rating)
print( (max_rating-min_rating) / 10 )
Explanation: Noticed that the width of each bin is the same? This is equal-width binning. We can calculate the width as:
End of explanation
n, bins, patches = plt.hist(movie_df['Rating'], normed=True)
print(n)
print(bins)
Explanation: Now, let's plot the histogram where the y axis is normed.
End of explanation
# TODO: verify that it is properly normalized.
normalizeList = []
for i in range(len(bins)):
    try:
        # Fraction of movies in the bin divided by the bin width should equal the normed height.
        Moviesbins = movie_df[(movie_df['Rating'] >= bins[i]) & (movie_df['Rating'] <= bins[i+1])]
        normalizeList.append(round(len(Moviesbins)/len(movie_df)/(bins[i+1]-bins[i]), 4))
    except IndexError:
        pass
print("Computed bin heights", normalizeList)
print("Data from histogram", n)
Explanation: In this case, the edges of the 10 bins do not change. But now n represents the heights of the bins. Can you verify that matplotlib has correctly normed the heights of the bins?
Hint: the area of each bin should be equal to the fraction of movies in that bin.
End of explanation
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
movie_df['Rating'].hist(bins=3)
plt.subplot(1,2,2)
movie_df['Rating'].hist(bins=100)
Explanation: Selecting binsize
A nice way to explore this is using the "small multiples" with a set of sample bin sizes. In other words, pick some binsizes that you want to see and draw many plots within a single "figure". Read about subplot. For instance, you can do something like:
End of explanation
binsizes = [2, 3, 5, 10, 30, 40, 60, 100 ]
plt.figure(1, figsize=(18,8))
for i, bins in enumerate(binsizes):
# TODO: use subplot and hist() function to draw 8 plots
plt.subplot(2, 4, i + 1)
movie_df['Rating'].hist(bins = bins)
plt.title("Bin size " + str(bins))
Explanation: What does the argument in plt.subplot(1,2,1) mean?
http://stackoverflow.com/questions/3584805/in-matplotlib-what-does-the-argument-mean-in-fig-add-subplot111
Ok, so create 8 subplots (2 rows and 4 columns) with the given binsizes.
End of explanation
N = len(movie_df['Rating'])
# TODO: plot three histograms based on three formulae
plt.figure(figsize=(12,4))
# Sqrt
nbins = int(np.sqrt(N))
plt.subplot(1,3,1)
plt.hist(movie_df['Rating'], bins = nbins)
plt.title("SQRT, {0} bins".format(nbins))
# Sturge's formula
plt.subplot(1,3,2)
nbins = int(np.ceil(np.log2(N) + 1))
plt.hist(movie_df['Rating'], bins = nbins)
plt.title("Sturge's, {0} bins".format(nbins))
# Freedman-Diaconis
plt.subplot(1,3,3)
data = movie_df['Rating'].sort_values()  # .order() was removed from pandas; sort_values() is the equivalent
iqr = np.percentile(data, 75) - np.percentile(data, 25)
width = 2*iqr/np.power(N, 1/3)
nbins = int((max(data) - min(data)) / width)
plt.hist(movie_df['Rating'], bins = nbins)
plt.title("Freedman-Diaconis, {0} bins".format(nbins))
Explanation: Do you notice weird patterns that emerge from bins=40? Can you guess why you see such patterns? What are the peaks and what are the empty bars? What do they tell you about choosing the binsize in histograms?
Now, let's try to apply several algorithms for finding the number of bins.
End of explanation
# TODO: draw the histogram with 120 bins
n, bins, patches = plt.hist(movie_df['Rating'], bins = 120)
plt.title("Histogram with bins 120")
plt.xlabel("Rating")
plt.ylabel("Frequency")
Explanation: Investigating the anomalies in the histogram
Let's investigate the anomalies in the histogram.
End of explanation
# TODO: print out bins that don't contain any values. Check whether they fall into range like [1.8XX, 1.8XX]
# useful zip: zip(bins[:-1], bins[1:], n) what does this do?
zip_values = zip(bins[:-1], bins[1:], n)
print("Range with value zero's are as follows")
for i in zip_values:
if i[2] == 0:
print([i[0], i[1]])
if str(i[0])[:3] == str(i[1])[:3]:
print("They fall in range")
# TODO: draw the histogram with 120 bins
n, bins, patches = plt.hist(movie_df['Rating'], bins = 120)
plt.title("Histogram with bins 120")
plt.xlabel("Rating")
plt.ylabel("Frequency")
Explanation: We can locate where the empty bins are, by checking whether the value in the n is zero or not.
End of explanation
# TODO: identify peaks and print the bins with the peaks
# e.g.
# [1.0, 1.1]
# [1.3, 1.4]
# [1.6, 1.7]
# ...
#
# you can use zip again like zip(bins[:-1], bins[1:] ... ) to access the data in two adjacent bins.
values = list(zip(bins[:-1], bins[1:], n))
print("Bin with peaks are as follows")
for i in range(1, len(values)):
try:
if ((values[i][2] > values[i-1][2]) and (values[i][2] > values[i+1][2])):
print([values[i][0], values[i][1]])
except IndexError:
pass
Explanation: One way to identify the peak is comparing the number to the next bin and see whether it is much higher than the next bin.
End of explanation
movie_df.describe()
Explanation: Ok. They don't necessarily cover the integer values. Let's see the minimum number of votes.
End of explanation
# TODO: plot the histogram only with ratings that have the minimum number of votes.
df = movie_df[movie_df['Votes'] == 5]
plt.hist(df['Rating'], bins = 30)
plt.xlabel("Rating")
plt.ylabel("Frequency")
plt.title("Histogram of rating with min no of votes")
Explanation: Ok, the minimum number of votes is 5 not 1. IMDB may only keep the rating information for movies with at least 5 votes. This may explain why the most frequent ratings are like 6.4 and 6.6. Let's plot the histogram with only the rows with 5 votes. Set the binsize 30.
End of explanation
# TODO: filter out the rows with the min number of votes (5) and then `value_counts()` them.
# sort the result to see what are the most common numbers.
df['Rating'].value_counts()
# As you can see in the following output, 6.4 is the most common rating.
Explanation: Then, print out what are the most frequent rating values. Use value_counts() function for dataframe.
End of explanation
# Plot the CDF of votes. One possible way:
plt.hist(movie_df['Votes'], bins=1000, normed=True, cumulative=True, histtype='step')
plt.xlabel('Votes')
plt.ylabel('CDF')
Explanation: So, the most frequent values are not "x.0". Let's see the CDF.
End of explanation
# TODO: plot the same thing but limit the xrange (xlim) to [0, 100].
plt.hist(movie_df['Votes'], bins=1000, normed=True, cumulative=True, histtype='step')
plt.xlim(0, 100)
plt.xlabel('Votes')
plt.ylabel('CDF')
Explanation: What's going on? The number of votes is heavily skewed and most datapoints are at the left end.
End of explanation
# TODO: set the xlim to [0, 10] adjust ylim and bins so that
# we can see how many datapoints are there for each # of votes.
# One way: use unit-width bins covering [0, 10] so the y axis rescales automatically.
plt.hist(movie_df['Votes'], bins=np.arange(0, 11))
plt.xlim(0, 10)
plt.xlabel('Votes')
plt.ylabel('Number of movies')
Explanation: Draw a histogram focused on the range [0, 10] to just see how many datapoints are there.
End of explanation
#list(product([5,6,7,8], repeat=5))[:10]
from itertools import product
from collections import Counter
c = Counter()
for x in product([5,6,7,8], repeat=5):
c[str(round(np.mean(x), 1))]+=1
sorted(c.items(), key=lambda x: x[1], reverse=True)
# or sorted(Counter(str(round(np.mean(x), 1)) for x in product([5,6,7,8], repeat=5)).items(), key=lambda x: x[1], reverse=True)
Explanation: Let's assume that most 5 ratings are from 5 to 8 and see what we'll get. You can use itertools.product function to generate the fake ratings.
End of explanation
data = [-1, 3, 3, 4, 15, 16, 16, 17, 23, 24, 24, 25, 35, 36, 37, 46]
Explanation: Boxplot
Let's look at the example data that we looked at during the class.
End of explanation
print(np.percentile(data, 25))
print(np.percentile(data, 50), np.median(data))
print(np.percentile(data, 75))
Explanation: The numpy.percentile() function provides a way to calculate the percentiles. Note that using the option interpolation, you can specify which value to take when the percentile value lies in between numbers. The default is linear.
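A quick illustration of the interpolation option (each choice resolves the in-between position differently):
for how in ['linear', 'lower', 'higher', 'midpoint', 'nearest']:
    print(how, np.percentile(data, 25, interpolation=how))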
End of explanation
# TODO: draw a boxplot of the data
plt.boxplot(data)
Explanation: Can you explain why you get those first and third quartile values? The first quartile value is not 4, not 15, and not 9.5. Why?
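A worked example of the default linear interpolation: with 16 sorted values the first-quartile position is $0.25 \times (16-1) = 3.75$, i.e. three quarters of the way from data[3] = 4 to data[4] = 15, so $Q_1 = 4 + 0.75 \times (15-4) = 12.25$; likewise the third-quartile position is $11.25$, which lies between 25 and 35 and gives $Q_3 = 27.5$.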
Let's draw a boxplot with matplotlib.
End of explanation |
10,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GeoNet FDSN webservice with Obspy demo - Event Service
This demo introduces some simple code that requests data using GeoNet's FDSN webservices and the obspy module in python. This notebook uses Python 3.
Getting Started - Import Modules
Step1: Define GeoNet FDSN client
Step2: Accessing Earthquake Information
Use the event service to access earthquake parameters from the catalogue.
This example requests the Kaikoura earthquake and aftershocks for 24 hours following the event, within a 0.5 degree radius of the epicenter. It then prints a list and plots the locations on a map
Step3: Single events can be requested using their PublicID, which is available from the GeoNet website. This example will demonstrate how to get additional information about the Kaikoura Earthquake.
Step4: Print out a summary of the information for the preferred origin.
Step5: List all available magnitudes and their associated uncertainties
Step6: List all arrivals used to locate the earthquake. | Python Code:
from obspy import UTCDateTime
from obspy.clients.fdsn import Client as FDSN_Client
from obspy import read_inventory
Explanation: GeoNet FDSN webservice with Obspy demo - Event Service
This demo introduces some simple code that requests data using GeoNet's FDSN webservices and the obspy module in python. This notebook uses Python 3.
Getting Started - Import Modules
End of explanation
client = FDSN_Client("GEONET")
Explanation: Define GeoNet FDSN client
End of explanation
starttime = "2016-11-13 11:00:00.000"
endtime = "2016-11-14 11:00:00.000"
cat = client.get_events(starttime=starttime, endtime=endtime,latitude=-42.693,longitude=173.022,maxradius=0.5,minmagnitude=5)
print(cat)
_=cat.plot(projection="local")
Explanation: Accessing Earthquake Information
Use the event service to access earthquake parameters from the catalogue.
This example requests the Kaikoura earthquake and aftershocks for 24 hours following the event, within a 0.5 degree radius of the epicenter. It then prints a list and plots the locations on a map
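If you want to keep the result around, the returned Catalog can also be written to disk (optional; the file name below is just an example):
cat.write("kaikoura_aftershocks.xml", format="QUAKEML")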
End of explanation
cat = client.get_events(eventid="2016p858000")
print(cat)
ev = cat[0]
print(ev)
Explanation: Single events can be requested using their PublicID, which is available from the GeoNet website. This example will demonstrate how to get additional information about the Kaikoura Earthquake.
End of explanation
origin = ev.origins[0]
print(origin)
Explanation: Print out a summary of the information for the preferred origin.
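Note that ev.origins[0] simply takes the first origin in the list; if you specifically want the preferred origin, obspy provides a helper (falling back to the first origin if none is marked preferred):
origin = ev.preferred_origin() or ev.origins[0]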
End of explanation
for m in range(len(ev.magnitudes)):
if 'uncertainty' in ev.magnitudes[m].mag_errors and ev.magnitudes[m].mag_errors['uncertainty'] != None and ev.magnitudes[m].resource_id == ev.preferred_magnitude_id:
print('%s = %f +/- %f - Preferred magnitude' % (ev.magnitudes[m].magnitude_type, ev.magnitudes[m].mag, ev.magnitudes[m].mag_errors['uncertainty']))
elif 'uncertainty' in ev.magnitudes[m].mag_errors and ev.magnitudes[m].mag_errors['uncertainty'] != None:
print('%s = %f +/- %f' % (ev.magnitudes[m].magnitude_type, ev.magnitudes[m].mag, ev.magnitudes[m].mag_errors['uncertainty']))
else:
print('%s = %f' % (ev.magnitudes[m].magnitude_type, ev.magnitudes[m].mag))
Explanation: List all available magnitudes and their associated uncertainties
End of explanation
print(origin.arrivals[0])
print(ev.picks[0])
for p in range(len(ev.picks)):
for a in range(len(origin.arrivals)):
if ev.picks[p].resource_id == origin.arrivals[a].pick_id:
if origin.arrivals[a].time_weight > 0:
print(ev.picks[p].time, ev.picks[p].waveform_id['station_code'], origin.arrivals[a].distance, origin.arrivals[a].phase, origin.arrivals[a].time_residual)
Explanation: List all arrivals used to locate the earthquake.
End of explanation |
10,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook is an analysis of the Crowdflower labels of 10,000 revisions of Wikipedia talk pages by users who have been blocked for personal harassment. These revisions are chosen from within a neighbourhood of 5 revisions from a block event. This dataset has been cleaned and filtered to remove common administrator messages. These datasets are annotated via crowdflower to measure friendliness, aggressiveness and whether the comment constitutes a personal attack.
On Crowdflower, each revision is rated 7 times. The raters are given three questions
Step1: Plot histogram of average ratings by revision
For each revision, we take the average of all the ratings by level of friendliness/aggressiveness and for each of the answers to Question 3. The histograms of these averages are displayed below.
Step2: Selected harassing and aggressive revisions by quartile
We look at a sample of revisions whose average aggressive score falls into various quantiles. This allows us to subjectively evaluate the quality of the questions that we are asking on Crowdflower.
Step3: Selected revisions on multiple questions
In this section, we examine a selection of revisions by their answer to Question 3 and sorted by aggression score. Again, this allows us to subjectively evaluate the quality of questions and responses that we obtain from Crowdflower.
Step4: Inter-Annotator Agreement
Below, we compute the Krippendorf's Alpha, which is a measure of the inter-annotator agreement of our Crowdflower responses. We achieve an Alpha value of 0.668 on our dataset, which is a relatively good level of inter-annotator agreement for this type of subjective inquiry.
Step5: T-Test of Aggressiveness
We explore whether aggressiveness changes in the tone of comments from immediately before a block event to immediately after. | Python Code:
%matplotlib inline
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
import datetime
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', 1000)
# Download data from google drive (Respect Eng / Wiki Collab): wikipdia data/v4_annotated
dat = pd.read_csv('../data/annotated_onion_layer_5_rows_0_to_10000.csv')
# Remove test questions
dat = dat[dat['_golden'] == False]
# Replace missing data with 'False'
dat = dat.replace(np.nan, False, regex=True)
def create_column_of_counts(df, col):
return df.apply(lambda x: col in str(x))
attack_columns = ['not_attack', 'other', 'quoting', 'recipient', 'third_party']
for col in attack_columns:
dat[col] = create_column_of_counts(dat['is_harassment_or_attack'], col)
def create_column_of_counts_from_nums(df, col):
return df.apply(lambda x: int(col) == x)
aggressive_columns = ['-3', '-2', '-1', '0', '1', '2', '3']
for col in aggressive_columns:
dat[col] = create_column_of_counts_from_nums(dat['aggression_score'], col)
dat['not_attack_0'] = 1 - dat['not_attack']
dat['not_attack_1'] = dat['not_attack']
# Group the data
agg_dict = dict.fromkeys(attack_columns, 'mean')
agg_dict.update(dict.fromkeys(aggressive_columns, 'sum'))
agg_dict.update({'clean_diff': 'first', 'na': 'mean', 'aggression_score': 'mean',
'_id':'count', 'not_attack_0':'sum', 'not_attack_1': 'sum',
'block_timestamps': 'first', 'rev_timestamp': 'first'})
grouped_dat = dat.groupby(['rev_id'], as_index=False).agg(agg_dict)
# Get rid of data which the majority thinks is not in English or not readable
grouped_dat = grouped_dat[grouped_dat['na'] < 0.5]
Explanation: Introduction
This notebook is an analysis of the Crowdflower labels of 10,000 revisions of Wikipedia talk pages by users who have been blocked for personal harassment. These revisions are chosen from within a neighbourhood of 5 revisions from a block event. This dataset has been cleaned and filtered to remove common administrator messages. These datasets are annotated via crowdflower to measure friendliness, aggressiveness and whether the comment constitutes a personal attack.
On Crowdflower, each revision is rated 7 times. The raters are given three questions:
Is this comment not English or not human readable?
Column 'na'
How aggressive or friendly is the tone of this comment?
Column 'how_aggressive_or_friendly_is_the_tone_of_this_comment'
Ranges from '---' (Very Aggressive) to '+++' (Very Friendly)
Does the comment contain a personal attack or harassment? Please mark all that apply:
Column 'is_harassment_or_attack'
Users can specify that the attack is:
Targeted at the recipient of the message (i.e. you suck). ('recipient')
Targeted at a third party (i.e. Bob sucks). ('third_party')
Being reported or quoted (i.e. Bob said Henri sucks). ('quoting')
Another kind of attack or harassment. ('other')
This is not an attack or harassment. ('not_attack')
Below, we plot histograms of the units by average rating of each of the questions, examine quantiles of answers, and compute inter-annotator agreement. We also study whether or not there is a change in aggressiveness before and after a block event.
Loading packages and data
End of explanation
def hist_comments(df, bins, plot_by, title):
plt.figure()
sliced_array = df[[plot_by]]
weights = np.ones_like(sliced_array)/len(sliced_array)
sliced_array.plot.hist(bins = bins, legend = False, title = title, weights=weights)
plt.ylabel('Proportion')
plt.xlabel('Average Score')
bins = np.linspace(-3,3,11)
hist_comments(grouped_dat, bins, 'aggression_score', 'Average Aggressiveness Rating for onion_blocked Data')
bins = np.linspace(0,1,9)
for col in attack_columns:
hist_comments(grouped_dat, bins, col, 'Average %s Rating for onion_blocked Data' % col)
Explanation: Plot histogram of average ratings by revision
For each revision, we take the average of all the ratings by level of friendliness/aggressiveness and for each of the answers to Question 3. The histograms of these averages are displayed below.
End of explanation
def sorted_comments(df, sort_by, quartile, num, is_ascending = True):
n = df.shape[0]
start_index = int(quartile*n)
return df[['clean_diff', 'aggression_score',
'not_attack', 'other', 'quoting', 'recipient', 'third_party']].sort_values(
by=sort_by, ascending = is_ascending)[start_index:start_index + num]
# Most aggressive comments
sorted_comments(grouped_dat, 'aggression_score', 0, 5)
# Median aggressive comments
sorted_comments(grouped_dat, 'aggression_score', 0.5, 5)
# Least aggressive comments
sorted_comments(grouped_dat, 'aggression_score', 0, 5, False)
Explanation: Selected harassing and aggressive revisions by quartile
We look at a sample of revisions whose average aggressive score falls into various quantiles. This allows us to subjectively evaluate the quality of the questions that we are asking on Crowdflower.
End of explanation
# Most aggressive comments which are labelled 'This is not an attack or harassment.'
sorted_comments(grouped_dat[grouped_dat['not_attack'] > 0.6], 'aggression_score', 0, 5)
# Most aggressive comments which are labelled 'Being reported or quoted (i.e. Bob said Henri sucks).'
sorted_comments(grouped_dat[grouped_dat['quoting'] > 0.3], 'aggression_score', 0, 5)
# Most aggressive comments which are labelled 'Another kind of attack or harassment.'
sorted_comments(grouped_dat[grouped_dat['other'] > 0.5], 'aggression_score', 0, 5)
# Most aggressive comments which are labelled 'Targeted at a third party (i.e. Bob sucks).'
sorted_comments(grouped_dat[grouped_dat['third_party'] > 0.5], 'aggression_score', 0, 5)
# Least aggressive comments which are NOT labelled 'This is not an attack or harassment.'
sorted_comments(grouped_dat[grouped_dat['not_attack'] < 0.5], 'aggression_score', 0, 5, False)
Explanation: Selected revisions on multiple questions
In this section, we examine a selection of revisions by their answer to Question 3 and sorted by aggression score. Again, this allows us to subjectively evaluate the quality of questions and responses that we obtain from Crowdflower.
End of explanation
def add_row_to_coincidence(o, row, columns):
m_u = row.sum(1)
for i in columns:
for j in columns:
if i == j:
o[i][j] = o[i][j] + row[i]*(row[i]-1)/(m_u-1)
else:
o[i][j] = o[i][j] + row[i]*row[j]/(m_u-1)
return o
def make_coincidence_matrix(df, columns):
df = df[columns]
n = df.shape[0]
num_cols = len(columns)
o = pd.DataFrame(np.zeros((num_cols,num_cols)), index = columns, columns=columns)
for i in xrange(n):
o = add_row_to_coincidence(o, df[i:i+1], columns)
return o
def binary_distance(i,j):
return i!=j
def interval_distance(i,j):
return (int(i)-int(j))**2
def e(n, i, j):
    # Expected coincidences for Krippendorff's alpha: n_i*n_j/(n-1); note the parentheses around (sum(n)-1).
    if i == j:
        return n[i]*(n[i]-1)/(sum(n)-1)
    else:
        return n[i]*n[j]/(sum(n)-1)
def D_e(o, columns, distance):
n = o.sum(1)
output = 0
for i in columns:
for j in columns:
output = output + e(n,i,j)*distance(i,j)
return output
def D_o(o, columns, distance):
output = 0
for i in columns:
for j in columns:
output = output + o[i][j]*distance(i,j)
return output
def Krippendorf_alpha(df, columns, distance = binary_distance, o = None):
if o is None:
o = make_coincidence_matrix(df, columns)
d_o = D_o(o, columns, distance)
d_e = D_e(o, columns, distance)
return (1 - d_o/d_e)
print "Krippendorf's Alpha for Aggressiveness: "
#Krippendorf_alpha(grouped_dat, aggressive_columns, distance = interval_distance)
print "Krippendorf's Alpha for Attack: "
#Krippendorf_alpha(grouped_dat, ['not_attack_0', 'not_attack_1'])
Explanation: Inter-Annotator Agreement
Below, we compute the Krippendorf's Alpha, which is a measure of the inter-annotator agreement of our Crowdflower responses. We achieve an Alpha value of 0.668 on our dataset, which is a relatively good level of inter-annotator agreement for this type of subjective inquiry.
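For reference, with the coincidence-matrix formulation used above the expected coincidences are $e_{ij} = n_i n_j/(n-1)$ for $i \neq j$ and $e_{ii} = n_i(n_i-1)/(n-1)$, so that
\begin{equation}
\alpha = 1 - \frac{D_o}{D_e} = 1 - \frac{\sum_{i,j} o_{ij}\,\delta(i,j)}{\sum_{i,j} e_{ij}\,\delta(i,j)},
\end{equation}
which is what the D_o, D_e and Krippendorf_alpha functions above implement.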
End of explanation
# Get the timestamps of blocked events
block_timestamps = grouped_dat['block_timestamps'].apply(lambda x: x.split('PIPE'))
num_timestamps = block_timestamps.apply(len)
# Focus on those users who have only been blocked once
block_timestamps = block_timestamps[num_timestamps == 1]
# Convert to datetime
block_timestamps = [datetime.datetime.strptime(t[0], "%Y-%m-%dT%H:%M:%SZ") for t in block_timestamps]
# Get the timestamps and scores of the corresponding revisions
rev_timestamps = [datetime.datetime.strptime(t, "%Y-%m-%dT%H:%M:%SZ") for t in grouped_dat[num_timestamps == 1]['rev_timestamp']]
rev_score = grouped_dat[num_timestamps == 1]['aggression_score']
# Take the difference between the block timestamp and revision timestamp in seconds
diff_timestamps = [np.diff(x)[0].total_seconds() for x in zip(block_timestamps, rev_timestamps)]
x = pd.DataFrame(diff_timestamps)
y = pd.DataFrame(rev_score)
# Plot the aggressiveness score by relative time
plt.figure()
plt.plot(x,y,'bo')
plt.xlim(-1e6, 1e6)
# Seperate the revisions before and after a block event
after_revs = y[x.values > 0]
before_revs = y[x.values < 0]
# The mean aggressiveness of revisions after a block event
np.mean(after_revs)
# The mean aggressiveness of revisions before a block event
np.mean(before_revs)
# A t-test of whether there is a difference in aggressiveness before and after a block event
stats.ttest_ind(before_revs, after_revs, equal_var=False)
Explanation: T-Test of Aggressiveness
We explore whether aggressiveness changes in the tone of comments from immediately before a block event to immediately after.
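A complementary effect-size estimate (a rough sketch reusing the variables defined above; not part of the original analysis):
# Cohen's d with a pooled standard deviation; float() collapses the one-column results to scalars.
pooled_sd = np.sqrt((float(before_revs.var()) + float(after_revs.var())) / 2.0)
print((float(np.mean(after_revs)) - float(np.mean(before_revs))) / pooled_sd)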
End of explanation |
10,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Validation and Verification of the 25mm collimator simulation, GP3, PhSF
Here we provide code and output which verifies and validates the 25mm collimator simulation. We're using simulation phase space file output and input to check the validity of the result. Geometry of the source is 12mm in length and activity of the source 120Ci. Secondary collimator is shifted relative to primary/source by 0.13 degrees.
Step2: First, set filename to what we want to examine and read PhSF header
Step3: Checking PhSF header parameters
Step4: Convert all spatial coordinates to mm
Step5: Energy Spectrum tests
We expect the energy spectrum to be a scattering background together with peaks δ(E-1.17) and δ(E-1.33). Below we're trying to prove this statement. We will draw the distributions and histograms to estimate the influence of the background scattering and get the data about the δ-peaks.
We're filling energy histogram now, basic checks
We're building scale with 5 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 5
Step6: Underflow bin is close to empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Drawing Probability Density Function for 5 bins between 1.33 peak and 1.17 peak.
Step7: Filling energy histogram with double number of bins
We're building scale with 10 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 10
Step8: Underflow bin is close to empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Drawing Probability Density Function for 10 bins between 1.33 peak and 1.17 peak.
Step9: Filling energy histogram with quadruple number of bins
We're building scale with 20 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 20.
Step10: Underflow bin is close to empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Drawing Probability Density Function for 20 bins between 1.33 peak and 1.17 peak.
Step11: Comparing peak values
We compare the peak values at 10 bins with those at 5 bins. The presence of δ-peaks means that when we double the number of bins we should expect the peak values to roughly double.
Step12: The result is as expected. Only few percent of the values in the 1.33 and 1.17 MeV bins are due to scattered radiation. Most values are coming from primary source and are δ-peaks in energy.
Spatial Distribution tests
Here we will plot spatial distribution of the particles, projected from collimator exit position to the isocenter location at 38cm
Step13: NB
Step14: We find spatial distribution to be consistent with the collimation setup
Angular Distribution tests
Here we plot particles angular distribution for all three directional cosines, at the collimator exit. We expect angular distribution to fill collimation angle which is close to 0.017 radians (0.5x15/380). | Python Code:
import math
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import BEAMphsf
import beam_loader
import H1Dn
import H1Du
import ListTable
%matplotlib inline
def cm2mm(value):
    """converts cm to mm"""
return value*10.0
Explanation: Validation and Verification of the 25mm collimator simulation, GP3, PhSF
Here we provide code and output which verifies and validates the 25mm collimator simulation. We're using simulation phase space file output and input to check the validity of the result. Geometry of the source is 12mm in length and activity of the source 120Ci. Secondary collimator is shifted relative to primary/source by 0.13 degrees.
End of explanation
C = 25
phsfname = "C" + str(C) + "GP3p13" + ".egsphsp1"
phsfname = "../" + phsfname
print ("We're reading the {1}mm phase space file = {0}".format(phsfname, C))
Explanation: First, set filename to what we want to examine and read PhSF header
End of explanation
m, NPPHSP, NPHOTPHSP, EKMAX, EKMIN, NINCP = beam_loader.read_header_byname(phsfname)
print ("We're reading the {1}mm phase space file = {0}".format(phsfname, C))
if m == beam_loader.SHORT:
print("We have short MODE0 phase space file".format(m))
elif m == beam_loader.LONG:
print("We have long MODE2 phase space file".format(m))
print("Total nof of particle records: {0}".format(NPPHSP))
print("Total nof of photon records: {0}".format(NPHOTPHSP))
print("Maximum kinetic energy: {0} MeV".format(EKMAX))
print("Minimum kinetic energy: {0} MeV".format(EKMIN))
print("Number of original particles: {0}".format(NINCP))
print("Yield: {0}".format(NPHOTPHSP/NINCP))
events, nof_photons, nof_electrons, nof_positrons = beam_loader.load_events(phsfname, -1)
print("Number of loaded events: {0}".format(len(events)))
print("Number of loaded photons: {0}".format(nof_photons))
print("Number of loaded electrons: {0}".format(nof_electrons))
print("Number of loaded positrons: {0}".format(nof_positrons))
Explanation: Checking PhSF header parameters
End of explanation
evts = list()
for event in events:
wt = event[0]
e = event[1]
x = cm2mm(event[2])
y = cm2mm(event[3])
zl = cm2mm(event[4])
wx = event[5]
wy = event[6]
wz = event[7]
evts.append(((wt, e, x, y, zl, wx, wy, wz)))
events = evts
Explanation: Convert all spatial coordinates to mm
End of explanation
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 5
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
Explanation: Energy Spectrum tests
We expect the energy spectrum to be a scattering background together with peaks δ(E-1.17) and δ(E-1.33). Below we're trying to prove this statement. We will draw the distributions and histograms to estimate the influence of the background scattering and get the data about the δ-peaks.
We're filling energy histogram now, basic checks
We're building scale with 5 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 5
End of explanation
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_5 = Y[-2]
E117_5 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_5))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_5))
Explanation: Underflow bin is close to empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Drawing Probability Density Function for 5 bins between 1.33 peak and 1.17 peak.
End of explanation
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 10
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
Explanation: Filling energy histogram with double number of bins
We're building scale with 10 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 10
End of explanation
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_10 = Y[-2]
E117_10 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_10))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_10))
Explanation: Underflow bin is close to empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Drawing Probability Density Function for 10 bins between 1.33 peak and 1.17 peak.
End of explanation
# make scale with explicit bins at 1.17 MeV and 1.33 MeV
nbins = 20
scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001)
he = H1Dn.H1Dn(scale)
for e in events:
WT = e[0]
E = e[1]
he.fill(E, WT)
print("Number of events in histogram: {0}".format(he.nof_events()))
print("Integral in histogram: {0}".format(he.integral()))
print("Underflow bin: {0}".format(he.underflow()))
print("Overflow bin: {0}".format(he.overflow()))
Explanation: Filling energy histogram with quadruple number of bins
We're building scale with 20 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 20.
End of explanation
X = []
Y = []
W = []
scale = he.x()
n = len(scale)
norm = 1.0/he.integral()
sum = 0.0
for k in range (-1, he.size()+1):
x = 0.0
w = (he.lo() - x)
if k == he.size():
w = (scale[-1]-scale[-2])
x = he.hi()
elif k >= 0:
w = (scale[k+1] - scale[k])
x = scale[k]
d = he[k] # data from bin with index k
y = d[0] / w # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(w)
sum += y*w
print("PDF normalization: {0}".format(sum))
E133_20 = Y[-2]
E117_20 = Y[-2-nbins]
p1 = plt.bar(X, Y, W, color='r')
plt.xlabel('Energy(MeV)')
plt.ylabel('PDF of the photons')
plt.title('Energy distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
# saving peak values
print("Peak PDF value at 1.33 MeV: {0}".format(E133_20))
print("Peak PDF value at 1.17 MeV: {0}".format(E117_20))
Explanation: Underflow bin is close to empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT
Drawing Probability Density Function for 20 bins between 1.33 peak and 1.17 peak.
End of explanation
table = ListTable.ListTable()
table.append(["Nbins", "E=1.17", "E=1.33"])
table.append(["", "MeV", "MeV"])
table.append([5, 1.0, 1.0])
table.append([10, E117_10/E117_5, E133_10/E133_5])
table.append([20, E117_20/E117_5, E133_20/E133_5])
table
Explanation: Comparing peak values
We compare the peak values at 10 bins with those at 5 bins. The presence of δ-peaks means that when we double the number of bins we should expect the peak values to roughly double.
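As a rough back-of-the-envelope check of this expectation: if a peak bin of width $w$ holds a δ-peak of integrated probability $I$ plus an approximately flat scattered background of density $\rho$, its normalized height is
\begin{equation}
h(w) \approx \frac{I}{w} + \rho ,
\end{equation}
so halving $w$ nearly doubles the first term while leaving $\rho$ unchanged; the ratio $h(w/2)/h(w)$ therefore comes out slightly below 2, and the shortfall estimates the scattered fraction inside the peak bins.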
End of explanation
Znow = 200.0 # we are at 200 mm, the collimator exit
Zshot = 360.0 # shot isocenter is at 380mm
# radial, X and Y, all units in mm
hr = H1Du.H1Du(120, 0.0, 40.0)
hx = H1Du.H1Du(128, -32.0, 32.0)
hy = H1Du.H1Du(128, -32.0, 32.0)
for e in events:
WT = e[0]
xx, yy, zz = BEAMphsf.move_event(e, Znow, Zshot)
#xx = e[2]
#yy = e[3]
#zz = e[4]
r = math.sqrt(xx*xx + yy*yy)
hr.fill(r, WT)
hx.fill(xx, WT)
hy.fill(yy, WT)
print("Number of events in R histogram: {0}".format(hr.nof_events()))
print("Integral in R histogram: {0}".format(hr.integral()))
print("Underflow bin: {0}".format(hr.underflow()))
print("Overflow bin: {0}\n".format(hr.overflow()))
print("Number of events in X histogram: {0}".format(hx.nof_events()))
print("Integral in X histogram: {0}".format(hx.integral()))
print("Underflow bin: {0}".format(hx.underflow()))
print("Overflow bin: {0}\n".format(hx.overflow()))
print("Number of events in Y histogram: {0}".format(hy.nof_events()))
print("Integral in Y histogram: {0}".format(hy.integral()))
print("Underflow bin: {0}".format(hy.underflow()))
print("Overflow bin: {0}".format(hy.overflow()))
X = []
Y = []
W = []
norm = 1.0/hr.integral()
sum = 0.0
st = hr.step()
for k in range (0, hr.size()+1):
r_lo = hr.lo() + float(k) * st
r_hi = r_lo + st
r = 0.5*(r_lo + r_hi)
ba = math.pi * (r_hi*r_hi - r_lo*r_lo) # bin area
d = hr[k] # data from bin with index k
y = d[0] / ba # first part of bin is collected weights
y = y * norm
X.append(r)
Y.append(y)
W.append(st)
sum += y * ba
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, 0.0, color='b')
plt.xlabel('Radius(mm)')
plt.ylabel('PDF of the photons')
plt.title('Radial distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
Explanation: The result is as expected. Only few percent of the values in the 1.33 and 1.17 MeV bins are due to scattered radiation. Most values are coming from primary source and are δ-peaks in energy.
Spatial Distribution tests
Here we will plot spatial distribution of the particles, projected from collimator exit position to the isocenter location at 38cm
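Conceptually this projection is a straight-line propagation along each particle's direction; the helper below is only an illustrative sketch of that idea (not necessarily what BEAMphsf.move_event does internally):
def project_to_plane(x, y, z, wx, wy, wz, z_target):
    # Straight-line propagation to the plane z = z_target (assumes wz != 0).
    t = (z_target - z) / wz
    return x + t * wx, y + t * wy, z_target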
End of explanation
X = []
Y = []
W = []
norm = 1.0/hx.integral()
sum = 0.0
st = hx.step()
for k in range (0, hx.size()):
x_lo = hx.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = hx[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='b')
plt.xlabel('X(mm)')
plt.ylabel('PDF of the photons')
plt.title('X distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/hy.integral()
sum = 0.0
st = hy.step()
for k in range (0, hy.size()):
x_lo = hy.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = hy[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='b')
plt.xlabel('Y(mm)')
plt.ylabel('PDF of the photons')
plt.title('Y distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
Explanation: NB: peak at the far right above 40mm is overflow bin
End of explanation
# angular, WZ, WX and WY, all units in radians
h_wz = H1Du.H1Du(100, 1.0 - 0.05, 1.0)
h_wx = H1Du.H1Du(110, -0.055, 0.055)
h_wy = H1Du.H1Du(110, -0.055, 0.055)
for e in events:
WT = e[0]
wx = e[5]
wy = e[6]
wz = e[7]
h_wz.fill(wz, WT)
h_wx.fill(wx, WT)
h_wy.fill(wy, WT)
print("Number of events in WZ histogram: {0}".format(h_wz.nof_events()))
print("Integral in WZ histogram: {0}".format(h_wz.integral()))
print("Underflow bin: {0}".format(h_wz.underflow()))
print("Overflow bin: {0}\n".format(h_wz.overflow()))
print("Number of events in WX histogram: {0}".format(h_wx.nof_events()))
print("Integral in WX histogram: {0}".format(h_wx.integral()))
print("Underflow bin: {0}".format(h_wx.underflow()))
print("Overflow bin: {0}\n".format(h_wx.overflow()))
print("Number of events in WY histogram: {0}".format(h_wy.nof_events()))
print("Integral in WY histogram: {0}".format(h_wy.integral()))
print("Underflow bin: {0}".format(h_wy.underflow()))
print("Overflow bin: {0}".format(h_wy.overflow()))
X = []
Y = []
W = []
norm = 1.0/h_wz.integral()
sum = 0.0
st = h_wz.step()
for k in range (0, h_wz.size()+1):
x_lo = h_wz.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wz[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WZ')
plt.ylabel('PDF of the photons')
plt.title('Angular Z distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/h_wx.integral()
sum = 0.0
st = h_wx.step()
for k in range (0, h_wx.size()):
x_lo = h_wx.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wx[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WX')
plt.ylabel('PDF of the photons')
plt.title('Angular X distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
X = []
Y = []
W = []
norm = 1.0/h_wy.integral()
sum = 0.0
st = h_wy.step()
for k in range (0, h_wy.size()):
x_lo = h_wy.lo() + float(k)*st
x_hi = x_lo + st
x = 0.5*(x_lo + x_hi)
d = h_wy[k] # data from bin with index k
y = d[0] / st # first part of bin is collected weights
y = y * norm
X.append(x)
Y.append(y)
W.append(st)
sum += y*st
print("PDF normalization: {0}".format(sum))
p1 = plt.bar(X, Y, W, color='g')
plt.xlabel('WY')
plt.ylabel('PDF of the photons')
plt.title('Angular Y distribution')
plt.grid(True);
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
Explanation: We find spatial distribution to be consistent with the collimation setup
Angular Distribution tests
Here we plot particles angular distribution for all three directional cosines, at the collimator exit. We expect angular distribution to fill collimation angle which is close to 0.017 radians (0.5x15/380).
End of explanation |
10,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup the environment
Step1: Do the work to do the plotting
Step2: Print yields and plot all the templates
$B_s \to D_s^- \mu^+ \nu_{\mu} $
Step3: Print yields and plot all the templates
$B_s \to K^- \mu^+ \nu_{\mu} $
Step4: Combining/Removing Histograms
We can see that several of the templates have similar shapes.
The following can be combined or removed | Python Code:
import sys
sys.path.append('../../FourVector')
sys.path.append('../project')
from FourVector import FourVector
from ThreeVector import ThreeVector
from FutureColliderTools import SmearVertex, GetCorrectedMass, GetMissingMass2, GetQ2
from FutureColliderDataLoader import LoadData_KMuNu, LoadData_DsMuNu
from FutureColliderVariables import *
import numpy as np
import ROOT
ROOT.enableJSVis()
ROOT.gStyle.SetOptStat(0)
Explanation: Setup the environment
End of explanation
from prettytable import PrettyTable
def PlotHistograms(Filename):
#Due to python's garbage collection the Histograms, Stacks, Files etc. need to be stored somewhere.
global Histograms
global Stacks
global Files
global legend
global Textarr
canvas.Divide(3,3)
Histograms_norm = []
table = PrettyTable()
for it, resolution in enumerate(np.linspace(0.3, 1.0, 8)):
# Open the histogram file for reading
File_Toys = ROOT.TFile.Open(Filename.format(resolution), "READ")
Files += [File_Toys]
KeyList = [ Key for Key in File_Toys.GetListOfKeys() if "MCORR" in Key.GetName() ]
Histograms_norm += [[ Key.ReadObj().Clone() for Key in KeyList if "_norm" in Key.GetName()]]
Histograms += [[ Key.ReadObj().Clone() for Key in KeyList if "_norm" not in Key.GetName()]]
# On the first loop, fill the Legend, and add the first column to the Table
if it == 0:
HistogramNames = [Hist.GetName().replace("MCORR_", "").replace("Combinatorial", "C") for Hist in Histograms[0] ]
table.add_column("", HistogramNames)
HistogramNames = [Hist.GetTitle() for Hist in Histograms_norm[0] ]
for Name, Hist in zip(HistogramNames, Histograms_norm[0]):
legend.AddEntry(Hist, Name, "lep")
HistogramYields = [int(Hist.Integral()) for Hist in Histograms[-1] ]
table.add_column(str(resolution), HistogramYields)
#Make a stack of the Historams
Stack = ROOT.THStack()
Color = 1
for Hist in Histograms_norm[it]:
#Hist.Sumw2(False)
Hist.SetDirectory(0)
Hist.SetLineColor(Color)
Hist.SetFillStyle(0)
Color+=1
Stack.Add(Hist)
Stacks += [Stack]
canvas.cd(it+1)
Stack.Draw("nostack")
Text = ROOT.TPaveText(0.1,0.7,0.5,0.9, "NDC")
Text.SetFillColor(0)
Text.SetBorderSize(0)
Text.AddText("#sigma (Vertex) = "+str(resolution)+" #dot #sigma (LHCb)")
Text.Draw()
Textarr += [Text]
canvas.cd(9)
legend.Draw()
canvas.DrawClone()
print table
Explanation: Do the work to do the plotting
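The module-level globals above are one way to keep ROOT objects alive after the function returns; an alternative sketch (not used in this notebook) is to hand ownership of each drawn object to ROOT so Python does not delete it:
# e.g. inside PlotHistograms, right after creating an object that must outlive the function:
ROOT.SetOwnership(Stack, False)   # keep the THStack alive after the function returns
ROOT.SetOwnership(Text, False)    # same for the TPaveText labels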
End of explanation
canvas = ROOT.TCanvas("c1", "c1", 900, 900)
Histograms = []
Stacks = []
Files = []
Textarr = []
legend = ROOT.TLegend(0.101,0.101,0.899,0.899);
PlotHistograms("../output/Source_Histograms_DsMu_{0}_LHCb.root")
canvas.Draw()
Explanation: Print yields and plot all the templates
$B_s \to D_s^- \mu^+ \nu_{\mu} $
End of explanation
canvas = ROOT.TCanvas("c1", "c1", 900, 900)
Histograms = []
Stacks = []
Files = []
legend = ROOT.TLegend(0.101,0.1,0.9,0.9);
PlotHistograms("../output/Source_Histograms_KMu_{0}_LHCb.root")
canvas.Draw()
Explanation: Print yields and plot all the templates
$B_s \to K^- \mu^+ \nu_{\mu} $
End of explanation
canvas = ROOT.TCanvas("c1", "c1", 900, 900)
Histograms = []
Stacks = []
Files = []
legend = ROOT.TLegend(0.1,0.1,0.9,0.9);
PlotHistograms("../output/Source_Histograms_DsMu_{0}_LHCb_Merged.root")
canvas.Draw()
canvas = ROOT.TCanvas("c1", "c1", 900, 900)
Histograms = []
Stacks = []
Files = []
legend = ROOT.TLegend(0.1,0.1,0.9,0.9);
PlotHistograms("../output/Source_Histograms_KMu_{0}_LHCb_Merged.root")
canvas.Draw()
Explanation: Combining/Removing Histograms
We can see that several of the templates have similar shapes.
The following can be combined or removed:
For $B_s \to D_s^- \mu^+ \nu_{\mu} $
Both $B_s \to D_s \tau X$ modes
Both Further Excited Ds* resonance
For $B_s \to K^+ \mu^+ \nu_{\mu} $
Both $B_s \to D_s \tau X$ modes
Both Combinatorial Modes
Both Further Excited Ds* resonances and the Lambda Mode
$B_s \to D_s \mu^+ \nu_{\mu} $ and $B_s \to D_s^* \mu^+ \nu_{\mu} $
End of explanation |
10,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here, we transform some strings to lowercase. This is because there are duplicate entries in the dataset which appear in both upper and lower case.
This increases redundancy
Step1: There is still a lot of redundancy we can exploit. We can generalize these strings by reducing specialized strings into a more general form.
For example 'software engineer, senior' needs to be reduced to 'software engineer'.
This also applies to the other columns with string attributes.
Step2: We have reduced the number of names down to 1585
Step3: This person messed up the SOC_NAME
Step4: We have now cut the number of names in half from the original number.
Step5: Stripping characters did not help much.
At this point I investigated a spellchecker in Python but was not able to get something to work within the environment.
I am now going to consider removing entries that are unique, with a count of one.
Step6: There's a problem: I am not getting a real representation of the occurrence of names in the data. I now need to do something where I can get the actual number of occurrences.
Step7: Going to go back to this df and create a copy. Will then overwrite the SOC_NAME with the reduced name I have.
Step8: Test code to test algorithm | Python Code:
cleandata1['SOC_NAME'].value_counts()
Explanation: Here, we transform some strings to lowercase. This is because there are duplicate entries in the dataset which appear in both upper and lower case.
This increases redundancy
End of explanation
cleandata1['SOC_NAME'].value_counts().count()
Explanation: There is still a lot of redundancy we can exploit. We can generalize these strings by reducing specialized strings into a more general form.
For example 'software engineer, senior' needs to be reduced to 'software engineer'.
This also applies to the other columns with string attributes.
End of explanation
reducedf = pd.DataFrame({'SOC_NAME': cleandata1['SOC_NAME'].value_counts().index, 'Count':cleandata1['SOC_NAME'].value_counts().values})
#df['Counts'] = df.groupby(['SOC_NAME'])['Count'].transform('count') #I don't remember what I was trying to do here.
#df = df.set_index(['SOC_NAME'])
reducedf
reducedf['Name1'] = ''
reducedf
reducedf.iloc[3]['Count'] #example of accessing a location
%%timeit
for index, row in reducedf.iterrows():
names = row['SOC_NAME'].split(",")
if(names[0].endswith('*')):
reducedf.set_value([index],['Name1'],(names[0][:-1]))
if not (names[0].endswith('s')):
reducedf.set_value([index],['Name1'],(names[0]+'s'))
else:
reducedf.set_value([index],['Name1'],names[0])
reducedf
cleandata1['SOC_NAME'].value_counts().count()
(cleandata1.loc[(cleandata1['SOC_NAME']=='software developers, appllications')]) #an example of a query
Explanation: We have reduced the number of names down to 1585
End of explanation
reducedf['Name1'].value_counts()
reducedf['Name1'].value_counts().count()
Explanation: This person messed up the SOC_NAME
End of explanation
reducedf.sort_values(['Name1'])
reducedf['Name2'] = ""
%%timeit
regex = re.compile('[^a-z\s]')
for index, row in reducedf.iterrows():
reducedf.set_value([index],['Name2'],(regex.sub('', row['Name1'])))
reducedf.sort_values(['Name1'])
reducedf['Name2'].value_counts().count()
Explanation: We have now cut the number of names in half from the original number.
End of explanation
dfName2Check = pd.DataFrame({'Name2': reducedf['Name2'].value_counts().index, 'Count':reducedf['Name2'].value_counts().values})
dfName2Check
Explanation: Stripping characters did not help much.
At this point I investigated a spellchecker in Python but was not able to get something to work within the environment.
I am now going to consider removing entries that are unique, with a count of one.
End of explanation
cleandata1
Explanation: There's a problem: I am not getting a real representation of the occurrence of names in the data. I now need to do something where I can get the actual number of occurrences.
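One way to get the true occurrence counts (a sketch, not from the original notebook) is to aggregate the per-SOC_NAME counts by the reduced name instead of counting unique strings:
# Sum the original occurrence counts for each reduced name.
true_counts = reducedf.groupby('Name2')['Count'].sum().sort_values(ascending=False)
true_counts.head()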
End of explanation
cleandata2 = cleandata1.copy()
name = cleandata2.iloc[3002440]['SOC_NAME'] #example of accessing a location
print(name)
newname = reducedf.loc[(reducedf['SOC_NAME']==name)]
newname1 = newname.iloc[0]['Name2']
print(newname1)
Explanation: Going to go back to this df and create a copy. Will then overwrite the SOC_NAME with the reduced name I have.
End of explanation
for index, row in cleandata2.iterrows():
name = row['SOC_NAME']
newname = reducedf.loc[(reducedf['SOC_NAME']==name)]
newname1 = newname.iloc[0]['Name2']
cleandata2.set_value([index],['SOC_NAME'],newname1)
Explanation: Test code to test algorithm
End of explanation |
10,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2/1/17
FEProblemBase
Step1: Jacobian calculations related to deviatoric stress tensor ($\hat{\tau}$) and rate of strain tensor ($\hat{\epsilon}$)
Note that the total stress tensor ($\hat{\sigma}$) is equal to the sum of the deviatoric stress tensor ($\hat{\tau}$) and the stress induced by pressure ($-p\hat{I}$), e.g.
\begin{equation}
\hat{\sigma} = \hat{\tau} - p\hat{I}
\end{equation}
Step2: This is an example of an off-diagonal jacobian computation
Step3: Now let's look at an off diagonal-term for
Step4: Hmm...that's not very revealing...this result is completely symmetric...it doesn't tell me what the code implementation should be. Let's try 3D in order to elucidate
Step5: Alright, it looks like we get the normal components corresponding to residual $i$ and derivative variable $j$!!! Boom!
Step6: 5/3/17
Step7: 5/10/17
Ok, the analytic_turbulence problem sucks. Even if I start with Dirichlet boundary conditions on all boundaries and initial conditions representing the supposed analytic solution, and then run a transient simulation, the solution evolves away from the supposed analytic solution. backwards_step_adaptive.i runs to completion, but that's for a relatively low inlet velocity.
Getting some pretty good results now also with backwards_step_adaptive_inlet_v_100.i which I wasn't a few days before. This could perhaps be due to the introduction of the SUPG terms. Convergence becomes a little slow at longer time steps, perhaps because of incomplete Jacobian implementation? Or poor relative scaling of the variables? Results for kin actually don't look too far off from the results in the Kuzmin paper. This simulation uses a Reynolds number of 100, which is still pretty small! Next effort will be with the Reynolds number in the Kuzmin paper of 47,625.
It's something I've observed over the years that decreasing element size can lead to decreasing solver convergence. Note that I'm not talking about convergence to the true solution. I wish I could find a good piece of literature discussing this phenomenon. There are just so many things to consider about a finite element solve; it can be fun at times and frustrating at others.
5/11/17
Ok, going to do some methods of manufactured solutions!
Step8: Momentum equations
Step9: Traction Form
Step10: Laplace Form
Step11: Pressure equation
Step12: Or testing with a simple diffusion term
Step13: Turbulent kinetic energy equation
Step14: Turbulent dissipation
Step15: Simple diffusion
Step16: Verified RANS kernels
INSK
INSEpsilon
INSMomentumTurbulentViscosityTractionForm
INSMomentumTurbulentViscosityLaplaceForm
INSMomentumShearStressWallFunction with |u|/yStarPlus branch of uTau
with exp_form = false.
Step17: Ok, so apparently just scaling every variable's manufactured solution by 200 causes MOOSE convergence issues. Sigh
Step18: 5/16/17
Step19: Tasks accomplished today | Python Code:
import sympy as sp
sxx, sxy, syx, syy, nx, ny = sp.var('sxx sxy syx syy nx ny')
s = sp.Matrix([[sxx, sxy],[syx, syy]])
n = sp.Matrix([nx, ny])
s*n
prod = n.transpose()*s*n
prod2 = n.transpose()*(s*n)
print(prod)
print(prod2)
print(prod==prod2)
prod.shape
sp.expand(prod) == sp.expand(prod2)
lhs = n.transpose()*s
print(lhs.shape)
rhs = (n.transpose() * s * n) * n.transpose()
print(rhs.shape)
rhs2 = (n.transpose()*s) * (n*n.transpose())
print(rhs2)
rhs3 = n.transpose() * (s*n*n.transpose())
print(sp.expand(rhs) == sp.expand(rhs2) == sp.expand(rhs3))
print(n*n.transpose())
print(n.transpose()*n)
print(sp.simplify(lhs))
print(sp.simplify(rhs))
elml = lhs[0,0]
elmr = rhs[0,0]
print(elml.expand())
print(elmr.expand())
elmr.expand()
elmr.expand().subs(nx, sp.sqrt(1 - ny**2))
elmr.expand().subs(nx, sp.sqrt(1 - ny**2)).simplify()
help(expr.replace)
t = lhs - rhs
print(t)
t1 = t[0,0]
t2 = t[0,1]
print(t1)
print(t2)
t1.simplify()
ddx, ddy, ux, uy = sp.var('ddx ddy ux uy')
grad = sp.Matrix([ddx,ddy])
u = sp.Matrix([ux,uy])
print(grad.shape)
phij,mu = sp.var('phij mu')
uDuxj = sp.Matrix([phij,0])
uDuyj = sp.Matrix([0,phij])
grad*u.transpose()
jacx = n.transpose() * (mu * (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose())) * n
print(jacx)
sp.expand(jacx[0,0])*nx
jacy = n.transpose() * (mu * (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose())) * n
print(jacy)
sp.expand(jacy[0,0])*ny
sp.factor(jacy[0,0])
print(sp.factor((jacx[0,0]*n.transpose())[0,0]))
print(sp.factor((jacy[0,0]*n.transpose())[0,1]))
sJacX = mu * (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose())
sJacY = mu * (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose())
print(sJacX)
print(sJacY)
print(sp.factor((n.transpose()*sJacX)[0,0]))
print(sp.factor((n.transpose()*sJacY)[0,1]))
jacx.shape
Explanation: 2/1/17
FEProblemBase::reinitMaterials only calls property computation for Material objects currently active on the given subdomain so that's good. However, it's possible that material objects "active" on the subdomain aren't actually being used in any computing objects like kernels, etc. So we would like to do some additional checking.
Alright, let's say we're computing the residual thread. Then assuming we cannot compute properties in a material in isolation, we would like to do the next best thing: only call computeQpProperties for materials that have actually been asked to supply properties to kernels, dg_kernels, boundary_conditions, and interface_kernels.
So what am I doing as of commit d7dbfe5? I am determining the needed_mat_props through ComputeResidualThread::subdomainChanged() -> FEProblemBase::prepareMaterials. In the latter method, we first ask all materials--if there are any materials active on the block--to update their material property dependencies and then we ask materials on the boundaries of the subdomain to also update their dependencies. Note that this could lead to a boundary material object getting asked to update their material property dependencies twice because we first pass to MaterialWarehouse::updateMatPropDependenceyHelper all material objects as long as there is any one material object active in a block sense on the subdomain, and then we pass active material boundary objects. But this overlap doesn't matter so much because our needed_mat_props is a set, so if we try to insert the same material properties multiple times, it will silently and correctly fail. Note, however, that this could also pass needed_mat_props from material objects not on the current block, so that needs to be changed.
So what happens in MaterialWarehouse::updateMatPropDependencyHelper? We add mp_deps from MaterialPropertyInterface::getMatPropDependencies. However, it should be noted that this is only done for <i>material objects</i>. Is this fine? Well let's figure it out. It returns _material_property_dependencies which is a set added to by calling addMatPropDependency. Now this gets called when the object that inherits from MaterialPropertyInterface calls its own getMaterialProperty method. So I hypothesize that in my simple test, if I ask to have a material property in my kernel object with getMaterialProperty that will not register in any material objects list of _material_property_dependencies and consequently computeQpProperties will never get called. I will test that the next time I sit down at my comp.
2/2/17
Three tests:
Run with one material that doesn't supply any properties. Desired behavior: computeQpProperties does not get called. Expected to Pass. With devel MOOSE: expected to Fail (expected change)
Run two materials, one that supplies properties, another that does not. Desired behavior: computeQpProperties does not get called for the material not supplying properties while the other one does. Expected behavior: both materials compute methods get called. Fail. With devel MOOSE: expected to Fail (expected to not change)
Run with a kernel that uses a material property and an elemental aux variable that does not. Desired behavior: computeQpProperties should get called through the residual and jacobian threads but not through the aux kernel thread. Expected to Pass. With devel MOOSEE: expected to Fail (expected change)
Calls to computeProperties:
ComputeResidualThread
ComputeResidualThread
0th nonlinear residual printed
ComputeJacobianThread
ComputeResidualThread
0th linear residual printed
ComputeResidualThread
1st linear residual printed
ComputeResidualThread
ComputeResidualThread
1st nonlinear residual printed
ComputeElemAuxVarsThread -> Actually this is fine because this is the Aux Kernel that is created for outputting the material property
Number of calls: 8
1. 1-4
2. 5-8
...
7. 25-28
8. 29-32
Failed tests:
random.material_serial
controls*
Failed but now passing:
element_aux_boundary
bnd_material_test
elem_aux_bc_on_bound
output.boundary
multiplicity
material_point_source_test
line_material_sampler
2/6/17
Ok my new test is failing with threads and I don't really know why. It seems like the number of calls to computing threads should be the same...
Calls to computeProperties:
ComputeResidualThread
ComputeResidualThread
0th nonlinear residual printed
ComputeJacobianThread
ComputeResidualThread
0th linear residual printed
ComputeResidualThread
1st linear residual printed
ComputeResidualThread
ComputeResidualThread
1st nonlinear residual printed
ComputeElemAuxVarsThread
Yep so thread computing pattern is the exact same. How about whether the material is the same location in memory every time?
0x7fed90 (1, 2, 8)
Increments:
1-4, 5-8, 9-12 -> average of 10.5 which is what is observed in the output
0x810b10 (3, 4, 5, 6, 7)
4/28/17
Navier Stokes module development
End of explanation
import sympy as sp
sxx, sxy, syx, syy, nx, ny, mu = sp.var('sxx sxy syx syy nx ny mu')
ddx, ddy, ux, uy = sp.var('ddx ddy ux uy')
grad = sp.Matrix([ddx,ddy])
u = sp.Matrix([ux,uy])
phij,mu = sp.var('phij mu')
uDuxj = sp.Matrix([phij,0])
uDuyj = sp.Matrix([0,phij])
rateOfStrain = (grad*u.transpose() + (grad*u.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uxj = (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uyj = (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose()) * 1 / 2
print(rateOfStrain)
print(d_rateOfStrain_d_uxj)
print(d_rateOfStrain_d_uyj)
tau = rateOfStrain * 2 * mu
d_tau_d_uxj = d_rateOfStrain_d_uxj * 2 * mu
d_tau_d_uyj = d_rateOfStrain_d_uyj * 2 * mu
print(tau)
print(d_tau_d_uxj)
print(d_tau_d_uyj)
normals = sp.Matrix([nx,ny])
y_component_normal = sp.Matrix([0,ny])
x_component_normal = sp.Matrix([nx,0])
test = sp.var('test')
test_x = sp.Matrix([test,0])
test_y = sp.Matrix([0,test])
Explanation: Jacobian calculations related to deviatoric stress tensor ($\hat{\tau}$) and rate of strain tensor ($\hat{\epsilon}$)
Note that the total stress tensor ($\hat{\sigma}$) is equal to the sum of the deviatoric stress tensor ($\hat{\tau}$) and the stress induced by pressure ($-p\hat{I}$), i.e.
\begin{equation}
\hat{\sigma} = \hat{\tau} - p\hat{I}
\end{equation}
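For reference, the rate-of-strain tensor and the deviatoric stress encoded in the SymPy blocks below are
\begin{equation}
\hat{\epsilon} = \frac{1}{2}\left(\nabla\vec{u} + \nabla\vec{u}^{T}\right), \qquad \hat{\tau} = 2\mu\,\hat{\epsilon}
\end{equation}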
End of explanation
normals.transpose() * d_tau_d_uxj * test_y
Explanation: This is an example of an off-diagonal jacobian computation: derivative with respect to $x$ while test function corresponds to $y$
Specifically this corresponds to an off-diagonal contribution corresponding to the residual term:
\begin{equation}
\vec{n}^T \cdot \hat{\tau} \cdot \vec{v}_y
\end{equation}
End of explanation
sp.factor(normals.transpose() * d_tau_d_uxj * normals * normals.transpose() * test_y)
Explanation: Now let's look at an off-diagonal term for:
\begin{equation}
\left(\vec{n}^T \cdot \hat{\tau} \cdot \vec{n} \right) \vec{n}^T \cdot \vec{v}_y
\end{equation}
End of explanation
import sympy as sp
nx, ny, nz, mu, phij, ddx, ddy, ddz, ux, uy, uz = sp.var('nx ny nz mu phij ddx ddy ddz ux uy uz')
grad = sp.Matrix([ddx,ddy,ddz])
u = sp.Matrix([ux, uy, uz])
uDuxj = sp.Matrix([phij,0,0])
uDuyj = sp.Matrix([0,phij,0])
uDuzj = sp.Matrix([0,0,phij])
rateOfStrain = (grad*u.transpose() + (grad*u.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uxj = (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uyj = (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uzj = (grad*uDuzj.transpose() + (grad*uDuzj.transpose()).transpose()) * 1 / 2
print(rateOfStrain)
print(d_rateOfStrain_d_uxj)
print(d_rateOfStrain_d_uyj)
print(d_rateOfStrain_d_uzj)
tau = rateOfStrain * 2 * mu
d_tau_d_uxj = d_rateOfStrain_d_uxj * 2 * mu
d_tau_d_uyj = d_rateOfStrain_d_uyj * 2 * mu
d_tau_d_uzj = d_rateOfStrain_d_uzj * 2 * mu
print(tau)
print(d_tau_d_uxj)
print(d_tau_d_uyj)
print(d_tau_d_uzj)
normals = sp.Matrix([nx,ny,nz])
test = sp.var('test')
test_x = sp.Matrix([test,0,0])
test_y = sp.Matrix([0,test,0])
test_z = sp.Matrix([0,0,test])
sp.factor(normals.transpose() * d_tau_d_uxj * normals * normals.transpose() * test_y)
sp.factor(normals.transpose() * d_tau_d_uxj * normals * normals.transpose() * test_z)
sp.factor(normals.transpose() * d_tau_d_uyj * normals * normals.transpose() * test_x)
Explanation: Hmm...that's not very revealing...this result is completely symmetric...it doesn't tell me what the code implementation should be. Let's try 3D in order to elucidate
End of explanation
(normals.transpose() * tau)[0]
sp.factor(_)
Explanation: Alright, it looks like we get the normal components corresponding to residual $i$ and derivative variable $j$!!! Boom!
End of explanation
from scipy.special import erf
from numpy import exp, sqrt, pi
import numpy as np
def u(x, y, u1, u2, sigma):
return (u1 + u2) / 2. - (u1 - u2) / 2. * erf(sigma * y / x)
def v(x, y, u1, u2, sigma):
return (u1 - u2) / (2. * sigma * sqrt(pi)) * exp(-(sigma * y / x)**2)
def p():
return 0
def k(x, y, k0, sigma):
return k0 * exp(-(sigma * y / x)**2)
def epsilon(x, y, epsilon0, sigma):
return epsilon0 / x * exp(-(sigma * y / x)**2)
def muT(x, y, muT0, sigma):
return muT0 * x * exp(-(sigma * y / x)**2)
def k0(u1, u2, sigma):
return 343. / 75000. * u1 * (u1 - u2) * sigma / sqrt(pi)
def epsilon0(u1, u2, sigma, Cmu):
return 343. / 22500. * Cmu * u1 * (u1 - u2)**2 * sigma**2 / pi
def muT0(u1, rho):
return 343. / 250000. * rho * u1
def Re(rho, u1, L, mu):
return rho * u1 * L / mu
u1 = 1
u2 = 0
sigma = 13.5
Cmu = 0.9
x = np.arange(10, 100.5, .5)
y = np.arange(-30, 30.5, .5)
x,y = np.meshgrid(x, y)
uplot = u(x, y, u1, u2, sigma)
vplot = v(x, y, u1, u2, sigma)
kplot = k(x, y, k0(u1, u2, sigma), sigma)
epsPlot = epsilon(x, y, epsilon0(u1, u2, sigma, Cmu), sigma)
muTplot = muT(x, y, muT0(u1, 1), sigma)
import matplotlib.pyplot as plt
plt.pcolor(x, y, uplot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, vplot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, kplot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, epsPlot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, muTplot)
plt.colorbar()
plt.show()
import sympy as sp
from sympy import diff
x, y, sigma, Cmu, rho, mu, k0, eps0 = sp.var('x y sigma Cmu rho mu k0 eps0')
def gradVec2(u_vec, x, y):
return sp.Matrix([[diff(u_vec[0], x), diff(u_vec[1],x)], [diff(u_vec[0], y), diff(u_vec[1], y)]])
def divTen2(tensor, x, y):
return sp.Matrix([diff(tensor[0,0], x) + diff(tensor[1,0], y), diff(tensor[0, 1], x) + diff(tensor[1,1], y)])
def divVec2(u_vec, x, y):
return diff(u_vec[0], x) + diff(u_vec[1], y)
u = (1 - sp.erf(sigma * y / x)) / 2
v = sp.exp(-(sigma * y / x)**2) / 2 / sigma / sp.sqrt(sp.pi)
k = k0 * sp.exp(-(sigma * y / x)**2)
eps = eps0 / x * sp.exp(-(sigma * y / x)**2)
muT = rho * Cmu * k**2 / eps
u_vec = sp.Matrix([u, v])
grad_u_vec = gradVec2(u_vec, x, y)
visc_term = divTen2((mu + muT) * (grad_u_vec + grad_u_vec.transpose()), x, y)
print(sp.simplify(divVec2(u_vec, x, y)))
visc_term
visc_term.shape
momentum_equations = rho * u_vec.transpose() * grad_u_vec - visc_term.transpose()
u_eq = momentum_equations[0]
v_eq = momentum_equations[1]
sp.simplify(v_eq)
sp.simplify(u_eq)
sp.collect(u_eq, x)
u_eq = u_eq.subs(k0, sigma / sp.sqrt(sp.pi) * 343 / 75000)
print(u_eq)
u_eq = u_eq.subs(eps0, Cmu * sigma**2 / sp.pi * 343 / 22500)
print(u_eq)
sp.simplify(u_eq)
grad_u_vec = sp.Matrix([[diff(u, x), diff(v, x)], [diff(u, y), diff(v, y)]])
grad_u_vec
clear(pi)
from sympy.physics.vector import ReferenceFrame
R = ReferenceFrame('R')
u = (1 - sp.erf(sigma * R[1] / R[0])) / 2
v = sp.exp(sigma * R[1] / R[0]) / 2 / sigma / sp.sqrt(pi)
k = k0 * sp.exp(-(sigma * R[1] / R[0])**2)
eps = eps0 / R[0] * sp.exp(-(sigma * R[1] / R[0])**2)
muT = rho * Cmu * k**2 / eps
u_vec[0]
grad_u_vec = gradVec2(u_vec)
from scipy.special import erf
erf(2)
erf(-1)
erf(.99)
from numpy import pi, sqrt, exp
def d_erf(x):
return 2. / sqrt(pi) * exp(-x**2)
def d_half_erf(x):
return 2. / sqrt(pi) * exp(-(0.5*x)**2) * 0.5
d_half_erf(-2)
d_erf(-1)
print(pi)
d_erf(0)
d_erf(1)
import numpy as np
libmesh = np.loadtxt("/home/lindsayad/projects/moose/libmesh/contrib/fparser/examples/first_orig.dat")
libmesh.shape
xl = libmesh[:,0]
ypl = libmesh[:,1]
xt = np.arange(-1,1,.01)
yt = d_erf(xt)
import matplotlib.pyplot as plt
plt.close()
plt.plot(xl, ypl, label="libmesh")
plt.plot(xt, yt, label='true')
plt.legend()
plt.show()
plt.close()
plt.plot(xl, yt / ypl)
plt.show()
Explanation: 5/3/17
End of explanation
from sympy import *
x, y, L = var('x y L')
from random import randint, random, uniform
for i in range(30):
print('%.2f' % uniform(.1, .99))
from random import randint, random, uniform
def sym_func(x, y, L):
return round(uniform(.1, .99),1) + round(uniform(.1, .99),1) * sin(round(uniform(.1, .99),1) * pi * x / L) \
+ round(uniform(.1, .99),1) * sin(round(uniform(.1, .99),1) * pi * y / L) \
+ round(uniform(.1, .99),1) * sin(round(uniform(.1, .99),1) * pi * x * y / L)
u = sym_func(x, y, 1)
v = sym_func(x, y, 1)
p = sym_func(x, y, 1)
k = sym_func(x, y, 1)
eps = sym_func(x, y, 1)
print(u, v, p, k, eps, sep="\n")
import sympy as sp
def gradVec2(u_vec, x, y):
return sp.Matrix([[diff(u_vec[0], x), diff(u_vec[1],x)], [diff(u_vec[0], y), diff(u_vec[1], y)]])
def divTen2(tensor, x, y):
return sp.Matrix([diff(tensor[0,0], x) + diff(tensor[1,0], y), diff(tensor[0, 1], x) + diff(tensor[1,1], y)])
def divVec2(u_vec, x, y):
return diff(u_vec[0], x) + diff(u_vec[1], y)
def gradScalar2(u, x, y):
return sp.Matrix([diff(u, x), diff(u,y)])
def strain_rate(u_vec, x, y):
return gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
def strain_rate_squared_2(u_vec, x, y):
tensor = gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
rv = 0
for i in range(2):
for j in range(2):
rv += tensor[i, j] * tensor[i, j]
return rv
def laplace2(u, x, y):
return diff(diff(u, x), x) + diff(diff(u, y), y)
Explanation: 5/10/17
Ok, the analytic_turbulence problem sucks. Even if I start with Dirichlet boundary conditions on all boundaries and initial conditions representing the supposed analytic solution, and then run a transient simulation, the solution evolves away from the supposed analytic solution. backwards_step_adaptive.i runs to completion, but that's for a relatively low inlet velocity.
Getting some pretty good results now also with backwards_step_adaptive_inlet_v_100.i which I wasn't a few days before. This could perhaps be due to the introduction of the SUPG terms. Convergence becomes a little slow at longer time steps, perhaps because of incomplete Jacobian implementation? Or poor relative scaling of the variables? Results for kin actually don't look too far off from the results in the Kuzmin paper. This simulation uses a Reynolds number of 100, which is still pretty small! Next effort will be with the Reynolds number in the Kuzmin paper of 47,625.
It's something I've observed over the years that decreasing element size can lead to decreasing solver convergence. Note that I'm not talking about convergence to the true solution. I wish I could find a good piece of literature discussing this phenomenon. There are just so many things to consider about a finite element solve; it can be fun at times and frustrating at others.
5/11/17
Ok, going to do some methods of manufactured solutions!
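The idea in brief: pick an analytic (manufactured) solution $u^{*}$, push it through the governing operator $L$ symbolically, and add the resulting residual as a forcing term so that $u^{*}$ becomes an exact solution of the forced problem,
\begin{equation}
f = L(u^{*}), \qquad L(u) = f \implies u = u^{*},
\end{equation}
which is exactly what the source-term functions computed below supply.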
End of explanation
pnew = Integer(0)
type(pnew)
Explanation: Momentum equations
End of explanation
cmu = 0.09
uvec = sp.Matrix([u, v])
mu, rho = var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose(), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose()), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
print(source[0])
print(source[1])
Explanation: Traction Form
End of explanation
cmu = 0.09
uvec = sp.Matrix([u, v])
mu, rho = var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y)), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
print(source[0])
print(source[1])
Explanation: Laplace Form
End of explanation
-divVec2(uvec, x, y)
Explanation: Pressure equation
End of explanation
diff_term = -laplace2(p, x, y)
print(diff_term)
Explanation: Or testing with a simple diffusion term
End of explanation
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(k, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigk) * gradScalar2(k, x, y), x, y)
creation_term = - rho * cmu * k**2 / eps / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * eps
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
print(L)
Explanation: Turbulent kinetic energy equation
End of explanation
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(eps, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigeps) * gradScalar2(eps, x, y), x, y)
creation_term = - rho * c1eps * cmu * k / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * c2eps * eps**2 / k
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
print(L)
Explanation: Turbulent dissipation
End of explanation
diff_term = -laplace2(u, x, y)
print(diff_term)
def z(func, xh, yh):
u = np.zeros(xh.shape)
for i in range(0,xh.shape[0]):
for j in range(0,xh.shape[1]):
u[i][j] = func.subs({x:xh[i][j], y:yh[i][j]}).evalf()
# print(func.subs({x:xh[i][j], y:yh[i][j]}).evalf())
return u
xnum = np.arange(0, 1.01, .05)
ynum = np.arange(0, 1.01, .05)
xgrid, ygrid = np.meshgrid(xnum, ynum)
uh = z(u, xgrid, ygrid)
vh = z(v, xgrid, ygrid)
ph = z(p, xgrid, ygrid)
kh = z(k, xgrid, ygrid)
epsh = z(eps, xgrid, ygrid)
import matplotlib.pyplot as plt
plot_funcs = [uh, vh, ph, kh, epsh]
for func in plot_funcs:
plt.pcolor(xgrid, ygrid, func, cmap='coolwarm')
cbar = plt.colorbar()
plt.show()
f, g = symbols('f g', cls=Function)
f(x,y).diff(x)
vx, vy = symbols('v_x v_y', cls=Function)
mu, x, y = var('mu x y')
nx, ny = var('n_x n_y')
n = sp.Matrix([nx, ny])
v_vec = sp.Matrix([vx(x, y), vy(x, y)])
sigma = strain_rate(v_vec, x, y)
tw = n.transpose() * sigma - n.transpose() * sigma * n * n.transpose()
tw[0]
tw[1]
vx, vy = symbols('v_x v_y', cls=Function)
mu, x, y = var('mu x y')
nx, ny = var('n_x n_y')
n = sp.Matrix([Integer(0), Integer(1)])
v_vec = sp.Matrix([vx(x, y), 0])
sigma = strain_rate(v_vec, x, y)
tw = n.transpose() * sigma - n.transpose() * sigma * n * n.transpose()
tw[0]
Explanation: Simple diffusion
End of explanation
u
import sympy as sp
def gradVec2(u_vec, x, y):
return sp.Matrix([[diff(u_vec[0], x), diff(u_vec[1],x)], [diff(u_vec[0], y), diff(u_vec[1], y)]])
def divTen2(tensor, x, y):
return sp.Matrix([diff(tensor[0,0], x) + diff(tensor[1,0], y), diff(tensor[0, 1], x) + diff(tensor[1,1], y)])
def divVec2(u_vec, x, y):
return diff(u_vec[0], x) + diff(u_vec[1], y)
def gradScalar2(u, x, y):
return sp.Matrix([diff(u, x), diff(u,y)])
def strain_rate(u_vec, x, y):
return gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
def strain_rate_squared_2(u_vec, x, y):
tensor = gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
rv = 0
for i in range(2):
for j in range(2):
rv += tensor[i, j] * tensor[i, j]
return rv
def laplace2(u, x, y):
return diff(diff(u, x), x) + diff(diff(u, y), y)
def L_momentum_traction(uvec, k, eps, x, y):
cmu = 0.09
mu, rho = sp.var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose(), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose()), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
return source
def bc_terms_momentum_traction(uvec, nvec, k, eps, x, y):
cmu = 0.09
mu, rho = sp.var('mu rho')
visc_term = (-mu * nvec.transpose() * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose(), x, y)).transpose()
turbulent_visc_term = -(nvec.transpose() * (rho * cmu * k**2 / eps * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose()), x, y)).transpose()
return visc_term + turbulent_visc_term
def L_momentum_laplace(uvec, k, eps, x, y):
cmu = 0.09
mu, rho = var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y)), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
return source
def L_pressure(uvec, x, y):
return -divVec2(uvec, x, y)
def L_kin(uvec, k, eps, x, y):
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(k, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigk) * gradScalar2(k, x, y), x, y)
creation_term = - rho * cmu * k**2 / eps / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * eps
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
return L
def L_eps(uvec, k, eps, x, y):
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(eps, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigeps) * gradScalar2(eps, x, y), x, y)
creation_term = - rho * c1eps * cmu * k / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * c2eps * eps**2 / k
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
return L
def prep_moose_input(sym_expr):
rep1 = re.sub(r'\*\*',r'^',str(sym_expr))
rep2 = re.sub(r'mu',r'${mu}',rep1)
rep3 = re.sub(r'rho',r'${rho}',rep2)
return rep3
def write_all_functions():
target = open('/home/lindsayad/python/mms_input.txt','w')
target.write("[Functions]" + "\n")
target.write(" [./u_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_momentum_traction(uVecNew, kinNew, epsilonNew, x, y)[0]) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./v_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_momentum_traction(uVecNew, kinNew, epsilonNew, x, y)[1]) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./p_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_pressure(uVecNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_kin(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_eps(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./u_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(uNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./v_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(vNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./p_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(pNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(kinNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(epsilonNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write("[]" + "\n")
target.close()
def write_reduced_functions():
target = open('/home/lindsayad/python/mms_input.txt','w')
target.write("[Functions]" + "\n")
target.write(" [./u_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_momentum_traction(uVecNew, kinNew, epsilonNew, x, y)[0]) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_kin(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_eps(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./u_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(uNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(kinNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(epsilonNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write("[]" + "\n")
target.close()
yStarPlus = 11.06
# uNew = yStarPlus**2 / y + u * (y - 1.) * 200
# uNew = u * (y - 1.) * 200
# vNew = Integer(0)
# vNew = v * (y - 1.5) * 200
# pNew = Integer(0)
# # Converges
# uNew = u
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * 200
# vNew = v * 200
# pNew = p * 200
# kinNew = k * 200
# epsilonNew = eps * 200
# # Converges
# uNew = u * (y - 1.)
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1. / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * (y - 1.) + yStarPlus**2 / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1.1 / y
# vNew = 0
# pNew = 0
# kinNew = k
# epsilonNew = eps
# Want to test natural boundary condition
uNew = 0.5 + sin(pi * x / 2) + sin(pi * y / 2)
vNew = 0
pNew = 0
kinNew = k
epsilonNew = eps
uVecNew = sp.Matrix([uNew, vNew])
write_reduced_functions()
print(u)
print(v)
print(p)
print(k)
print(eps)
Explanation: Verified RANS kernels
INSK
INSEpsilon
INSMomentumTurbulentViscosityTractionForm
INSMomentumTurbulentViscosityLaplaceForm
INSMomentumShearStressWallFunction with |u|/yStarPlus branch of uTau
with exp_form = false.
End of explanation
vx, vy = symbols('v_x v_y', cls=Function)
mu, x, y = var('mu x y')
nx, ny = var('n_x n_y')
n = sp.Matrix([Integer(0), Integer(1)])
v_vec = sp.Matrix([vx(x, y), 0])
kinFunc, epsFunc = symbols('kinFunc epsFunc', cls=Function)
blah = bc_terms_momentum_traction(uVecNew, n, kinNew, epsilonNew, x, y)
type(blah)
blah[0]
sigma = mu * strain_rate(v_vec, x, y)
tw = n.transpose() * sigma - n.transpose() * sigma * n * n.transpose()
tw[0]
(n.transpose() * sigma)[0]
cmu = 0.09
mu, rho = sp.var('mu rho')
visc_term = (-mu * n.transpose() * (gradVec2(v_vec, x, y) + gradVec2(v_vec, x, y).transpose(), x, y)).transpose()
turbulent_visc_term = -(n.transpose() * (rho * cmu * k**2 / eps * (gradVec2(v_vec, x, y) + gradVec2(v_vec, x, y).transpose()), x, y)).transpose()
visc_term
print(visc_term)
yStarPlus = 11.06
# uNew = yStarPlus**2 / y + u * (y - 1.) * 200
# uNew = u * (y - 1.) * 200
# vNew = Integer(0)
# vNew = v * (y - 1.5) * 200
# pNew = Integer(0)
# # Converges
# uNew = u
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * 200
# vNew = v * 200
# pNew = p * 200
# kinNew = k * 200
# epsilonNew = eps * 200
# # Converges
# uNew = u * (y - 1.)
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1. / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * (y - 1.) + yStarPlus**2 / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1.1 / y
# vNew = 0
# pNew = 0
# kinNew = k
# epsilonNew = eps
from moose_calc_routines import *
from sympy import *
init_printing()
yStarPlus = 1.1
vx, vy = symbols('v_x v_y', cls=Function, positive=True, real=True)
mu, x, y = var('mu x y', real=True, positive=True)
nx, ny = var('n_x n_y')
n = sp.Matrix([Integer(0), Integer(1)])
v_vec = sp.Matrix([vx(x, y), 0])
kinFunc, epsFunc = symbols('k_f \epsilon_f', cls=Function)
u = sym_func(x, y, 1)
v = sym_func(x, y, 1)
p = sym_func(x, y, 1)
k = sym_func(x, y, 1)
eps = sym_func(x, y, 1)
# Want to test wall function bc
uNew = mu * yStarPlus**2 / y
# uNew = 0.5 + sin(pi * x / 2) + sin(pi * y / 2) + mu * yStarPlus**2 / y
vNew = 0
pNew = 0
kinNew = k
epsilonNew = eps
# # Want to test natural boundary condition
# uNew = 0.5 + sin(pi * x / 2) + sin(pi * y / 2)
# vNew = 0
# pNew = 0
# kinNew = k
# epsilonNew = eps
uVecNew = sp.Matrix([uNew, vNew])
numeric = bc_terms_momentum_traction(uVecNew, n, kinNew, epsilonNew, x, y, symbolic=False)
numeric_wall_function = wall_function_momentum_traction(uVecNew, n, kinNew, epsilonNew, x, y, "kin", symbolic=False)
symbolic = bc_terms_momentum_traction(v_vec, n, kinFunc(x, y), epsFunc(x, y), x, y, symbolic=True)
wall_function = wall_function_momentum_traction(v_vec, n, kinFunc(x, y), epsFunc(x, y), x, y, "kin", symbolic=True)
write_reduced_functions(uVecNew, kinNew, epsilonNew, x, y)
expr = numeric[0] - numeric_wall_function[0]
expr.subs(y, 1).collect('mu')
expr = symbolic[0] - wall_function[0]
# print(expr)
# expr
newexp = expr.subs(vx(x, y), vx(y)).subs(Abs(vx(y)), vx(y))
# print(newexp)
newexp
dsolve(newexp, vx(y))
symbolic[0]
wall_function[0]
from moose_calc_routines import *
from sympy import *
import sympy as sp
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
init_printing()
x, y = var('x y')
# # INS turbulence
# u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
# v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
# p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
# k = 0.4*sin(0.7*pi*x) + 0.9*sin(0.7*pi*y) + 0.7*sin(0.4*pi*x*y) + 0.4
# eps = 0.6*sin(0.3*pi*x) + 0.9*sin(0.9*pi*y) + 0.8*sin(0.6*pi*x*y) + 0.5
# uvec = sp.Matrix([u, v])
# n = sp.Matrix([Integer(0), Integer(1)])
# INS only
u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
uvec = sp.Matrix([u, v])
nvec = sp.Matrix([Integer(0), Integer(1)])
nvecs = {'left' : sp.Matrix([-1, 0]), 'top' : sp.Matrix([0, 1]), \
'right' : sp.Matrix([1, 0]), 'bottom' : sp.Matrix([0, -1])}
source = {bnd_name :
prep_moose_input(-bc_terms_momentum_traction_no_turbulence(uvec, nvec, p, x, y, parts=True)[0])
for bnd_name, nvec in nvecs.items()}
source
surface_terms = bc_terms_momentum_traction_no_turbulence(uvec, nvec, p, x, y, parts=True)
tested_bc = no_bc_bc(uvec, nvec, p, x, y, parts=True)
needed_func = tested_bc - surface_terms
print(prep_moose_input(needed_func[0]))
print(prep_moose_input(needed_func[1]))
surface_terms = bc_terms_momentum_traction(uvec, n, p, k, eps, x, y, symbolic=False, parts=True)
tested_bc = wall_function_momentum_traction(uvec, n, p, k, eps, x, y, "kin", symbolic=False, parts=True)
needed_func = tested_bc - surface_terms
needed_func
print(prep_moose_input(needed_func[0]))
print(prep_moose_input(-surface_terms[0]))
wall_function_momentum_traction(uvec, n, p, k, eps, x, y, "kin", symbolic=True, parts=True)
surface_terms = bc_terms_diffusion(u, n, x, y)
tested_bc = vacuum(u, n)
surface_terms
tested_bc
needed_func = tested_bc - surface_terms
needed_func
print(needed_func)
Explanation: Ok, so apparently just scaling every variable's manufactured solution by 200 causes MOOSE convergence issues. Sigh
End of explanation
from moose_calc_routines import *
from sympy import *
import sympy as sp
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
init_printing()
x, y = var('x y')
# INS turbulence
u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
k = 0.4*sin(0.7*pi*x) + 0.9*sin(0.7*pi*y) + 0.7*sin(0.4*pi*x*y) + 0.4
eps = 0.6*sin(0.3*pi*x) + 0.9*sin(0.9*pi*y) + 0.8*sin(0.6*pi*x*y) + 0.5
# # INS only
# u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
# v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
# p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
uvec = sp.Matrix([u, v])
nvecs = {'left' : sp.Matrix([-1, 0]), 'top' : sp.Matrix([0, 1]), \
'right' : sp.Matrix([1, 0]), 'bottom' : sp.Matrix([0, -1])}
source = {bnd_name :
prep_moose_input(#ins_epsilon_wall_function_bc(nvec, k, eps, x, y)
-bc_terms_eps(nvec, k, eps, x, y)[0,0])
for bnd_name, nvec in nvecs.items()}
# anti_bounds = {'left' : 'top right bottom', 'top' : 'right bottom left',
# 'right' : 'bottom left top', 'bottom' : 'left top right'}
anti_bounds = {'left' : 'top right bottom left', 'top' : 'right bottom left top',
'right' : 'bottom left top right', 'bottom' : 'left top right bottom'}
h_list = ['5', '10']
base = "k_epsilon_general_bc"
h_array = np.array([.2, .1])
volume_source = {'u' : prep_moose_input(L_momentum_traction(uvec, p, k, eps, x, y)[0]),
'v' : prep_moose_input(L_momentum_traction(uvec, p, k, eps, x, y)[1]),
'p' : prep_moose_input(L_pressure(uvec, x, y)),
'k' : prep_moose_input(L_kin(uvec, k, eps, x, y)),
'eps' : prep_moose_input(L_eps(uvec, k , eps, x, y))}
diri_func = {'u' : u, 'v' : v, 'p' : p, 'k' : k, 'eps' : eps}
a_string = "b"
a_string += "c"
a_string
"a" + None
optional_save_string="epsilon_wall_func_natural"
plot_order_accuracy('left', h_array, base, optional_save_string=optional_save_string)
plot_order_accuracy('right', h_array, base, optional_save_string=optional_save_string)
plot_order_accuracy('top', h_array, base, optional_save_string=optional_save_string)
plot_order_accuracy('bottom', h_array, base, optional_save_string=optional_save_string)
Explanation: 5/16/17
End of explanation
string = "Functions" + str('u')
string
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 2.1, .1)
u = 2*x**2 + 4*x
v = 3*x**2 + 2*x + 1
plt.plot(x, u)
plt.plot(x, v)
plt.show()
x = np.arange(0, 2.1, .1)
u = 1 - 2*x + 2*x**2
v = x**2
plt.plot(x, u)
plt.plot(x, v)
plt.show()
Explanation: Tasks accomplished today:
Showed formal order accuracy of natural boundary condition using MMS with pure navier stokes
Showed grid convergence with "kinetic" branch of INSMomentumShearStressWallFunctionBC but with accuracy order between formal order and formal order - 1 for u, v, and p for top and bottom boundaries. Unable to solve for left and right boundaries. Formal order accuracy for $\epsilon$ and k for solved cases.
Showed grid convergence with "velocity" branch of INSMomentumShearStressWallFunctionBC but with accuracy order between formal order and formal order - 1 for top and bottom boundaries; formal order accuracy for $\epsilon$ and k for top and bottom boundaries. Between formal order - 1 and formal order - 2 for p, u, and v for left boundary; between formal order and formal order - 1 for $\epsilon$ and k for left boundary. Unable to solve for right boundary.
Demonstrated that just by introducing a small error in the MOOSE code (multiplying a term by 1.1), we can destroy grid convergence by two orders. This makes me feel better about the fact that we're not achieving the exact formal order of accuracy with the INSMomentumShearStressWallFunctionBC but we're still within an order of the formal order.
Natural results suggest the moose python calculation routine for the integrated by parts terms is wrong!
End of explanation |
10,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Adapter pattern is a structural design pattern that helps us make two incompatible interfaces compatible.
First, let me explain what an incompatible interface really means. If we want to use an old component in a new system, or a new component in an old system, it is rare for the two to be able to communicate without any code changes. And modifying the code is not always an option, either because we cannot access it (for example, the component is shipped as an external library) or because changing it is simply impractical. In these cases we can write an extra layer of code that contains all the modifications needed for the two interfaces to communicate. That layer of code is called an adapter.
We now have four classes:
Step1: Design patterns may feel a little complicated; don't be discouraged. If you don't intend to make software development your career you may never need to study them, or you may end up using them without realizing it. This material is, however, a good way to review the concepts behind classes.
The four classes all expose different interfaces; we now need to implement an adapter that unifies these different interfaces:
Let's review what we know about decorators. We currently have the following functions:
Step2: Having every function print a statement announcing which function we are currently in is far too tedious. Can we implement a decorator that prints the name of the current function automatically?
Step3: Now we want to strengthen the decorator above by passing some necessary parameters to it, making the decorator more flexible. For example, the addition function could be passed a string of '+' characters as an argument, the subtraction function a string of '-' characters, the multiplication function a string of '*' characters, and the division function a string of '/' characters.
Step5: Decorators are quite formulaic function manipulation; in general they exist to simplify repeated code ("don't repeat yourself"): don't write the same code twice.
SQLite is a common database that is also part of the Python standard library. It is very widely used and suits scenarios where the data volume is small but a database is still required; for connecting to SQLite and for CRUD operations, refer to the standard library documentation.
Now create two tables in the following way and then insert some data into them:
-- todo_schema.sql
-- Schema for to-do application examples.
-- Projects are high-level activities made up of tasks
create table project (
name text primary key,
description text,
deadline date
);
-- Tasks are steps that can be taken to complete a project
create table task (
id integer primary key autoincrement not null,
priority integer default 1,
details text,
status text,
deadline date,
completed_on date,
project text not null references project(name)
);
Note that the SQL code above needs to be placed in a file named 'todo_schema.sql'. | Python Code:
import os
class Dog(object):
def __init__(self):
self.name = "Dog"
def bark(self):
return "woof!"
class Cat(object):
def __init__(self):
self.name = "Cat"
def meow(self):
return "meow!"
class Human(object):
def __init__(self):
self.name = "Human"
def speak(self):
return "'hello'"
class Car(object):
def __init__(self):
self.name = "Car"
def make_noise(self, octane_level):
return "vroom%s" % ("!" * octane_level)
Explanation: The Adapter pattern is a structural design pattern that helps us make two incompatible interfaces compatible.
First, let me explain what an incompatible interface really means. If we want to use an old component in a new system, or a new component in an old system, it is rare for the two to be able to communicate without any code changes. And modifying the code is not always an option, either because we cannot access it (for example, the component is shipped as an external library) or because changing it is simply impractical. In these cases we can write an extra layer of code that contains all the modifications needed for the two interfaces to communicate. That layer of code is called an adapter.
We now have four classes:
End of explanation
def add(x, y):
print('add')
return x + y
def sub(x, y):
print('sub')
return x - y
def mul(x, y):
print('mul')
return x * y
def div(x, y):
print('div')
return x / y
Explanation: Design patterns may feel a little complicated; don't be discouraged. If you don't intend to make software development your career you may never need to study them, or you may end up using them without realizing it. This material is, however, a good way to review the concepts behind classes.
The four classes all expose different interfaces; we now need to implement an adapter that unifies these different interfaces. A minimal sketch follows:
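One common Python formulation is sketched here; the Adapter class and its keyword-argument style are illustrative rather than taken from the notebook.
class Adapter(object):
    # Wrap an object and expose a chosen method under a unified name
    def __init__(self, obj, **adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)
    def __getattr__(self, attr):
        # Delegate everything else (e.g. .name) to the wrapped object
        return getattr(self.obj, attr)

dog, human = Dog(), Human()
for thing in [Adapter(dog, make_noise=dog.bark), Adapter(human, make_noise=human.speak)]:
    print(thing.name, thing.make_noise())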
Let's review what we know about decorators. We currently have the following functions:
End of explanation
@debug
def add(x, y):
return x + y
@debug
def sub(x, y):
return x - y
@debug
def mul(x, y):
return x * y
@debug
def div(x, y):
return x / y
add(3,4)
Explanation: Having every function print a statement announcing which function we are currently in is far too tedious. Can we implement a decorator that prints the name of the current function automatically? One possible sketch is given below.
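A minimal sketch of such a decorator (the name debug matches the usage above; functools.wraps preserves the wrapped function's metadata):
from functools import wraps

def debug(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(func.__name__)  # announce which function is running
        return func(*args, **kwargs)
    return wrapper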
End of explanation
@debug(prefix='++++++')
def add(x, y):
return x + y
@debug(prefix='------')
def sub(x, y):
return x - y
@debug(prefix='******')
def mul(x, y):
return x * y
@debug(prefix='//////')
def div(x, y):
return x / y
add(3,4)
sub(3,2)
Explanation: Now we want to strengthen the decorator above by passing some necessary parameters to it, making the decorator more flexible. For example, the addition function could be passed a string of '+' characters as an argument, the subtraction function a string of '-' characters, the multiplication function a string of '*' characters, and the division function a string of '/' characters. A sketch of the parameterized form is given below.
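A sketch of the parameterized decorator (an extra outer layer receives the prefix and returns the actual decorator; again only the implementation details are illustrative):
from functools import wraps

def debug(prefix=''):
    def decorate(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            print(prefix + func.__name__)  # e.g. '++++++add'
            return func(*args, **kwargs)
        return wrapper
    return decorate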
End of explanation
import os
import sqlite3
db_filename = 'todo.db'
schema_filename = 'todo_schema.sql'
db_is_new = not os.path.exists(db_filename)
with sqlite3.connect(db_filename) as conn:
if db_is_new:
print('Creating schema')
with open(schema_filename, 'rt') as f:
schema = f.read()
conn.executescript(schema)
print('Inserting initial data')
        conn.executescript("""
insert into project (name, description, deadline)
values ('pymotw', 'Python Module of the Week',
        '2016-11-01');
insert into task (details, status, deadline, project)
values ('write about select', 'done', '2016-04-25',
        'pymotw');
insert into task (details, status, deadline, project)
values ('write about random', 'waiting', '2016-08-22',
        'pymotw');
insert into task (details, status, deadline, project)
values ('write about sqlite3', 'active', '2017-07-31',
        'pymotw');
""")
else:
print('Database exists, assume schema does, too.')
Next, try retrieving all of the data from the database created above (using fetchall); a minimal sketch follows.
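A minimal query sketch (the column list is illustrative; it simply matches the task schema above):
with sqlite3.connect(db_filename) as conn:
    cursor = conn.cursor()
    cursor.execute("select id, priority, details, status, deadline, project from task")
    for row in cursor.fetchall():
        print(row)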
Explanation: Decorators are quite formulaic function manipulation; in general they exist to simplify repeated code ("don't repeat yourself"): don't write the same code twice.
SQLite is a common database that is also part of the Python standard library. It is very widely used and suits scenarios where the data volume is small but a database is still required; for connecting to SQLite and for CRUD operations, refer to the standard library documentation.
Now create two tables in the following way and then insert some data into them:
-- todo_schema.sql
-- Schema for to-do application examples.
-- Projects are high-level activities made up of tasks
create table project (
name text primary key,
description text,
deadline date
);
-- Tasks are steps that can be taken to complete a project
create table task (
id integer primary key autoincrement not null,
priority integer default 1,
details text,
status text,
deadline date,
completed_on date,
project text not null references project(name)
);
Note that the SQL code above needs to be placed in a file named 'todo_schema.sql'.
End of explanation |
10,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hdf5 File format
Vaex uses hdf5 (Hierarchical Data Format) for storing data. You can think of hdf5 files as being a file system, where the 'files' contain N-dimensional arrays, or think of it as the binary equivalent of XML files. Being almost like a filesystem, you can store data any way you like, for instance under '/mydata/somearray'.
For vaex we based our layout on VOTable, any recommendation, comments or requests to standardize are welcome.
In vaex, every column is stored under /data, which can be found out using the h5ls tool
$ h5ls data/helmi-dezeeuw-2000-10p.hdf5
data Group
All columns are stored under this group, and can be listed
Step1: More information about a column can be found using | Python Code:
import h5py
import numpy as np
h5file = h5py.File("/Users/users/breddels/src/vaex/data/helmi-dezeeuw-2000-10p.hdf5", "r")
FeH = h5file["/data/FeH"]
# FeH is your regular numpy array (with some extras)
print("mean FeH", np.mean(FeH), "length", len(FeH))
Explanation: Hdf5 File format
Vaex uses hdf5 (Hierarchical Data Format) for storing data. You can think of hdf5 files as being a file system, where the 'files' contain N-dimensional arrays, or think of it as the binary equivalent of XML files. Being almost like a filesystem, you can store data any way you like, for instance under '/mydata/somearray'.
For vaex we based our layout on VOTable, any recommendation, comments or requests to standardize are welcome.
In vaex, every column is stored under /data, which can be found out using the h5ls tool
$ h5ls data/helmi-dezeeuw-2000-10p.hdf5
data Group
All columns are stored under this group, and can be listed:
$ h5ls data/helmi-dezeeuw-2000-10p.hdf5/data
E Dataset {330000}
FeH Dataset {330000}
L Dataset {330000}
Lz Dataset {330000}
random_index Dataset {330000}
vx Dataset {330000}
vy Dataset {330000}
vz Dataset {330000}
x Dataset {330000}
y Dataset {330000}
z Dataset {330000}
If you for some reason don't want to use vaex, but access the data using Python, you could do something like this:
End of explanation
print(FeH.attrs["ucd"], FeH.attrs["unit"])
Explanation: More information about a column can be found using:
h5ls -v data/helmi-dezeeuw-2000-10p.hdf5/data/FeH
Opened "data/helmi-dezeeuw-2000-10p.hdf5" with sec2 driver.
FeH Dataset {330000/330000}
Attribute: ucd scalar
Type: variable-length null-terminated ASCII string
Data: "phys.abund.fe"
Attribute: unit scalar
Type: variable-length null-terminated ASCII string
Data: "dex"
Location: 1:2644064
Links: 1
Storage: 2640000 logical bytes, 2640000 allocated bytes, 100.00% utilization
Type: native double
Here we see that, similar to VOTable, we have a ucd attribute which describes what the column represents, and its unit.
These can be accessed using h5py as well
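Listing the column names from Python rather than with h5ls is a one-liner with h5py:
print(list(h5file["/data"].keys()))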
End of explanation |
10,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Customer Churn
Credits
Step1: We'll be keeping the statistical model pretty simple for this example so the feature space is almost unchanged from what you see above. The following code simply drops irrelevant columns and converts strings to boolean values (since models don't handle "yes" and "no" very well). The rest of the numeric columns are left untouched.
Step2: One slight side note. Many predictors care about the relative size of different features even though those scales might be arbitrary. For instance
Step3: Let's compare three fairly distinct algorithms: support vector machines, random forest, and k-nearest-neighbors. Nothing fancy here, just passing each to cross validation and determining how often the classifier predicted the correct class.
Step4: Random forest won, right?
Precision and recall
Measurements aren't golden formulas which always spit out high numbers for good models and low numbers for bad ones. Inherently they convey a particular sentiment about a model's performance, and it's the job of the human designer to determine each number's validity. The problem with accuracy is that outcomes aren't necessarily equal. If my classifier predicted a customer would churn and they didn't, that's not the best but it's forgivable. However, if my classifier predicted a customer would return, I didn't act, and then they churned... that's really bad.
We'll be using another built-in scikit-learn function to construct a confusion matrix. A confusion matrix is a way of visualizing predictions made by a classifier and is just a table showing the distribution of predictions for a specific class. The x-axis indicates the true class of each observation (if a customer churned or not) while the y-axis corresponds to the class predicted by the model (if my classifier said a customer would churn or not).
Confusion matrix and confusion tables
Step5: An important question to ask might be, When an individual churns, how often does my classifier predict that correctly? This measurement is called "recall" and a quick look at these diagrams can demonstrate that random forest is clearly best for this criteria. Out of all the churn cases (outcome "1") random forest correctly retrieved 330 out of 482. This translates to a churn "recall" of about 68% (330/482≈2/3), far better than support vector machines (≈50%) or k-nearest-neighbors (≈35%).
Another question of importance is "precision" or, When a classifier predicts an individual will churn, how often does that individual actually churn? The difference in semantics from the previous question is small, but it makes quite a difference. Random forest again outperforms the other two at about 93% precision (330 out of 356) with support vector machines a little behind at about 87% (235 out of 269). K-nearest-neighbors lags at about 80%.
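In symbols, writing TP, FP and FN for the true-positive, false-positive and false-negative counts:
$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}$$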
While, just like accuracy, precision and recall still rank random forest above SVC and KNN, this won't always be true. When different measurements do return a different pecking order, understanding the values and tradeoffs of each rating should affect how you proceed.
ROC Plots & AUC
Another important metric to consider is ROC plots. We'll cover the majority of these concepts in lecture, but if you're itching for more, one of the best resources out there is this academic paper.
Simply put, the area under the curve (AUC) of a receiver operating characteristic (ROC) curve is a way to reduce ROC performance to a single value representing expected performance.
To explain with a little more detail, a ROC curve plots the true positive rate (sensitivity) vs. the false positive rate (1 − specificity) for a binary classifier system as its discrimination threshold is varied. Since a random classifier traces the diagonal of the unit square, it has an AUC of .5. Minimally, classifiers should perform better than this, and to the extent that one scores higher than another (meaning the area under its ROC curve is larger), it has better expected performance.
Step6: Feature Importance
Now that we understand the accuracy of each individual model for our particular dataset, let's dive a little deeper to get a better understanding of what features or behaviours are causing our customers to churn. In the next section, we will be using a RandomForestClassifier to build an ensemble of decision trees to predict whether a customer will churn or not churn. One of the first steps in building a decision tree is calculating the information gain associated with splitting on a particular feature. (More on this later.)
Let's look at the Top 10 features in our dataset that contribute to customer churn
Step7: Thinking in Probabilities
Decision making often favors probability over simple classifications. There's plainly more information in statements like "there's a 20% chance of rain tomorrow" and "about 55% of test takers pass the California bar exam" than just saying "it shouldn't rain tomorrow" or "you'll probably pass." Probability predictions for churn also allow us to gauge a customers expected value, and their expected loss. Who do you want to reach out to first, the client with a 80% churn risk who pays 20,000 annually, or the client who's worth 100,000 a year with a 40% risk? How much should you spend on each client?
While I'm moving a bit away from my expertise, being able to ask that question requires producing predictions a little differently. However, scikit-learn makes moving to probabilities easy; my three models have predict_proba() built right into their class objects. This is the same cross validation code with only a few lines changed.
Step8: How good is good?
Determining how good a predictor is when it gives probabilities rather than classes is a bit more difficult. If I predict there's a 20% likelihood of rain tomorrow I don't get to live out all the possible outcomes of the universe. It either rains or it doesn't.
What helps is that the predictors aren't making one prediction, they're making 3000+. So for every time I predict an event to occur 20% of the time I can see how often those events actually happen. Here we'll use pandas to compare the predictions made by random forest against the actual outcomes.
Step9: We can see that random forests predicted that 75 individuals would have a 0.9 probability of churn and in actuality that group had a ~0.97 rate.
Calibration and Discrimination
Using the DataFrame above we can draw a pretty simple graph to help visualize probability measurements. The x axis represents the churn probabilities which random forest assigned to a group of individuals. The y axis is the actual rate of churn within that group, and each point is scaled relative to the size of the group.
Calibration is a relatively simple measurement and can be summed up as so | Python Code:
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
from sklearn.cross_validation import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
%matplotlib inline
churn_df = pd.read_csv('../data/churn.csv')
col_names = churn_df.columns.tolist()
print "Column names:"
print col_names
to_show = col_names[:6] + col_names[-6:]
print "\nSample data:"
churn_df[to_show].head(6)
Explanation: Customer Churn
Credits: Forked from growth-workshop by aprial, as featured on the yhat blog
"Churn Rate" is a business term describing the rate at which customers leave or cease paying for a product or service. It's a critical figure in many businesses, as it's often the case that acquiring new customers is a lot more costly than retaining existing ones (in some cases, 5 to 20 times more expensive).
Understanding what keeps customers engaged, therefore, is incredibly valuable, as it is a logical foundation from which to develop retention strategies and roll out operational practices aimed to keep customers from walking out the door. Consequently, there's growing interest among companies to develop better churn-detection techniques, leading many to look to data mining and machine learning for new and creative approaches.
Predicting churn is particularly important for businesses w/ subscription models such as cell phone, cable, or merchant credit card processing plans. But modeling churn has wide reaching applications in many domains. For example, casinos have used predictive models to predict ideal room conditions for keeping patrons at the blackjack table and when to reward unlucky gamblers with front row seats to Celine Dion. Similarly, airlines may offer first class upgrades to complaining customers. The list goes on.
Wait, don't go!
So what are some of ops strategies that companies employ to prevent churn? Well, reducing churn, it turns out, often requires non-trivial resources. Specialized retention teams are common in many industries and exist expressly to call down lists of at-risk customers to plead for their continued business.
Organizing and running such teams is tough. From an ops perspective, cross-geographic teams must be well organized and trained to respond to a huge spectrum of customer complaints. Customers must be accurately targeted based on churn-risk, and retention treatments must be well-conceived and correspond reasonably to match expected customer value to ensure the economics make sense. Spending $1,000 on someone who wasn't about to leave can get expensive pretty quickly.
Within this frame of mind, efficiently dealing with turnover is an exercise of distinguishing who is likely to churn from who is not using the data at our disposal. The remainder of this post will explore a simple case study to show how Python and its scientific libraries can be used to predict churn and how you might deploy such a solution within operations to guide a retention team.
The Dataset
The data set we'll be using is a longstanding telecom customer data set.
The data is straightforward. Each row represents a subscribing telephone customer. Each column contains customer attributes such as phone number, call minutes used during different times of day, charges incurred for services, lifetime account duration, and whether or not the customer is still a customer.
End of explanation
# Isolate target data
churn_result = churn_df['Churn?']
y = np.where(churn_result == 'True.',1,0)
# We don't need these columns
to_drop = ['State','Area Code','Phone','Churn?']
churn_feat_space = churn_df.drop(to_drop,axis=1)
# 'yes'/'no' has to be converted to boolean values
# NumPy converts these from boolean to 1. and 0. later
yes_no_cols = ["Int'l Plan","VMail Plan"]
churn_feat_space[yes_no_cols] = churn_feat_space[yes_no_cols] == 'yes'
# Pull out features for future use
features = churn_feat_space.columns
print features
X = churn_feat_space.as_matrix().astype(np.float)
# This is important
scaler = StandardScaler()
X = scaler.fit_transform(X)
print "Feature space holds %d observations and %d features" % X.shape
print "Unique target labels:", np.unique(y)
Explanation: We'll be keeping the statistical model pretty simple for this example so the feature space is almost unchanged from what you see above. The following code simply drops irrelevant columns and converts strings to boolean values (since models don't handle "yes" and "no" very well). The rest of the numeric columns are left untouched.
End of explanation
from sklearn.cross_validation import KFold
def run_cv(X,y,clf_class,**kwargs):
# Construct a kfolds object
kf = KFold(len(y),n_folds=3,shuffle=True)
y_pred = y.copy()
# Iterate through folds
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
# Initialize a classifier with key word arguments
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
y_pred[test_index] = clf.predict(X_test)
return y_pred
Explanation: One slight side note. Many predictors care about the relative size of different features even though those scales might be arbitrary. For instance: the number of points a basketball team scores per game will naturally be a couple of orders of magnitude larger than their win percentage. But this doesn't mean that the latter is 100 times less significant. StandardScaler fixes this by standardizing each feature to zero mean and unit variance (so most values fall roughly between -1.0 and 1.0), thereby preventing models from misbehaving. Well, at least for that reason.
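As a quick sanity check — a small sketch, not part of the original pipeline — you can confirm what the scaler actually did: every column of X should now have mean ≈ 0 and standard deviation ≈ 1.
# After StandardScaler, each feature should be centered with unit variance
print "Column means (should be ~0):", np.round(X.mean(axis=0), 3)[:5]
print "Column std devs (should be ~1):", np.round(X.std(axis=0), 3)[:5]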
Great, I now have a feature space X and a set of target values y. On to the predictions!
How good is your model?
Express, test, cycle. A machine learning pipeline should be anything but static. There are always new features to design, new data to use, new classifiers to consider, each with unique parameters to tune. And for every change it's critical to be able to ask, "Is the new version better than the last?" So how do I do that?
As a good start, cross validation will be used throughout this example. Cross validation attempts to avoid overfitting (training on and predicting the same datapoint) while still producing a prediction for each observation in the dataset. This is accomplished by systematically hiding different subsets of the data while training a set of models. After training, each model predicts on the subset that had been hidden from it, emulating multiple train-test splits. When done correctly, every observation will have a 'fair' corresponding prediction.
Here's what that looks like using scikit-learn libraries.
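To make the fold mechanics concrete, here is a toy illustration (my addition, not from the original post) of how KFold hands out indices — each observation lands in exactly one test fold:
# Toy example: 6 observations, 3 folds -> every index is held out exactly once
toy_kf = KFold(6, n_folds=3, shuffle=True)
for fold, (train_idx, test_idx) in enumerate(toy_kf):
    print "Fold %d -- train: %s  test: %s" % (fold, train_idx, test_idx)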
End of explanation
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.linear_model import LogisticRegression as LR
from sklearn.ensemble import GradientBoostingClassifier as GBC
from sklearn.metrics import average_precision_score
def accuracy(y_true,y_pred):
# NumPy interpretes True and False as 1. and 0.
return np.mean(y_true == y_pred)
print "Logistic Regression:"
print "%.3f" % accuracy(y, run_cv(X,y,LR))
print "Gradient Boosting Classifier"
print "%.3f" % accuracy(y, run_cv(X,y,GBC))
print "Support vector machines:"
print "%.3f" % accuracy(y, run_cv(X,y,SVC))
print "Random forest:"
print "%.3f" % accuracy(y, run_cv(X,y,RF))
print "K-nearest-neighbors:"
print "%.3f" % accuracy(y, run_cv(X,y,KNN))
Explanation: Let's compare a handful of fairly distinct algorithms: logistic regression, gradient boosting, support vector machines, random forest, and k-nearest-neighbors. Nothing fancy here, just passing each to cross validation and determining how often the classifier predicted the correct class.
End of explanation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
def draw_confusion_matrices(confusion_matrices, class_names):
    class_names = class_names.tolist()
    # Each entry is a (classifier name, confusion matrix) pair
    for classifier, cm in confusion_matrices:
print(cm)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title('Confusion matrix for %s' % classifier)
fig.colorbar(cax)
ax.set_xticklabels([''] + class_names)
ax.set_yticklabels([''] + class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
y = np.array(y)
class_names = np.unique(y)
confusion_matrices = [
( "Support Vector Machines", confusion_matrix(y,run_cv(X,y,SVC)) ),
( "Random Forest", confusion_matrix(y,run_cv(X,y,RF)) ),
( "K-Nearest-Neighbors", confusion_matrix(y,run_cv(X,y,KNN)) ),
( "Gradient Boosting Classifier", confusion_matrix(y,run_cv(X,y,GBC)) ),
( "Logisitic Regression", confusion_matrix(y,run_cv(X,y,LR)) )
]
# The plotting helper is defined above; the original post imported it instead:
# from churn_display import draw_confusion_matrices
%matplotlib inline
draw_confusion_matrices(confusion_matrices,class_names)
Explanation: Random forest won, right?
Precision and recall
Measurements aren't golden formulas which always spit out high numbers for good models and low numbers for bad ones. Inherently they convey some sense of a model's performance, and it's the job of the human designer to determine each number's validity. The problem with accuracy is that outcomes aren't necessarily equal. If my classifier predicted a customer would churn and they didn't, that's not the best but it's forgivable. However, if my classifier predicted a customer would return, I didn't act, and then they churned... that's really bad.
We'll be using another built-in scikit-learn function to construct a confusion matrix. A confusion matrix is a way of visualizing predictions made by a classifier and is just a table showing the distribution of predictions for a specific class. The x-axis indicates the true class of each observation (if a customer churned or not) while the y-axis corresponds to the class predicted by the model (if my classifier said a customer would churn or not).
Confusion matrix and confusion tables:
The columns represent the actual class and the rows represent the predicted class. Let's evaluate performance:
| | condition True | condition false|
|------|----------------|---------------|
|prediction true|True Positive|False positive|
|Prediction False|False Negative|True Negative|
Sensitivity, Recall, or True Positive Rate quantifies the model's ability to predict the positive class.
$$TPR = \frac{ TP}{TP + FN}$$
Specificity, or True Negative Rate, quantifies the model's ability to predict the negative class.
$$TNR = \frac{ TN}{FP + TN}$$
Example:
| | Spam | Ham|
|------|----------------|---------------|
|prediction Spam|100|50|
|Prediction Ham|75|900|
$$TPR = \frac{100}{100 + 75} = 57.14\% \text{ (sensitivity)}$$
$$TNR = \frac{900}{50 + 900} = 94.73\% \text{ (specificity)}$$
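The same arithmetic in code, using the spam/ham counts from the table above (just a small sketch to tie the formulas to numbers):
# Spam/ham example from the table above
tp, fn = 100., 75.   # actual spam: predicted spam / predicted ham
tn, fp = 900., 50.   # actual ham:  predicted ham  / predicted spam
print "TPR (sensitivity): %.2f%%" % (100 * tp / (tp + fn))
print "TNR (specificity): %.2f%%" % (100 * tn / (fp + tn))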
End of explanation
from sklearn.metrics import roc_curve, auc
from scipy import interp
def plot_roc(X, y, clf_class, **kwargs):
kf = KFold(len(y), n_folds=5, shuffle=True)
y_prob = np.zeros((len(y),2))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train_index, test_index) in enumerate(kf):
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
# Predict probabilities, not classes
y_prob[test_index] = clf.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y[test_index], y_prob[test_index, 1])
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))
mean_tpr /= len(kf)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
print "Support vector machines:"
plot_roc(X,y,SVC,probability=True)
print "Random forests:"
plot_roc(X,y,RF,n_estimators=18)
print "K-nearest-neighbors:"
plot_roc(X,y,KNN)
print "Gradient Boosting Classifier:"
plot_roc(X,y,GBC)
Explanation: An important question to ask might be, When an individual churns, how often does my classifier predict that correctly? This measurement is called "recall" and a quick look at these diagrams can demonstrate that random forest is clearly best for this criterion. Out of all the churn cases (outcome "1") random forest correctly retrieved 330 out of 482. This translates to a churn "recall" of about 68% (330/482≈2/3), far better than support vector machines (≈50%) or k-nearest-neighbors (≈35%).
Another question of importance is "precision" or, When a classifier predicts an individual will churn, how often does that individual actually churn? The difference in semantics from the previous question is subtle, but it makes quite a difference. Random forest again outperforms the other two at about 93% precision (330 out of 356), with support vector machines a little behind at about 87% (235 out of 269). K-nearest-neighbors lags at about 80%.
While, just like accuracy, precision and recall still rank random forest above SVC and KNN, this won't always be true. When different measurements do return a different pecking order, understanding the values and tradeoffs of each rating should affect how you proceed.
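Since precision_score and recall_score were imported alongside confusion_matrix earlier, the same figures can also be pulled out directly — a quick cross-check rather than part of the original walkthrough (exact values will drift a little between runs because the folds are shuffled):
for name, clf in [("Random Forest", RF), ("SVM", SVC), ("KNN", KNN)]:
    preds = run_cv(X, y, clf)
    print "%-15s precision: %.3f  recall: %.3f" % (
        name, precision_score(y, preds), recall_score(y, preds))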
ROC Plots & AUC
Another important metric to consider is ROC plots. We'll cover the majority of these concepts in lecture, but if you're itching for more, one of the best resources out there is this academic paper.
Simply put, the area under the curve (AUC) of a receiver operating characteristic (ROC) curve is a way to reduce ROC performance to a single value representing expected performance.
To explain in a little more detail, a ROC curve plots the true positive rate (sensitivity) vs. the false positive rate (1 − specificity) for a binary classifier as its discrimination threshold is varied. Since a random method traces the diagonal of the unit square, it has an AUC of 0.5. Minimally, classifiers should perform better than this, and to the extent that one scores higher than another (meaning the area under its ROC curve is larger), it has better expected performance.
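If you just want the summary number rather than the plot, the same roc_curve/auc pair used in plot_roc collapses to a single AUC score. A rough sketch on one shuffled hold-out split (so the value will wobble a bit from run to run):
# One train/test split, random forest probabilities -> a single AUC number
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)
probs = RF(n_estimators=50).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, _ = roc_curve(y_te, probs)
print "Random forest AUC on a held-out split: %.3f" % auc(fpr, tpr)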
End of explanation
train_index,test_index = train_test_split(churn_df.index)
forest = RF()
forest_fit = forest.fit(X[train_index], y[train_index])
forest_predictions = forest_fit.predict(X[test_index])
# Rank all features by importance, then keep the top 10
importances = forest_fit.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1][:10]

# Print the feature ranking (name and score now refer to the same feature)
print("Feature ranking:")
for f in range(10):
    print("%d. %s (%f)" % (f + 1, features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
#import pylab as pl
plt.figure()
plt.title("Feature importances")
plt.bar(range(10), importances[indices], yerr=std[indices], color="r", align="center")
plt.xticks(range(10), features[indices], rotation=60)
plt.xlim([-1, 10])
plt.show()
Explanation: Feature Importance
Now that we understand the accuracy of each individual model for our particular dataset, let's dive a little deeper to get a better understanding of what features or behaviours are causing our customers to churn. In the next section, we will be using a RandomForestClassifier to build an ensemble of decision trees to predict whether a customer will churn or not churn. One of the first steps in building a decision tree is calculating the information gain associated with splitting on a particular feature. (More on this later.)
Let's look at the Top 10 features in our dataset that contribute to customer churn:
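As a compact cross-check (my addition, not from the original post), you can also pair each feature name with its importance score and sort, which sidesteps any indexing slip-ups:
# Pair names with scores, sort descending, show the top 10
ranked = sorted(zip(features, forest_fit.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for rank, (name, score) in enumerate(ranked[:10], start=1):
    print "%2d. %-20s %.4f" % (rank, name, score)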
End of explanation
def run_prob_cv(X, y, clf_class, roc=False, **kwargs):
kf = KFold(len(y), n_folds=5, shuffle=True)
y_prob = np.zeros((len(y),2))
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
# Predict probabilities, not classes
y_prob[test_index] = clf.predict_proba(X_test)
return y_prob
Explanation: Thinking in Probabilities
Decision making often favors probability over simple classifications. There's plainly more information in statements like "there's a 20% chance of rain tomorrow" and "about 55% of test takers pass the California bar exam" than just saying "it shouldn't rain tomorrow" or "you'll probably pass." Probability predictions for churn also allow us to gauge a customer's expected value, and their expected loss. Who do you want to reach out to first, the client with an 80% churn risk who pays 20,000 annually, or the client who's worth 100,000 a year with a 40% risk? How much should you spend on each client?
While I'm moving a bit away from my expertise, being able to ask that question requires producing predictions a little differently. However, scikit-learn makes moving to probabilities easy; my three models have predict_proba() built right into their class objects. This is the same cross validation code with only a few lines changed.
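To make the expected-value point concrete, here's a tiny sketch with the two hypothetical clients from the paragraph above (the dollar figures are illustrative, not from the data):
# Expected annual loss = churn probability x annual revenue (illustrative numbers)
clients = [("Client A", 0.80, 20000), ("Client B", 0.40, 100000)]
for name, churn_prob, revenue in clients:
    print "%s: expected loss = $%.0f" % (name, churn_prob * revenue)
On expected loss alone, the lower-risk but higher-value client is actually the bigger exposure — which is exactly why probabilities beat hard labels here.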
End of explanation
import warnings
warnings.filterwarnings('ignore')
# Use 10 estimators so predictions are all multiples of 0.1
pred_prob = run_prob_cv(X, y, RF, n_estimators=10)
pred_churn = pred_prob[:,1]
is_churn = y == 1
# Number of times a predicted probability is assigned to an observation
counts = pd.value_counts(pred_churn)
counts[:]
from collections import defaultdict
true_prob = defaultdict(float)
# calculate true probabilities
for prob in counts.index:
true_prob[prob] = np.mean(is_churn[pred_churn == prob])
true_prob = pd.Series(true_prob)
# pandas-fu
counts = pd.concat([counts,true_prob], axis=1).reset_index()
counts.columns = ['pred_prob', 'count', 'true_prob']
counts
Explanation: How good is good?
Determining how good a predictor is when it gives probabilities rather than classes is a bit more difficult. If I predict there's a 20% likelihood of rain tomorrow I don't get to live out all the possible outcomes of the universe. It either rains or it doesn't.
What helps is that the predictors aren't making one prediction, they're making 3000+. So for every time I predict an event to occur 20% of the time I can see how often those events actually happen. Here we'll use pandas to compare the predictions made by random forest against the actual outcomes.
End of explanation
from churn_measurements import calibration, discrimination
from sklearn.metrics import roc_curve, auc
from scipy import interp
from __future__ import division
from operator import idiv
def print_measurements(pred_prob):
churn_prob, is_churn = pred_prob[:,1], y == 1
print " %-20s %.4f" % ("Calibration Error", calibration(churn_prob, is_churn))
print " %-20s %.4f" % ("Discrimination", discrimination(churn_prob,is_churn))
print "Note -- Lower calibration is better, higher discrimination is better"
print "Support vector machines:"
print_measurements(run_prob_cv(X,y,SVC,probability=True))
print "Random forests:"
print_measurements(run_prob_cv(X,y,RF,n_estimators=18))
print "K-nearest-neighbors:"
print_measurements(run_prob_cv(X,y,KNN))
print "Gradient Boosting Classifier:"
print_measurements(run_prob_cv(X,y,GBC))
print "Random Forest:"
print_measurements(run_prob_cv(X,y,RF))
Explanation: We can see that random forests predicted that 75 individuals would have a 0.9 probability of churn and in actuality that group had a ~0.97 rate.
Calibration and Discrimination
Using the DataFrame above we can draw a pretty simple graph to help visualize probability measurements. The x axis represents the churn probabilities which random forest assigned to a group of individuals. The y axis is the actual rate of churn within that group, and each point is scaled relative to the size of the group.
Calibration is a relatively simple measurement and can be summed up as so: Events predicted to happen 60% of the time should happen 60% of the time. For all individuals I predict to have a churn risk of between 30 and 40%, the true churn rate for that group should be about 35%. For the graph above think of it as, How close are my predictions to the red line?
Discrimination measures How far are my predictions away from the green line? Why is that important?
Well, if we assign a churn probability of 15% to every individual we'll have near-perfect calibration due to averages, but we'll be lacking any real insight. Discrimination gives a model a better score if it's able to isolate groups which are further from the base set.
Equations are replicated from Yang, Yates, and Smith (1991) and the code Yhat wrote can be found on GitHub here.
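Note that churn_measurements is a small helper module from the Yhat repository, not part of scikit-learn. In case it isn't handy, here is a rough sketch of what Yang, Yates, and Smith (1991)-style indices look like — group observations by their predicted probability, then measure the squared gap between each group's prediction and its observed rate (calibration), and between each group's observed rate and the overall base rate (discrimination). Treat it as an approximation of the library's behaviour, not a drop-in replacement.
def calibration_sketch(prob, outcome):
    # Mean squared gap between each predicted-probability group and its observed churn rate
    prob, outcome = np.asarray(prob), np.asarray(outcome, dtype=float)
    score = 0.0
    for p in np.unique(prob):
        mask = prob == p
        score += mask.sum() * (p - outcome[mask].mean()) ** 2
    return score / len(prob)

def discrimination_sketch(prob, outcome):
    # Mean squared gap between each group's observed rate and the overall base rate
    prob, outcome = np.asarray(prob), np.asarray(outcome, dtype=float)
    base = outcome.mean()
    score = 0.0
    for p in np.unique(prob):
        mask = prob == p
        score += mask.sum() * (outcome[mask].mean() - base) ** 2
    return score / len(prob)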
End of explanation |