Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
15,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and writing files in Python
Goals for this assignment
Most of the data that you will use will come in a file of some sort. In fact, we've used data from files at various points during this class, but have mostly glossed over how we actually work with those files. In this assignment, and in class tomorrow, we're going to work with some of the common types of files using standard Python (and numpy) methods.
Your name
// put your name here!
Standard file types
Text files
Step1: Part 2
Step2: Part 3
Step4: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# put your code here!
Explanation: Reading and writing files in Python
Goals for this assignment
Most of the data that you will use will come in a file of some sort. In fact, we've used data from files at various points during this class, but have mostly glossed over how we actually work with those files. In this assignment, and in class tomorrow, we're going to work with some of the common types of files using standard Python (and numpy) methods.
Your name
// put your name here!
Standard file types
Text files: This is a broad category of file that contains information stored as plain text. That data could be numerical, strings, or some combination of them. It's typical for a text file to have columns of data and have a format that looks like this:
```
# Columns:
# 1. time
# 2. height
# 3. distance
0.317 0.227 0.197
0.613 0.432 2.715
1.305 0.917 5.613
```
where the rows starting with a pound sign (#) are meant to be comments and the ones following that have data that's broken up into columns by spaces or tabs. Text files often have the ".txt" file extension. The primary advantage of plain text files is their simplicity and portability; the disadvantages are that (1) there is no standard format for data in these files, so you have to look at each file and figure out exactly how it is structured; (2) it uses disk space very inefficiently, (3) it's difficult to store "non-rectangular" datasets (i.e., datasets where each row has a different number of columns, or missing data). Despite the disadvantages, however, their convenience means that they are useful for many purposes!
Comma-separated value files: This sort of file, also known as a "CSV" file, is a specific type of text file where numbers and strings are stored as a table of data. Each line in the file is an individual "record" that's analogous to a row in a spreadsheet, and each entry within that row is a "field", separated by commas. This type of data file often has the ".csv" file extension. The data above might be stored in a CSV file in the following way:
"height","time","distance"
0.317,0.227,0.197
0.613,0.432,2.715
1.305,0.917,5.613
CSV files share many of the same advantages and disadvantages of plain text files. One significant advantage, however, is that there is something closer to a standard file format for CSV files, and it's easier to deal with missing data/non-rectangular data.
Binary files: This encompasses a wide range of file types that store data in a much more compact format, and allow much more complex data types to be stored. Many of the files that you use - for example, any file that contains audio, video, or image data - are binary files, and Numpy also has its own binary file format. The internal format of these files can vary tremendously, but in general binary files have an extension that tells you what type of format it is (for example, .mp3, .mov, or .jpg), and standard tools exist to read and write those files.
Part 1: Reading and writing text files
The Python tutorial has a useful section on reading and writing text files. In short, you open a file using the open() method:
myfile = open('filename.txt','r')
where the first argument is a string containing the file name (you can use a variable if you want), and the second is a string that explains how a file will be used. 'r' means "read-only"; 'w' means "write-only"; 'r+' means "read and write". If you do not put any argument, the file is assumed to be read-only.
You can read the entire file by:
myfile.read()
or a single line with:
myfile.readline()
Each line of the file is returned as a string, which you can manipulate as you would any other string (by splitting it into a list, converting entries to a different data type, etc.)
You can also loop over the file and operate on each line sequentially:
for line in myfile:
print(line)
Note: if you get an extra line in between each printed-out line, that's because there is already a newline ('\n') character at the end of the line in the file. You can make this go away by using this syntax:
print(line,end='')
Also, if you want to get all of the lines of a file in a list you can use list(myfile) or myfile.readlines()
You can write to a file by creating a string and writing it. For example:
string = str(4.0) + 'abcde'
myfile.write(string)
Once you're done with a file, you can close it by using the close() method:
myfile.close()
You can also use the seek() function to move back and forth within a file; see the Python tutorial for more information on this.
You may also find yourself using the Numpy loadtxt() and savetxt() methods to read and write text-based files; we'll talk about those in the next section.
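To tie these calls together, here is a minimal sketch of the write-then-read pattern described above (the file name and sentences are placeholders, not part of the assignment):
```
myfile = open('example.txt', 'w')   # 'w' creates the file if it does not already exist
myfile.write('This is a sentence about me.\n')
myfile.write('Here is one more fact that ends in a newline.\n')
myfile.close()

myfile = open('example.txt', 'r')
for line in myfile:
    words = line.split()            # split the line into a list of strings
    print(words[0])                 # the first word of each sentence
myfile.close()
```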
In the space below: write a piece of code that creates a file named your_last_name.txt (with your actual last name), and then write several sentences of information to it with some facts about yourself that end in a newline ('\n'). (Note that opening a file in mode 'w' will create the file!) Look at the file with a text editor (TextEdit in Mac; WordPad or NotePad in Windows) and see what it looks like! Then, read the file in line-by-line, split each line into a list of strings, and print out the first item in the list. You should get the first word in each sentence back!
End of explanation
# put your code here!
Explanation: Part 2: Reading and writing CSV files
You're likely to use two primary means of reading and writing CSV files in Python: the Python CSV module and the Numpy loadtxt() and savetxt().
The Python CSV module has reader() and writer() methods that let you read and write to a file with lists. Once you open a file (as you would a normal text file in Python), you have to tell Python that it's a CSV file using csv.reader(), which returns a reader object that you can then iterate over to get rows in the file in the same way you do with lists. In fact, each line of a CSV file is returned as a list. So, to read and print out individual lines of a CSV file, you would do the following:
```
import csv
csvfile = open('my_file.csv','r')
csvreader = csv.reader(csvfile,delimiter=',')
for row in csvreader:
print(row)
csvfile.close()
```
Writing is similarly straightforward. You open the file, use the csv.writer() method to create a writer object, and then you can write individual rows with the writerow() method. The writerow() method takes in a list and writes it as a single row in the file. For example, the following code creates two arrays and then writes them into a CSV file:
```
import csv
import numpy as np
x = y = np.arange(0,10,1) # create a couple of arrays to work with
csvfile = open('my_written_file.csv','w',newline='')
csvwriter = csv.writer(csvfile,delimiter=',')
for i in range(x.size):
csvwriter.writerow([x[i], y[i]])
csvfile.close()
```
Numpy's loadtxt() and savetxt() methods do similar things as the CSV reader() and writer() methods, but directly work with arrays. In particular, you could load the two arrays written via the small piece of code directly above this into two numpy arrays as follows:
import numpy as np
xnew, ynew = np.loadtxt('my_written_file.csv',delimiter=',',unpack=True)
or you can read the two arrays into a single numpy array:
import numpy as np
combined_arrays = np.loadtxt('my_written_file.csv',delimiter=',',unpack=True)
where xnew can be accessed as combined_arrays[0] and ynew is combined_arrays[1].
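Going the other direction, savetxt() writes arrays straight to a text/CSV file; a small sketch (the file name is again just an example):
```
import numpy as np
x = np.arange(0, 10, 1)
y = x**2
# column_stack pairs the two 1D arrays up as columns before writing
np.savetxt('my_other_file.csv', np.column_stack((x, y)), delimiter=',')
```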
In the space below: write a piece of code that creates two numpy arrays - an array of approximately 100 x-values that are evenly spaced between -10 and 10, and an array of y-values that are equal to sin(x). Use the Python CSV writer to write those to a CSV file with a name of your choosing, and then use numpy to load them into two arrays (with different names) and plot them using pyplot's plot() method. Does it look like a sine wave?
End of explanation
# put your code here!
Explanation: Part 3: Reading and writing binary files
There are an astounding number of ways to read and write binary files. For our purposes, we can use the numpy save() and savez() methods to write one or multiple numpy arrays into a binary file, respectively, and the load() method to load them. For example, if you have arrays named arr1, arr2, and arr3, you would write them to a file as follows:
np.savez('mydatafile',arr1,arr2,arr3)
and if you wanted to save them with specific names - say "first_array", "second_array", and "third_array", you would do so using this syntax:
np.savez('mydatafile',first_array=arr1,second_array=arr2,third_array=arr3)
In both circumstances, you would get a file named mydatafile.npz unless you manually put an extension different than .npz on the file.
You would then read in the data files using the Numpy load() method:
all_data = np.load('mydatafile.npz')
all_data then contains all of the array information. To find out what arrays are in there, you can print all_data.files:
print(all_data.files)
and access an individual array by name:
array_one = all_data['first_array']
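Putting those pieces together, a short round-trip sketch (array names and values are arbitrary) looks like this:
```
import numpy as np
arr1 = np.arange(5)
arr2 = np.linspace(0., 1., 5)
np.savez('mydatafile', first_array=arr1, second_array=arr2)
all_data = np.load('mydatafile.npz')
print(all_data.files)             # ['first_array', 'second_array']
print(all_data['second_array'])
```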
In the space below: redo what you did in Part 2, but using the numpy savez() and load() methods and thus using a Numpy array file instead of a CSV file. When saving the arrays, make sure to give them descriptive names. If you plot the x and y values, do you still get a sine wave?
End of explanation
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/cGV5yNRzgxzx6naf2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
15,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Siegert neuron integration
Alexander van Meegen, 2020-12-03
This notebook describes how NEST handles the numerical integration of the 'Siegert' function.
For an alternative approach, which was implemented in NEST before, see Appendix A.1 in (Hahne et al., 2017). The current approach seems to be faster and more stable, in particular in the noise-free limit.
Let's start with some imports
Step1: Introduction
We want to determine the firing rate of an integrate-and-fire neuron with exponentially decaying post–synaptic currents driven by a mean input $\mu$ and white noise of strength $\sigma$. For small synaptic time constant $\tau_{\mathrm{s}}$ compared to the membrane time constant $\tau_{\mathrm{m}}$, the firing rate is given by the 'Siegert' (Fourcaud and Brunel, 2002)
$$ \phi(\mu,\sigma) = \left(\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})\right)^{-1} $$
with the refractory period $\tau_{\mathrm{ref}}$ and the integral
$$ I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}}) = \int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}e^{s^{2}}(1+\mathrm{erf}(s))ds $$
involving the shifted and scaled threshold voltage $\tilde{V}_{\mathrm{th}}=\frac{V_{\mathrm{th}}-\mu}{\sigma}+\frac{\alpha}{2}\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$, the shifted and scaled reset voltage $\tilde{V}_{\mathrm{r}}=\frac{V_{\mathrm{r}}-\mu}{\sigma}+\frac{\alpha}{2}\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$, and the constant $\alpha=\sqrt{2}\left|\zeta(1/2)\right|$ where $\zeta(x)$ denotes the Riemann zeta function.
Numerically, the integral in $I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})$ is problematic due to the interplay of $e^{s^{2}}$ and $\mathrm{erf}(s)$ in the integrand. Already for moderate values of $s$, it causes numerical problems (note the order of magnitude)
Step2: The main trick here is to use the scaled complementary error function
$$\mathrm{erf}(s)=1-e^{-s^{2}}\mathrm{erfcx}(s)$$
to extract the leading exponential contribution. For positive $s$, we have $0\le\mathrm{erfcx}(s)\le1$, i.e. the exponential contribution is in the prefactor $e^{-s^{2}}$ which nicely cancels with the $e^{s^{2}}$ in the integrand. In the following, we separate three cases according to the sign of $\tilde{V}_{\mathrm{th}}$ and $\tilde{V}_{\mathrm{r}}$ because for negative arguments the integrand simplifies to $e^{s^{2}}(1+\mathrm{erf}(-s))=\mathrm{erfcx}(s)$. Eventually, only integrals of $\mathrm{erfcx}(s)$ for positive $s\ge0$ need to be solved numerically, which are certainly better behaved
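As a quick numerical sanity check of this identity (my addition, not part of the original notebook):
```
import numpy as np
from scipy.special import erf, erfcx
s = np.linspace(-5, 5, 11)
# erf(s) = 1 - exp(-s**2) * erfcx(s) should hold to machine precision
print(np.max(np.abs(erf(s) - (1 - np.exp(-s**2) * erfcx(s)))))
```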
Step3: Mathematical Reformulation
Strong Inhibition
We have to consider three different cases; let us start with strong inhibitory input such that $0<\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}$ or equivalently $\mu<V_{\mathrm{r}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$. In this regime, the error function in the integrand is positive. Expressing it in terms of $\mathrm{erfcx}(s)$, we get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=2\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}e^{s^{2}}ds-\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}\mathrm{erfcx}(s)ds. $$
The first integral can be solved in terms of the Dawson function $D(s)$, which is bound between $\pm1$ and conveniently implemented in GSL; the second integral gives a small correction which has to be evaluated numerically. We get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=2e^{\tilde{V}_{\mathrm{th}}^{2}}D(\tilde{V}_{\mathrm{th}})-2e^{\tilde{V}_{\mathrm{r}}^{2}}D(\tilde{V}_{\mathrm{r}})-\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}\mathrm{erfcx}(s)ds.$$
We extract the leading contribution $e^{\tilde{V}_{\mathrm{th}}^{2}}$ from the denominator and arrive at
$$\phi(\mu,\sigma)=\frac{e^{-\tilde{V}_{\mathrm{th}}^{2}}}{e^{-\tilde{V}_{\mathrm{th}}^{2}}\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}\left(2D(\tilde{V}_{\mathrm{th}})-2e^{-\tilde{V}_{\mathrm{th}}^{2}+\tilde{V}_{\mathrm{r}}^{2}}D(\tilde{V}_{\mathrm{r}})-e^{-\tilde{V}_{\mathrm{th}}^{2}}\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}\mathrm{erfcx}(s)ds\right)}$$
as a numerically safe expression for $0<\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}$. Extracting $e^{\tilde{V}_{\mathrm{th}}^{2}}$ from the denominator reduces the latter to $2\tau_{\mathrm{m}}\sqrt{\pi}D(\tilde{V}_{\mathrm{th}})$ and exponentially small correction terms because $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}$, thereby preventing overflow.
Strong Excitation
Now let us consider the case of strong excitatory input such that $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}<0$ or $\mu>V_{\mathrm{th}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$. In this regime, we can change variables $s\to-s$ to make the domain of integration positive again. Using $\mathrm{erf}(-s)=-\mathrm{erf}(s)$ as well as $\mathrm{erfcx}(s)$, we get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds.$$
In particular, there is no exponential contribution involved in this regime. Thus, we get
$$\phi(\mu,\sigma)=\frac{1}{\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds}$$
as a numerically safe expression for $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}<0$.
Intermediate Regime
In the intermediate regime, we have $\tilde{V}_{\mathrm{r}}\le0\le\tilde{V}_{\mathrm{th}}$ or $V_{\mathrm{r}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}\le\mu\le V_{\mathrm{th}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$. Thus, we split the integral at zero and use the previous steps for the respective parts to get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=2e^{\tilde{V}_{\mathrm{th}}^{2}}D(\tilde{V}_{\mathrm{th}})+\int_{\tilde{V}_{\mathrm{th}}}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds.$$
Note that the sign of the second integral depends on whether $\left|\tilde{V}_{\mathrm{r}}\right|>\tilde{V}_{\mathrm{th}}$ (+) or not (-). Again, we extract the leading contribution $e^{\tilde{V}_{\mathrm{th}}^{2}}$ from the denominator and arrive at
$$\phi(\mu,\sigma) = \frac{e^{-\tilde{V}_{\mathrm{th}}^{2}}}{e^{-\tilde{V}_{\mathrm{th}}^{2}}\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}\left(2D(\tilde{V}_{\mathrm{th}})+e^{-\tilde{V}_{\mathrm{th}}^{2}}\int_{\tilde{V}_{\mathrm{th}}}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds\right)}$$
as a numerically safe expression for $\tilde{V}_{\mathrm{r}}\le0\le\tilde{V}_{\mathrm{th}}$.
Noise-free Limit
Even the noise-free limit $\sigma\ll\mu$, where the implementation from (Hahne et al., 2017) eventually breaks, works flawlessly. In this limit, $\left|\tilde{V}_{\mathrm{th}}\right|\gg1$ as long as $\mu\neq V_{\mathrm{th}}$; thus, we get both in the 'strong inhibition' and in the 'intermediate' regime $\phi(\mu,\sigma)\sim e^{-\tilde{V}_{\mathrm{th}}^{2}}\approx0$ for $\tilde{V}_{\mathrm{th}}\ge0$. Accordingly, the only interesting case is the 'strong excitation' regime $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}<0$. Since also $\left|\tilde{V}_{\mathrm{r}}\right|\gg1$, the integrand $\mathrm{erfcx}(s)$ is only evaluated at $s\gg1$. Using only the first term of the asymptotic expansion
$$\mathrm{erfcx}(s)=\frac{1}{s\sqrt{\pi}}\sum_{n=0}^{\infty}(-1)^{n}\frac{(2n-1)!!}{(2s^{2})^{n}}$$
leads to the analytically solvable integral
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds\approx\frac{1}{\sqrt{\pi}}\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\frac{1}{s}ds=\frac{1}{\sqrt{\pi}}\log\frac{\left|\tilde{V}_{\mathrm{r}}\right|}{\left|\tilde{V}_{\mathrm{th}}\right|}.$$
Inserting this into $\phi(\mu,\sigma)$ and using $\tilde{V}_{\mathrm{th}}\approx\frac{V_{\mathrm{th}}-\mu}{\sigma}, \tilde{V}_{\mathrm{r}}\approx\frac{V_{\mathrm{r}}-\mu}{\sigma}$ yields
$$\phi(\mu,\sigma)\approx\begin{cases}
0 & \mu\le V_{\mathrm{th}}\\
\frac{1}{\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\log\frac{\mu-V_{\mathrm{r}}}{\mu-V_{\mathrm{th}}}} & \mu>V_{\mathrm{th}}\end{cases}$$
as it should. Thus, as long as the numerical solution of the integral $\frac{1}{\sqrt{\pi}}\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\frac{1}{s}ds$ is precise, the deterministic limit is also numerically safe.
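A quick numerical check of this limit (my addition; the values of $\left|\tilde{V}_{\mathrm{th}}\right|$ and $\left|\tilde{V}_{\mathrm{r}}\right|$ below are arbitrary examples):
```
import numpy as np
from scipy.special import erfcx
from scipy.integrate import quad
V_th_abs, V_r_abs = 50.0, 80.0                        # arbitrary example values deep in the noise-free regime
integral, _ = quad(erfcx, V_th_abs, V_r_abs)          # numerical integral of erfcx
approx = np.log(V_r_abs / V_th_abs) / np.sqrt(np.pi)  # first-order asymptotics
print(integral, approx)                               # the two agree to better than 0.1%
```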
Relevance of Noise-free Limit
Let us briefly estimate for which values the noise-free limit becomes relevant. We have $\left|\tilde{V}_{\mathrm{r}}\right|>\left|\tilde{V}_{\mathrm{th}}\right|\gg1$, thus the integrand $\mathrm{erfcx}(s)$ is only evaluated for arguments $s>\left|\tilde{V}_{\mathrm{th}}\right|\gg1$. Looking at the difference between $\mathrm{erfcx}(s)$ and the first order asymptotics shown below, we see that the absolute difference to the asymptotics is only $O(10^{-7})$ for moderate values $\left|\tilde{V}_{\mathrm{th}}\right|\approx O(100)$. Since we saw above that the noise-free limit is equivalent to the first order asymptotics, we can conclude that it is certainly relevant for $\left|\tilde{V}_{\mathrm{th}}\right|\approx\frac{\mu-V_{\mathrm{th}}}{\sigma}\approx O(100)$; e.g. for $\mu-V_{\mathrm{th}}\approx10$mV a noise strength of $\sigma\approx0.1$mV corresponds to the noise-free limit. | Python Code:
import numpy as np
from scipy.special import erf, erfcx
import matplotlib.pyplot as plt
Explanation: Siegert neuron integration
Alexander van Meegen, 2020-12-03
This notebook describes how NEST handles the numerical integration of the 'Siegert' function.
For an alternative approach, which was implemented in NEST before, see Appendix A.1 in (Hahne et al., 2017). The current approach seems to be faster and more stable, in particular in the noise-free limit.
Let's start with some imports:
End of explanation
s = np.linspace(-10, 10, 1001)
plt.plot(s, np.exp(s**2) * (1 + erf(s)), c='black')
plt.xlim(s[0], s[-1])
plt.yscale('log')
plt.title(r'$e^{s^{2}}(1+\mathrm{erf}(s))$')
plt.show()
Explanation: Introduction
We want to determine the firing rate of an integrate-and-fire neuron with exponentially decaying post–synaptic currents driven by a mean input $\mu$ and white noise of strength $\sigma$. For small synaptic time constant $\tau_{\mathrm{s}}$ compared to the membrane time constant $\tau_{\mathrm{m}}$, the firing rate is given by the 'Siegert' (Fourcaud and Brunel, 2002)
$$ \phi(\mu,\sigma) = \left(\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})\right)^{-1} $$
with the refractory period $\tau_{\mathrm{ref}}$ and the integral
$$ I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}}) = \int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}e^{s^{2}}(1+\mathrm{erf}(s))ds $$
involving the shifted and scaled threshold voltage $\tilde{V}_{\mathrm{th}}=\frac{V_{\mathrm{th}}-\mu}{\sigma}+\frac{\alpha}{2}\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$, the shifted and scaled reset voltage $\tilde{V}_{\mathrm{r}}=\frac{V_{\mathrm{r}}-\mu}{\sigma}+\frac{\alpha}{2}\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$, and the constant $\alpha=\sqrt{2}\left|\zeta(1/2)\right|$ where $\zeta(x)$ denotes the Riemann zeta function.
Numerically, the integral in $I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})$ is problematic due to the interplay of $e^{s^{2}}$ and $\mathrm{erf}(s)$ in the integrand. Already for moderate values of $s$, it causes numerical problems (note the order of magnitude):
End of explanation
s = np.linspace(0, 100, 1001)
plt.plot(s, erfcx(s), c='black')
plt.xlim(s[0], s[-1])
plt.yscale('log')
plt.title(r'$\mathrm{erfcx}(s)$')
plt.show()
Explanation: The main trick here is to use the scaled complementary error function
$$\mathrm{erf}(s)=1-e^{-s^{2}}\mathrm{erfcx}(s)$$
to extract the leading exponential contribution. For positive $s$, we have $0\le\mathrm{erfcx}(s)\le1$, i.e. the exponential contribution is in the prefactor $e^{-s^{2}}$ which nicely cancels with the $e^{s^{2}}$ in the integrand. In the following, we separate three cases according to the sign of $\tilde{V}_{\mathrm{th}}$ and $\tilde{V}_{\mathrm{r}}$ because for negative arguments the integrand simplifies to $e^{s^{2}}(1+\mathrm{erf}(-s))=\mathrm{erfcx}(s)$. Eventually, only integrals of $\mathrm{erfcx}(s)$ for positive $s\ge0$ need to be solved numerically, which are certainly better behaved:
End of explanation
s = np.linspace(0.1, 100, 1000)
plt.plot(s, erfcx(s), c='black', label=r'$\mathrm{erfcx}(s)$')
plt.plot(s, 1/(np.sqrt(np.pi)*s), ls='--', label=r'$1/\sqrt{\pi}s$')
plt.xlim(s[0], s[-1])
plt.xscale('log')
plt.yscale('log')
plt.legend()
plt.title(r'First order asymptotics of $\mathrm{erfcx}(s)$')
plt.show()
s = np.linspace(0.1, 100, 1000)
plt.plot(s, 1/(np.sqrt(np.pi)*s)-erfcx(s), c='black')
plt.xlim(s[0], s[-1])
plt.xscale('log')
plt.yscale('log')
plt.title(r'Absolute error of first order asymptotics of $\mathrm{erfcx}(s)$')
plt.show()
Explanation: Mathematical Reformulation
Strong Inhibition
We have to consider three different cases; let us start with strong inhibitory input such that $0<\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}$ or equivalently $\mu<V_{\mathrm{r}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$. In this regime, the error function in the integrand is positive. Expressing it in terms of $\mathrm{erfcx}(s)$, we get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=2\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}e^{s^{2}}ds-\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}\mathrm{erfcx}(s)ds. $$
The first integral can be solved in terms of the Dawson function $D(s)$, which is bound between $\pm1$ and conveniently implemented in GSL; the second integral gives a small correction which has to be evaluated numerically. We get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=2e^{\tilde{V}_{\mathrm{th}}^{2}}D(\tilde{V}_{\mathrm{th}})-2e^{\tilde{V}_{\mathrm{r}}^{2}}D(\tilde{V}_{\mathrm{r}})-\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}\mathrm{erfcx}(s)ds.$$
We extract the leading contribution $e^{\tilde{V}_{\mathrm{th}}^{2}}$ from the denominator and arrive at
$$\phi(\mu,\sigma)=\frac{e^{-\tilde{V}_{\mathrm{th}}^{2}}}{e^{-\tilde{V}_{\mathrm{th}}^{2}}\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}\left(2D(\tilde{V}_{\mathrm{th}})-2e^{-\tilde{V}_{\mathrm{th}}^{2}+\tilde{V}_{\mathrm{r}}^{2}}D(\tilde{V}_{\mathrm{r}})-e^{-\tilde{V}_{\mathrm{th}}^{2}}\int_{\tilde{V}_{\mathrm{r}}}^{\tilde{V}_{\mathrm{th}}}\mathrm{erfcx}(s)ds\right)}$$
as a numerically safe expression for $0<\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}$. Extracting $e^{\tilde{V}_{\mathrm{th}}^{2}}$ from the denominator reduces the latter to $2\tau_{\mathrm{m}}\sqrt{\pi}D(\tilde{V}_{\mathrm{th}})$ and exponentially small correction terms because $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}$, thereby preventing overflow.
Strong Excitation
Now let us consider the case of strong excitatory input such that $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}<0$ or $\mu>V_{\mathrm{th}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$. In this regime, we can change variables $s\to-s$ to make the domain of integration positive again. Using $\mathrm{erf}(-s)=-\mathrm{erf}(s)$ as well as $\mathrm{erfcx}(s)$, we get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds.$$
In particular, there is no exponential contribution involved in this regime. Thus, we get
$$\phi(\mu,\sigma)=\frac{1}{\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds}$$
as a numerically safe expression for $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}<0$.
Intermediate Regime
In the intermediate regime, we have $\tilde{V}_{\mathrm{r}}\le0\le\tilde{V}_{\mathrm{th}}$ or $V_{\mathrm{r}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}\le\mu\le V_{\mathrm{th}}+\frac{\alpha}{2}\sigma\sqrt{\frac{\tau_{\mathrm{s}}}{\tau_{\mathrm{m}}}}$. Thus, we split the integral at zero and use the previous steps for the respective parts to get
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=2e^{\tilde{V}_{\mathrm{th}}^{2}}D(\tilde{V}_{\mathrm{th}})+\int_{\tilde{V}_{\mathrm{th}}}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds.$$
Note that the sign of the second integral depends on whether $\left|\tilde{V}_{\mathrm{r}}\right|>\tilde{V}_{\mathrm{th}}$ (+) or not (-). Again, we extract the leading contribution $e^{\tilde{V}_{\mathrm{th}}^{2}}$ from the denominator and arrive at
$$\phi(\mu,\sigma) = \frac{e^{-\tilde{V}_{\mathrm{th}}^{2}}}{e^{-\tilde{V}_{\mathrm{th}}^{2}}\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\sqrt{\pi}\left(2D(\tilde{V}_{\mathrm{th}})+e^{-\tilde{V}_{\mathrm{th}}^{2}}\int_{\tilde{V}_{\mathrm{th}}}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds\right)}$$
as a numerically safe expression for $\tilde{V}_{\mathrm{r}}\le0\le\tilde{V}_{\mathrm{th}}$.
Noise-free Limit
Even the noise-free limit $\sigma\ll\mu$, where the implementation from (Hahne et al., 2017) eventually breaks, works flawlessly. In this limit, $\left|\tilde{V}_{\mathrm{th}}\right|\gg1$ as long as $\mu\neq V_{\mathrm{th}}$; thus, we get both in the 'strong inhibition' and in the 'intermediate' regime $\phi(\mu,\sigma)\sim e^{-\tilde{V}_{\mathrm{th}}^{2}}\approx0$ for $\tilde{V}_{\mathrm{th}}\ge0$. Accordingly, the only interesting case is the 'strong excitation' regime $\tilde{V}_{\mathrm{r}}<\tilde{V}_{\mathrm{th}}<0$. Since also $\left|\tilde{V}_{\mathrm{r}}\right|\gg1$, the integrand $\mathrm{erfcx}(s)$ is only evaluated at $s\gg1$. Using only the first term of the asymptotic expansion
$$\mathrm{erfcx}(s)=\frac{1}{s\sqrt{\pi}}\sum_{n=0}^{\infty}(-1)^{n}\frac{(2n-1)!!}{(2s^{2})^{n}}$$
leads to the analytically solvable integral
$$I(\tilde{V}_{\mathrm{th}},\tilde{V}_{\mathrm{r}})=\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\mathrm{erfcx}(s)ds\approx\frac{1}{\sqrt{\pi}}\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\frac{1}{s}ds=\frac{1}{\sqrt{\pi}}\log\frac{\left|\tilde{V}_{\mathrm{r}}\right|}{\left|\tilde{V}_{\mathrm{th}}\right|}.$$
Inserting this into $\phi(\mu,\sigma)$ and using $\tilde{V}_{\mathrm{th}}\approx\frac{V_{\mathrm{th}}-\mu}{\sigma}, \tilde{V}_{\mathrm{r}}\approx\frac{V_{\mathrm{r}}-\mu}{\sigma}$ yields
$$\phi(\mu,\sigma)\approx\begin{cases}
0 & \mu\le V_{\mathrm{th}}\\
\frac{1}{\tau_{\mathrm{ref}}+\tau_{\mathrm{m}}\log\frac{\mu-V_{\mathrm{r}}}{\mu-V_{\mathrm{th}}}} & \mu>V_{\mathrm{th}}\end{cases}$$
as it should. Thus, as long as the numerical solution of the integral $\frac{1}{\sqrt{\pi}}\int_{|\tilde{V}_{\mathrm{th}}|}^{|\tilde{V}_{\mathrm{r}}|}\frac{1}{s}ds$ is precise, the deterministic limit is also numerically safe.
Relevance of Noise-free Limit
Let us briefly estimate for which values the noise-free limit becomes relevant. We have $\left|\tilde{V}_{\mathrm{r}}\right|>\left|\tilde{V}_{\mathrm{th}}\right|\gg1$, thus the integrand $\mathrm{erfcx}(s)$ is only evaluated for arguments $s>\left|\tilde{V}_{\mathrm{th}}\right|\gg1$. Looking at the difference between $\mathrm{erfcx}(s)$ and the first order asymptotics shown below, we see that the absolute difference to the asymptotics is only $O(10^{-7})$ for moderate values $\left|\tilde{V}_{\mathrm{th}}\right|\approx O(100)$. Since we saw above that the noise-free limit is equivalent to the first order asymptotics, we can conclude that it is certainly relevant for $\left|\tilde{V}_{\mathrm{th}}\right|\approx\frac{\mu-V_{\mathrm{th}}}{\sigma}\approx O(100)$; e.g. for $\mu-V_{\mathrm{th}}\approx10$mV a noise strength of $\sigma\approx0.1$mV corresponds to the noise-free limit.
End of explanation |
15,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
14 - Kaggle Competition
Fraud Detection
https://inclass.kaggle.com/c/easy-ml-class
Step1: Estimate aggregated features
Step2: Split for each account and create the date as index
Step3: Create Aggregated Features for one account
Step4: All accounts
Step5: Split train and test
Step6: Simple Random Forest
Step7: KFold cross-validation
Step8: Train with all
Predict and send to Kaggle | Python Code:
import pandas as pd
import zipfile
with zipfile.ZipFile('../datasets/fraud_transactions_kaggle.csv.zip', 'r') as z:
f = z.open('fraud_transactions_kaggle.csv')
data = pd.read_csv(f, index_col=0)
data.head()
data.tail()
data.fraud.value_counts(dropna=False)
Explanation: 14 - Kaggle Competition
Fraud Detection
https://inclass.kaggle.com/c/easy-ml-class
by Alejandro Correa Bahnsen
version 0.1, May 2016
Part of the class Machine Learning for Security Informatics
This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License]
pip install tqdm
Fraud Detection
End of explanation
from datetime import datetime, timedelta
from tqdm import tqdm
Explanation: Estimate aggregated features
End of explanation
card_numbers = data['card_number'].unique()
data['trx_id'] = data.index
data.index = pd.DatetimeIndex(data['date'])
data_ = []
for card_number in tqdm(card_numbers):
data_.append(data.query('card_number == ' + str(card_number)))
Explanation: Split for each account and create the date as index
End of explanation
res_agg = pd.DataFrame(index=data['trx_id'].values,
columns=['Trx_sum_7D', 'Trx_count_1D'])
trx = data_[0]
for i in range(trx.shape[0]):
date = trx.index[i]
trx_id = int(trx.ix[i, 'trx_id'])
# Sum 7 D
agg_ = trx[date-pd.datetools.to_offset('7D').delta:date-timedelta(0,0,1)]
res_agg.loc[trx_id, 'Trx_sum_7D'] = agg_['amount'].sum()
# Count 1D
agg_ = trx[date-pd.datetools.to_offset('1D').delta:date-timedelta(0,0,1)]
res_agg.loc[trx_id, 'Trx_count_1D'] = agg_['amount'].shape[0]
res_agg.mean()
Explanation: Create Aggregated Features for one account
End of explanation
for trx in tqdm(data_):
for i in range(trx.shape[0]):
date = trx.index[i]
trx_id = int(trx.ix[i, 'trx_id'])
# Sum 7 D
agg_ = trx[date-pd.datetools.to_offset('7D').delta:date-timedelta(0,0,1)]
res_agg.loc[trx_id, 'Trx_sum_7D'] = agg_['amount'].sum()
# Count 1D
agg_ = trx[date-pd.datetools.to_offset('1D').delta:date-timedelta(0,0,1)]
res_agg.loc[trx_id, 'Trx_count_1D'] = agg_['amount'].shape[0]
res_agg.head()
data.index = data.trx_id
data = data.join(res_agg)
data.sample(15, random_state=42).sort_index()
Explanation: All accounts
End of explanation
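As an aside (not in the original notebook), newer pandas versions can build similar per-account rolling aggregates without the explicit double loop; a rough sketch, leaving out the re-alignment of the result back to trx_id:
# vectorized alternative sketch; closed='left' excludes the current transaction from its own window
rolling_sum_7d = (data.set_index(pd.DatetimeIndex(data['date']))
                      .sort_index()
                      .groupby('card_number')['amount']
                      .rolling('7D', closed='left')
                      .sum())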
X = data.loc[~data.fraud.isnull()]
y = X.fraud
X = X.drop(['fraud', 'date', 'card_number'], axis=1)
X_kaggle = data.loc[data.fraud.isnull()]
X_kaggle = X_kaggle.drop(['fraud', 'date', 'card_number'], axis=1)
X_kaggle.head()
Explanation: Split train and test
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, class_weight='balanced')
from sklearn.metrics import fbeta_score
Explanation: Simple Random Forest
End of explanation
from sklearn.model_selection import KFold
kf = KFold(n_splits=5)
res = []
for train, test in kf.split(X):
X_train, X_test, y_train, y_test = X.iloc[train], X.iloc[test], y.iloc[train], y.iloc[test]
clf.fit(X_train, y_train)
y_pred_proba = clf.predict_proba(X_test)[:, 1]
y_pred = (y_pred_proba>0.05).astype(int)
res.append(fbeta_score(y_test, y_pred, beta=2))
pd.Series(res).describe()
Explanation: KFold cross-validation
End of explanation
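The 0.05 probability threshold used above (and again below) is a hand-picked choice; a rough way to see its effect is to sweep a few candidate values on the last fold's held-out predictions (my addition):
for t in [0.01, 0.05, 0.1, 0.25, 0.5]:
    print(t, fbeta_score(y_test, (y_pred_proba > t).astype(int), beta=2))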
clf.fit(X, y)
y_pred = clf.predict_proba(X_kaggle)[:, 1]
y_pred = (y_pred>0.05).astype(int)
y_pred = pd.Series(y_pred,name='fraud', index=X_kaggle.index)
y_pred.head(10)
y_pred.to_csv('fraud_transactions_kaggle_1.csv', header=True, index_label='ID')
Explanation: Train with all
Predict and send to Kaggle
End of explanation |
15,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Astronomical Application of Machine Learning
Step1: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database
Step2: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after completing the later parts of the problem
The final data preparation step is to create an independent test set to evaluate the generalization error of the final tuned model. Independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called
Step3: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
Step4: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
Step5: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() object. The higher the value, the more important the feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1c?
Step6: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data
Step7: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
Step8: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
Step9: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
Step10: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
Step11: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
Step12: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
Step13: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
Step14: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assumming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
Step15: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
Step16: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
Step17: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
Step18: Problem 6b
Calculate the accuracy of the model predictions on the new data.
Step19: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: An Astronomical Application of Machine Learning:
Separating Stars and Galaxies from SDSS
Version 0.1
By AA Miller 2018 Nov 06
The problems in the following notebook develop an end-to-end machine learning model using actual astronomical data to separate stars and galaxies. There are 5 steps in this machine learning workflow:
Data Preparation
Model Building
Model Evaluation
Model Optimization
Model Predictions
The data come from the Sloan Digital Sky Survey (SDSS), an imaging survey that has several similarities to LSST (though the telescope was significantly smaller and the survey did not cover as large an area).
Science background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring the structure of the Milky Way without knowing which sources are galaxies and which are stars.
During this exercise, we will utilize supervised machine learning methods to separate extended sources (galaxies) and point sources (stars) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
End of explanation
sdss_df = pd.read_hdf("sdss_training_set.h5")
sns.pairplot(sdss_df, hue = 'class', diag_kind = 'hist')
Explanation: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database:
SELECT TOP 20000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
First download the training set and the blind test set for this problem.
Problem 1a
Visualize the training set data. The data have 8 features ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r', 'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r'], and a 9th column ['class'] corresponding to the labels ('STAR' or 'GALAXY' in this case).
Hint - just execute the cell below.
End of explanation
from sklearn.model_selection import train_test_split
rs = 1851
feats = list(sdss_df.columns)
feats.remove('class')
X = np.array(sdss_df[feats])
y = np.array(sdss_df['class'])
train_X, test_X, train_y, test_y = train_test_split( X, y, test_size = 0.3, random_state = rs)
Explanation: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after completing the later parts of the problem
The final data preparation step is to create an independent test set to evaluate the generalization error of the final tuned model. Independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.
Hint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=25)
knn_clf.fit(train_X, train_y)
Explanation: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=50, random_state=rs, n_jobs=-1)
rf_clf.fit(train_X, train_y)
Explanation: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
End of explanation
feat_str = ',\n'.join(['{}'.format(feat) for feat in np.array(feats)[np.argsort(rf_clf.feature_importances_)[::-1]]])
print('From most to least important: \n{}'.format(feat_str))
Explanation: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() object. The higher the value, the more important the feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1c?
End of explanation
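Note that sklearn's .feature_importances_ values come from impurity decreases inside the trees; the shuffling-based measure described above can be computed explicitly with scikit-learn's permutation_importance (available in recent versions; this short sketch is my addition, not part of the original notebook):
from sklearn.inspection import permutation_importance
# shuffle-based importances on the training set; n_repeats=5 is an arbitrary choice
perm = permutation_importance(rf_clf, train_X, train_y, n_repeats=5, random_state=rs, n_jobs=-1)
for idx in np.argsort(perm.importances_mean)[::-1]:
    print('{}: {:.4f}'.format(feats[idx], perm.importances_mean[idx]))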
from sklearn.metrics import accuracy_score
phot_y = np.empty_like(train_y)
phot_gal = np.logical_not(train_X[:,0] - train_X[:,-1] < 0.145)
phot_y[phot_gal] = 'GALAXY'
phot_y[~phot_gal] = 'STAR'
print("The baseline FoM = {:.4f}".format(accuracy_score(train_y, phot_y)))
Explanation: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag_r} - \mathtt{cModelMag_r} > 0.145.$$
Sources that satisfy this criteria are considered galaxies.
Problem 3a
Determine the baseline figure of merit by measuring the accuracy of the SDSS photometric classifier on the training set.
Hint - the accuracy_score function in the sklearn.metrics module may be useful.
End of explanation
from sklearn.model_selection import cross_val_score
knn_cv = cross_val_score(knn_clf, train_X, train_y, cv=10)
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format(np.mean(knn_cv), np.std(knn_cv, ddof=1)))
Explanation: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
End of explanation
rf_cv = cross_val_score(rf_clf, train_X, train_y, cv=10)
print('The RF model FoM = {:.4f} +/- {:.4f}'.format(np.mean(rf_cv), np.std(rf_cv, ddof=1)))
Explanation: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
End of explanation
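A cheaper sanity check on the RF numbers (my addition, not required by the problem) is the out-of-bag accuracy estimate that bagging provides for free:
# out-of-bag accuracy estimate; no cross validation required
rf_oob = RandomForestClassifier(n_estimators=50, oob_score=True, n_jobs=-1, random_state=rs)
rf_oob.fit(train_X, train_y)
print('OOB accuracy estimate = {:.4f}'.format(rf_oob.oob_score_))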
for k in [1,10,100]:
knn_cv = cross_val_score(KNeighborsClassifier(n_neighbors=k), train_X, train_y, cv=10)
print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format(k, np.mean(knn_cv), np.std(knn_cv, ddof=1)))
Explanation: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
$N_\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn
$m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn
Pruning criteria - defined stopping criteria for ending continued growth of the tree, there are many choices for this in sklearn (My preference is min_samples_leaf (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)
Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
Before globally optimizing the model, let's develop some intuition for how the tuning parameters affect the final model predictions.
Problem 4a
Determine the 10-fold cross validation accuracy for $k$NN models with $k$ = 1, 10, 100.
How do you expect changing the number of neighbors to affect the results?
End of explanation
for ntree in [1,10,30,100,300]:
rf_cv = cross_val_score(RandomForestClassifier(n_estimators=ntree), train_X, train_y, cv=10)
print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format(ntree, np.mean(rf_cv), np.std(rf_cv, ddof=1)))
Explanation: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
End of explanation
phot_y = np.empty_like(test_y)
phot_gal = np.logical_not(test_X[:,0] - test_X[:,-1] < 0.145)
phot_y[phot_gal] = 'GALAXY'
phot_y[~phot_gal] = 'STAR'
print("The baseline FoM = {:.4f}".format(accuracy_score(test_y, phot_y)))
Explanation: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
End of explanation
rf_clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
rf_clf.fit(train_X, train_y)
test_preds = rf_clf.predict(test_X)
print("The RF model has FoM = {:.4f}".format(accuracy_score(test_y, test_preds)))
Explanation: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
End of explanation
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, test_preds))
Explanation: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
End of explanation
from sklearn.metrics import roc_curve
test_y_int = np.ones_like(test_y, dtype=int)
test_y_int[np.where(test_y == 'GALAXY')] = 0
test_preds_proba = rf_clf.predict_proba(test_X)
fpr, tpr, thresh = roc_curve(test_y_int, test_preds_proba[:,1])
fig, ax = plt.subplots()
ax.plot(fpr, tpr)
ax.set_xlabel('FPR')
ax.set_ylabel('TPR')
ax.set_xlim(2e-3,.2)
ax.set_ylim(0.3,1)
Explanation: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assumming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
End of explanation
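A common single-number summary of the ROC curve (my addition) is the area under it:
from sklearn.metrics import roc_auc_score
print('ROC AUC = {:.4f}'.format(roc_auc_score(test_y_int, test_preds_proba[:, 1])))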
tpr_99_thresh = thresh[np.argmin(np.abs(0.99 - tpr))]
print('This model requires a classification threshold of {:.4f}'.format(tpr_99_thresh))
fpr_at_tpr_99 = fpr[np.argmin(np.abs(0.99 - tpr))]
print('This model misclassifies {:.2f}% of galaxies'.format(fpr_at_tpr_99*100))
Explanation: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
End of explanation
new_data_df = pd.read_hdf("blind_test_set.h5")
Explanation: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
End of explanation
new_X = np.array(new_data_df[feats])
new_y = np.array(new_data_df['class'])
Explanation: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
End of explanation
new_preds = rf_clf.predict(new_X)
print("The model has an accuracy of {:.4f}".format(accuracy_score(new_y, new_preds)))
Explanation: Problem 6b
Calculate the accuracy of the model predictions on the new data.
End of explanation
from sklearn.model_selection import GridSearchCV
grid_results = GridSearchCV(RandomForestClassifier(n_jobs=-1),
{'n_estimators': [30, 100, 300],
'max_features': [1, 3, 7],
'min_samples_leaf': [1,10,30]},
cv = 3)
grid_results.fit(train_X, train_y)
print('The best model has {}'.format(grid_results.best_params_))
Explanation: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit.
Use GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
What are the optimal tuning parameters for the model?
Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.
Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid
End of explanation |
15,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Ridge regression and model selection
Modified from the github repo
Step3: Ridge Regression
Step4: The above plot shows that the Ridge coefficients get larger when we decrease lambda.
Exercises
Exercise 1 Plot the LOO risk and the empirical risk as a function of lambda.
Step5: Exercise 2 Implement and test forward stagewise regression (recall that stagewise and stepwise are different).
Step6: I'll implement a different variant of forward stagewise, where the correlation with the residual is added directly to the corresponding entry of the current beta vector.
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
from sklearn.model_selection import LeaveOneOut  # used by loo_risk() below
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error
%matplotlib inline
plt.style.use('ggplot')
datafolder = "../data/"
def loo_risk(X,y,regmod):
    """
    Construct the leave-one-out square error risk for a regression model
    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar LOO risk
    """
loo = LeaveOneOut()
loo_losses = []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
regmod.fit(X_train,y_train)
y_hat = regmod.predict(X_test)
loss = np.sum((y_hat - y_test)**2)
loo_losses.append(loss)
return np.mean(loo_losses)
def emp_risk(X,y,regmod):
    """
    Return the empirical risk for square error loss
    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar empirical risk
    """
regmod.fit(X,y)
y_hat = regmod.predict(X)
return np.mean((y_hat - y)**2)
# In R, I exported the dataset from package 'ISLR' to a csv file.
df = pd.read_csv(datafolder+'Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
Explanation: Ridge regression and model selection
Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python which is based on the book by James et al. Intro to Statistical Learning.
Loading data
End of explanation
alphas = 10**np.linspace(10,-2,100)*0.5
ridge = Ridge()
coefs = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
coefs.append(ridge.coef_)
ax = plt.gca()
ax.plot(alphas, coefs)
ax.set_xscale('log')
ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis
plt.axis('tight')
plt.xlabel('lambda')
plt.ylabel('weights')
plt.title('Ridge coefficients as a function of the regularization');
Explanation: Ridge Regression
End of explanation
alphas = np.linspace(30,1,100)
rcv = RidgeCV(alphas = alphas, store_cv_values=True,normalize=True)
rcv.fit(X,y)
cv_vals = rcv.cv_values_
LOOr = cv_vals.mean(axis=0)
EMPr = []
for a in alphas:
ridge.set_params(alpha=a)
ridge.fit(scale(X), y)
EMPr.append(emp_risk(X,y,ridge))
plt.plot(alphas,LOOr)
plt.xlabel('lambda')
plt.ylabel('Risk')
plt.title('LOO Risk for Ridge');
plt.show()
plt.plot(alphas,EMPr)
plt.xlabel('lambda')
plt.ylabel('Risk')
plt.title('Emp Risk for Ridge');
plt.show()
Explanation: The above plot shows that the Ridge coefficients get larger when we decrease lambda.
Exercises
Exercise 1 Plot the LOO risk and the empirical risk as a function of lambda.
End of explanation
n,p = X.shape
Xsc = scale(X)
ysc = scale(y)
Explanation: Exercise 2 Implement and test forward stagewise regression (recall that stagewise and stepwise are different).
End of explanation
MSEiter = []
res = ysc
beta = np.zeros(p)
tol = 1e-2
corrmax = 1.
while corrmax > tol:
res_corr = Xsc.T.dot(scale(res)) / n
jmax, corrmax = max(enumerate(np.abs(res_corr)), key=lambda x: x[1])
beta[jmax] = beta[jmax] + res_corr[jmax]
res = ysc - Xsc.dot(beta)
MSE = np.sum(res**2.)
MSEiter.append(MSE)
beta
lm = LinearRegression()
lm.fit(Xsc,ysc)
lm.coef_
Explanation: I'll implement a different variant of forward stagewise, where the correlation with the residual is added directly to the corresponding entry of the current beta vector.
End of explanation |
15,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading, reshaping, visualizing data using pycroscopy
Suhas Somnath, Chris R. Smith and Stephen Jesse
The Center for Nanophase Materials Science and The Institute for Functional Imaging for Materials <br>
Oak Ridge National Laboratory<br>
8/01/2017
Here, we will demonstrate how to load, reshape, and visualize multidimensional imaging datasets. For this example, we will load a three dimensional Band Excitation imaging dataset acquired from an atomic force microscope.
Step1: Load pycroscopy compatible file
For simplicity we will use a dataset that has already been translated from its original data format into a pycroscopy compatible hierarchical data format (HDF5 or H5) file
HDF5 or H5 files
Step2: Inspect the contents of this h5 data file
The file contents are stored in a tree structure, just like files on a contemporary computer. The file contains datagroups (similar to file folders) and datasets (similar to spreadsheets).
There are several datasets in the file and these store
Step3: Accessing datasets and datagroups
Datasets and datagroups can be accessed by specifying the path, just like a webpage or a file in a directory
Step4: The output above shows that the "Raw_Data" dataset is a two dimensional dataset, and has complex value (a +bi) entries at each element in the 2D matrix.
This dataset is contained in a datagroup called "Channel_000" which itself is contained in a datagroup called "Measurement_000"
The datagroup "Channel_000" contains several "members", where these members could be datasets like "Raw_Data" or datagroups like "Channel_000"
Attributes
HDF5 datasets and datagroups can also store metadata such as experimental parameters. These metadata can be text, numbers, small lists of numbers or text etc. These metadata can be very important for understanding the datasets and guide the analysis routines
Step5: In the case of the spectral dataset under investigation, a spectra with a single peak was collected at each spatial location on a two dimensional grid of points. Thus, this dataset has two position dimensions and one spectroscopic dimension (spectra).
In pycroscopy, all spatial dimensions are collapsed to a single dimension and similarly, all spectroscopic dimensions are also collapsed to a single dimension. Thus, the data is stored as a two-dimensional (N x P) matrix with N spatial locations each with P spectroscopic datapoints.
This general and intuitive format allows imaging data from any instrument, measurement scheme, size, or dimensionality to be represented in the same way.
Such an instrument independent data format enables a single set of analysis and processing functions to be reused for multiple image formats or modalities.
Step6: Each main dataset is always accompanied by four ancillary datasets that explain
Step7: Visualizing the position dimensions
Step8: Inspecting the measurement at a single spatial pixel
Step9: Inspecting the spatial distribution of the amplitude at a single frequency
If the frequency is fixed, the spatial distribution would result in a 2D spatial map.
Note that the spatial dimensions are collapsed to a single dimension in all pycroscopy datasets. Thus, the 1D vector at the specified frequency needs to be reshaped back to a 2D map
Step10: Reshaping data back to N dimensions
There are several utility functions in pycroscopy that make it easy to access and reshape datasets. Here we show you how to return your data to the N dimensional form in one easy step.
While this data is a simple example and can be reshaped manually, such reshape operations become especially useful for 5,6,7 or larger dimensional datasets.
Step11: The same data investigation can be performed on the N dimensional dataset
Step12: Closing the HDF5 file after data processing or visualization | Python Code:
# Make sure pycroscopy and wget are installed
!pip install pycroscopy
!pip install -U wget
# Ensure python 3 compatibility
from __future__ import division, print_function, absolute_import
# Import necessary libraries:
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from IPython.display import display
from os import remove
import pycroscopy as px
import wget  # wget.download() is used below to fetch the example file
# set up notebook to show plots within the notebook
% matplotlib inline
Explanation: Loading, reshaping, visualizing data using pycroscopy
Suhas Somnath, Chris R. Smith and Stephen Jesse
The Center for Nanophase Materials Science and The Institute for Functional Imaging for Materials <br>
Oak Ridge National Laboratory<br>
8/01/2017
Here, we will demonstrate how to load, reshape, and visualize multidimensional imaging datasets. For this example, we will load a three dimensional Band Excitation imaging dataset acquired from an atomic force microscope.
End of explanation
# Downloading the example file from the pycroscopy Github project
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/BELine_0004.h5'
h5_path = 'temp.h5'
_ = wget.download(url, h5_path)
print('Working on:\n' + h5_path)
# Open the file in read-only mode
h5_file = h5py.File(h5_path, mode='r')
# Here, h5_file is an active handle to the open file
Explanation: Load pycroscopy compatible file
For simplicity we will use a dataset that has already been translated from its original data format into a pycroscopy compatible hierarchical data format (HDF5 or H5) file
HDF5 or H5 files:
are like smart containers that can store matrices with data, folders to organize these datasets, images, metadata like experimental parameters, links or shortcuts to datasets, etc.
are readily compatible with high-performance computing facilities
scale very efficiently from few kilobytes to several terabytes
can be read and modified using any language including Python, Matlab, C/C++, Java, Fortran, Igor Pro, etc.
Python uses the h5py library to read, write, and access HDF5 files
End of explanation
print('Datasets and datagroups within the file:\n------------------------------------')
px.hdf_utils.print_tree(h5_file)
Explanation: Inspect the contents of this h5 data file
The file contents are stored in a tree structure, just like files on a contemporary computer. The file contains datagroups (similar to file folders) and datasets (similar to spreadsheets).
There are several datasets in the file and these store:
* the actual measurement collected from the experiment,
* spatial location on the sample where each measurement was collected,
* information to support and explain the spectral data collected at each location
* Since pycroscopy stores results from processing and analyses performed on the data in the same file, these datasets and datagroups are present as well
* any other relevant ancillary information
End of explanation
print('Datagroup corresponding to a channel of information:')
print(h5_file['/Measurement_000/Channel_000/'])
print('\nDataset containing the raw data collected from the microscope:')
print(h5_file['/Measurement_000/Channel_000/Raw_Data'])
Explanation: Accessing datasets and datagroups
Datasets and datagroups can be accessed by specifying the path, just like a webpage or a file in a directory
End of explanation
print('\nMetadata or attributes in a datagroup\n------------------------------------')
for key in h5_file['/Measurement_000'].attrs:
print('{} : {}'.format(key, px.hdf_utils.get_attr(h5_file['/Measurement_000'], key)))
Explanation: The output above shows that the "Raw_Data" dataset is a two dimensional dataset, and has complex value (a +bi) entries at each element in the 2D matrix.
This dataset is contained in a datagroup called "Channel_000" which itself is contained in a datagroup called "Measurement_000"
The datagroup "Channel_000" contains several "members", where these members could be datasets like "Raw_Data" or datagroups like "Channel_000"
Attributes
HDF5 datasets and datagroups can also store metadata such as experimental parameters. These metadata can be text, numbers, small lists of numbers or text etc. These metadata can be very important for understanding the datasets and guide the analysis routines
End of explanation
h5_chan_grp = h5_file['/Measurement_000/']
h5_main = h5_chan_grp['Channel_000/Raw_Data']
print('\nThe main dataset:\n------------------------------------')
print(h5_main)
print('Original three dimensional matrix had {} rows and {} columns \
each having {} spectral points'.format(h5_chan_grp.attrs['grid_num_rows'],
h5_chan_grp.attrs['grid_num_cols'],
h5_chan_grp.attrs['num_bins']))
print('Collapsing the position dimensions: ({}x{}, {}) -> ({}, {})'.format(
h5_chan_grp.attrs['grid_num_rows'],
h5_chan_grp.attrs['grid_num_cols'],
h5_chan_grp.attrs['num_bins'],
h5_chan_grp.attrs['grid_num_rows'] * h5_chan_grp.attrs['grid_num_cols'],
h5_chan_grp.attrs['num_bins']))
Explanation: In the case of the spectral dataset under investigation, a spectra with a single peak was collected at each spatial location on a two dimensional grid of points. Thus, this dataset has two position dimensions and one spectroscopic dimension (spectra).
In pycroscopy, all spatial dimensions are collapsed to a single dimension and similarly, all spectroscopic dimensions are also collapsed to a single dimension. Thus, the data is stored as a two-dimensional (N x P) matrix with N spatial locations each with P spectroscopic datapoints.
This general and intuitive format allows imaging data from any instrument, measurement scheme, size, or dimensionality to be represented in the same way.
Such an instrument independent data format enables a single set of analysis and processing functions to be reused for multiple image formats or modalities.
End of explanation
print('\nThe ancillary datasets:\n------------------------------------')
print(h5_file['/Measurement_000/Channel_000/Position_Indices'])
print(h5_file['/Measurement_000/Channel_000/Position_Values'])
print(h5_file['/Measurement_000/Channel_000/Spectroscopic_Indices'])
print(h5_file['/Measurement_000/Channel_000/Spectroscopic_Values'])
print('\nSpatial dimensions:', px.hdf_utils.get_attr(
h5_file['/Measurement_000/Channel_000/Position_Values'], 'labels'))
print('Spectroscopic dimensions:', px.hdf_utils.get_attr(
h5_file['/Measurement_000/Channel_000/Spectroscopic_Values'], 'labels'))
Explanation: Each main dataset is always accompanied by four ancillary datasets that explain:
* the position and spectroscopic value of any given element in the dataset
* the original dimensionality of the dataset
* how to reshape the data back to its N dimensional form
In the case of the 3d dataset under investigation, the positions will be arranged as row0-col0, row0-col1.... row0-colN, row1-col0....
The spectroscopic information remains unchanged.
End of explanation
fig, axis = plt.subplots(figsize=(6,6))
axis.plot(h5_file['/Measurement_000/Channel_000/Position_Indices'][:, 0],
'orange', label='column')
axis.plot(h5_file['/Measurement_000/Channel_000/Position_Indices'][:, 1],
'black', label='row')
axis.legend()
axis.set_title('Position Indices');
Explanation: Visualizing the position dimensions
End of explanation
# specify a pixel index of interest
pixel_ind = 128
# ensuring that this index is within the bounds of the dataset
pixel_ind = max(0, min(int(pixel_ind), h5_main.shape[0]))
# Extracting the frequency vector (x-axis) to plot the spectra against
freq_vec = h5_file['/Measurement_000/Channel_000/Bin_Frequencies'][()] * 1E-3
fig, axis = plt.subplots(figsize=(6,6))
axis.plot(freq_vec, np.abs(h5_main[pixel_ind]))
axis.set_xlabel('Frequency (kHz)')
axis.set_ylabel('Amplitude (a. u.)')
axis.set_title('Spectra at position {}'.format(pixel_ind));
Explanation: Inspecting the measurement at a single spatial pixel:
End of explanation
# specify a frequency index of interest
freq_ind = 40
# ensuring that this index is within the bounds of the dataset
freq_ind = max(0, min(int(freq_ind), h5_main.shape[1]))
# extracting the position data (1D) at the specified frequency index
data_vec = np.abs(h5_main[:, freq_ind])
# Constructing the 2D spatial map from the 1D vector:
spat_map = np.reshape(data_vec, (h5_chan_grp.attrs['grid_num_rows'],
h5_chan_grp.attrs['grid_num_cols']))
fig, axis = plt.subplots(figsize=(6,6))
axis.imshow(spat_map, cmap='inferno')
axis.set_xlabel('Columns')
axis.set_ylabel('Rows')
freq_vec = h5_file['/Measurement_000/Channel_000/Bin_Frequencies'][()] * 1E-3
axis.set_title('Amplitude at frequency {} kHz '.format(np.round(freq_vec[freq_ind], 2)));
Explanation: Inspecting the spatial distribution of the amplitude at a single frequency
If the frequency is fixed, the spatial distribution would result in a 2D spatial map.
Note that the spatial dimensions are collapsed to a single dimension in all pycroscopy datasets. Thus, the 1D vector at the specified frequency needs to be reshaped back to a 2D map
End of explanation
ndim_data, success = px.hdf_utils.reshape_to_Ndims(h5_main)
if not success:
print('There was a problem automatically reshaping the dataset. \
Attempting to reshape manually')
ndim_data = np.reshape(h5_main[()], (h5_chan_grp.attrs['grid_num_rows'],
h5_chan_grp.attrs['grid_num_cols'],
h5_chan_grp.attrs['num_bins']))
else:
print('Collapsed dataset originally of shape: ', h5_main.shape)
print('Reshaped dataset of shape: ', ndim_data.shape)
Explanation: Reshaping data back to N dimensions
There are several utility functions in pycroscopy that make it easy to access and reshape datasets. Here we show you how to return your data to the N dimensional form in one easy step.
While this data is a simple example and can be reshaped manually, such reshape operations become especially useful for 5,6,7 or larger dimensional datasets.
End of explanation
# specify a frequency index of interest
freq_ind = 40
# ensuring that this index is within the bounds of the dataset
freq_ind = max(0, min(int(freq_ind), h5_main.shape[1]))
# Constructing the 2D spatial map from the 3D dataset
spat_map = np.abs(ndim_data[:, :, freq_ind])
fig, axis = plt.subplots(figsize=(6,6))
axis.imshow(spat_map, cmap='inferno')
axis.set_xlabel('Columns')
axis.set_ylabel('Rows')
freq_vec = h5_file['/Measurement_000/Channel_000/Bin_Frequencies'][()] * 1E-3
axis.set_title('Amplitude at frequency {} kHz '.format(np.round(freq_vec[freq_ind], 2)));
Explanation: The same data investigation can be performed on the N dimensional dataset:
Here we will plot the spatial maps of the sample at a given frequency again. The reshape operation is no longer necessary and we get the same spatial map again.
End of explanation
h5_file.close()
# Removing the temporary data file:
remove(h5_path)
Explanation: Closing the HDF5 file after data processing or visualization
End of explanation |
15,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Properties of Rectangular Waveguide
Introduction
This example demonstrates how to use scikit-rf to calculate some properties of rectangular waveguide. For more information regarding the theoretical basis for these calculations, see the References.
Object Creation
This first section imports necessary modules and creates several RectangularWaveguide objects for some standard waveguide bands.
Step1: Conductor Loss
Step2: Phase and Group Velocity
Step3: Propagation Constant | Python Code:
%matplotlib inline
import skrf as rf
rf.stylely()
# imports
from scipy.constants import mil,c
from skrf.media import RectangularWaveguide, Freespace
from skrf.frequency import Frequency
import matplotlib.pyplot as plt
import numpy as np
# plot formatting
plt.rcParams['lines.linewidth'] = 2
# create frequency objects for standard bands
f_wr5p1 = Frequency(140,220,1001, 'ghz')
f_wr3p4 = Frequency(220,330,1001, 'ghz')
f_wr2p2 = Frequency(330,500,1001, 'ghz')
f_wr1p5 = Frequency(500,750,1001, 'ghz')
f_wr1 = Frequency(750,1100,1001, 'ghz')
# create rectangular waveguide objects
wr5p1 = RectangularWaveguide(f_wr5p1.copy(), a=51*mil, b=25.5*mil, rho = 'au')
wr3p4 = RectangularWaveguide(f_wr3p4.copy(), a=34*mil, b=17*mil, rho = 'au')
wr2p2 = RectangularWaveguide(f_wr2p2.copy(), a=22*mil, b=11*mil, rho = 'au')
wr1p5 = RectangularWaveguide(f_wr1p5.copy(), a=15*mil, b=7.5*mil, rho = 'au')
wr1 = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil, rho = 'au')
# add names to waveguide objects for use in plot legends
wr5p1.name = 'WR-5.1'
wr3p4.name = 'WR-3.4'
wr2p2.name = 'WR-2.2'
wr1p5.name = 'WR-1.5'
wr1.name = 'WR-1.0'
# create a list to iterate through
wg_list = [wr5p1, wr3p4,wr2p2,wr1p5,wr1]
# creat a freespace object too
freespace = Freespace(Frequency(125,1100, 1001))
freespace.name = 'Free Space'
Explanation: Properties of Rectangular Waveguide
Introduction
This example demonstrates how to use scikit-rf to calculate some properties of rectangular waveguide. For more information regarding the theoretical basis for these calculations, see the References.
Object Creation
This first section imports necessary modules and creates several RectangularWaveguide objects for some standard waveguide bands.
End of explanation
fig, ax = plt.subplots()
for wg in wg_list:
wg.frequency.plot(rf.np_2_db(wg.alpha), label=wg.name)
ax.legend()
ax.set_xlabel('Frequency(GHz)')
ax.set_ylabel('Loss (dB/m)')
ax.set_title('Loss in Rectangular Waveguide (Au)');
fig, ax = plt.subplots()
resistivity_list = np.linspace(1,10,5)*1e-8 # ohm meter
for rho in resistivity_list:
wg = RectangularWaveguide(f_wr1.copy(), a=10*mil, b=5*mil,
rho = rho)
wg.frequency.plot(rf.np_2_db(wg.alpha),label=r'$ \rho $=%.e$ \Omega m$'%rho )
ax.legend()
ax.set_xlabel('Frequency(GHz)')
ax.set_ylabel('Loss (dB/m)')
ax.set_title('Loss vs. Resistivity in\nWR-1.0 Rectangular Waveguide');
Explanation: Conductor Loss
End of explanation
fig, ax = plt.subplots()
for wg in wg_list:
wg.frequency.plot(100*wg.v_p.real/c, label=wg.name )
ax.legend()
ax.set_ylim(50,200)
ax.set_xlabel('Frequency(GHz)')
ax.set_ylabel('Phase Velocity (\%c)')
ax.set_title('Phase Velocity in Rectangular Waveguide');
fig, ax = plt.subplots()
for wg in wg_list:
plt.plot(wg.frequency.f_scaled[1:],
100/c*np.diff(wg.frequency.w)/np.diff(wg.beta),
label=wg.name )
ax.legend()
ax.set_ylim(50,100)
ax.set_xlabel('Frequency(GHz)')
ax.set_ylabel('Group Velocity (\%c)')
ax.set_title('Group Velocity in Rectangular Waveguide');
Explanation: Phase and Group Velocity
End of explanation
fig, ax = plt.subplots()
for wg in wg_list+[freespace]:
wg.frequency.plot(wg.beta, label=wg.name )
ax.legend()
ax.set_xlabel('Frequency(GHz)')
ax.set_ylabel('Propagation Constant (rad/m)')
ax.set_title('Propagation Constant \nin Rectangular Waveguide');
ax.semilogy();
Explanation: Propagation Constant
End of explanation |
15,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms.md
Describe Approach
Algorithm
Description
Step3: 1. Generate simulated data for both settings
Step5: 2. Plotting code for simulated data
Step7: 3. Algorithm code
Step9: 4. Qualitative evaluation (eg, figures) code per trial
Step11: 5. Quantitative evaluation code per trial
Step13: 6. Qualitative evaluation (eg, figures) over all experiments
Step16: 7. Quantitative evaluation code over all experiments
Step17: Run Experiments
Simulated Data
Simulation of success
I expect the simulated data to look like two highly concentrated classes that are clearly separated.
Step18: My simulation data looks as I expected it to.
Simulation of failure
I expect my simulation data to contain two classes overlapping, one with a tight variance and the other with a large variance.
Step19: My simulation data looks as I expected it to.
Simulated Analysis
Simulation of success
I expect the algorithm to easily identify the two clusters as their means are considerably different and the two populations are non-overlapping in $\mathbb{R}^1$.
Step21: Happily, I observe that my algorithm performed as expected on this data.
Simulation of failure
I expect the algorithm to fail to identify the two clusters as their means are the same; thus, despite their different variances, they overlap heavily in $\mathbb{R}^1$.
Step22: Not surprisingly, the k-means algorithm performed poorly in this scenario.
Demonstrate Some Real Data Utility
1. Describe in as much detail as possible, including number of samples, the space each sample lives in, and any idiosyncrasies (eg, missing data, NaNs, etc.), for both motivating dataset and previously tested datasets
The data being processed is human subject height and weight data. All of the entries for both height and weight live in $\mathbb{R}^{1,+}$. The weight is being thresholded at the 50th percentile and it is being used as a label which we are trying to recover from clustering.
Previously tested data (and simulated data above) are similar in that real-valued numbers are being mapped to binary labels.
2. Plot raw data
Step23: 3. Predict performance of the algorithm
Looking at the real data I notice there is a lot of overlap between the two clusters, leading me to suspect that the algorithm will not perform particularly well.
4. Run exact same code on real data as ran on simulations | Python Code:
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
# Fix random seed
np.random.seed(123456789)
Explanation: Algorithms.md
Describe Approach
Algorithm
Description: Groups scalars into $k$ clusters. Initializes centers as a random selection of $k$ datapoints within the dataset, assigns data into the cluster with the nearest center. Recomputes cluster centers with current group membership. Reassigns data to clusters based on the nearest center. Repeats the last two steps until no changes are made between iterations.
Inputs:
1. $x_i \in \mathbb{R}^d$ for $i \in [n]=\{1,\ldots, n\}$, organized into a matrix of datapoints with rows being observations and columns being dimensions, $A~\in~\mathbb{R}^{n \times d}$.
2. $k \in \mathbb{Z}_+$, the number of clusters to group data into.
Outputs: Vector of cluster assignments for each observation, $v \in [k]^n$.
function KMEANS1D ($A$,$k$)
1. Initialize $v = $zeros$(n, 1)$ < create empty cluster vector assignment
2. Initialize $v' = $ones$(n, 1)$ < create another empty cluster vector assignment
3. Get $r = $int$($rand$(k) \times n)$ < Pick $k$ random points to set as cluster centers
4. Set $c = A(r, :)$ < Get initial cluster centers
5. while $v'$ is not $v$ < while last two cluster assignments aren't the same
1. $v = v'$ < save last cluster assignment
1. for $i = 1,...,n$ < for each observation
1. Set $d = $zeros$(k, 1)$ < set distances from point to centers as zero
2. for $j = 1,...,k$ < for each cluster
1. $d(j) = $distance$(A(i, :), c(j, :))$ < compute distance
3. endfor
4. $v'(i) = $where$($min$(d))$ < assign current observation to nearest cluster
2. endfor
3. for $j = 1,...,k$
1. $c(j, :) = $mean$(A($where$(v' == j), :))$ < update centers with mean of cluster
4. endfor
6. endwhile
endfunction
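For illustration, a minimal NumPy sketch of the KMEANS1D pseudocode above (an assumption-laden sketch, not this notebook's implementation — the kmeans_1d function defined later wraps scikit-learn's KMeans instead):
import numpy as np
def kmeans1d_sketch(A, k, max_iter=100):
    # A: 1-D array of scalars, k: number of clusters (illustrative only)
    A = np.asarray(A, dtype=float)
    n = len(A)
    rng = np.random.default_rng(0)
    c = A[rng.choice(n, size=k, replace=False)].copy()  # random points as initial centers
    v = np.full(n, -1)                                   # previous assignment
    for _ in range(max_iter):
        d = np.abs(A[:, None] - c[None, :])              # distance from each point to each center
        v_new = d.argmin(axis=1)                         # assign each point to nearest center
        if np.array_equal(v_new, v):                     # stop when assignments stop changing
            break
        v = v_new
        for j in range(k):                               # update centers with cluster means
            if np.any(v == j):
                c[j] = A[v == j].mean()
    return v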
Simulation
Data will be sampled from a Uniform Distribution (generated by numpy) with a scale and offset determined by the generating class.
Success
1-D vector of 20 observations, two clusters with
10 of them class 0 with values between 0 and 1, and
10 of them class 1 with values between 4 and 5
We expect k-means to do well in this situation because the means of the two classes are different.
Failure
1-D vector of 20 observations, two clusters with
10 of them class 0 with values between -1 and 1, and
10 of them class 1 with values between -5 and 5
We expect k-means to fail in this case because the means of the two classes are the same, though the distributions from which they're sampled has a different variance.
Analysis
Qualitative visualizations
We will use a scatter plot in which the clusters are shown as the colour of the points belonging to them, with a legend identifying cluster ID. We will plot, next to this, the true cluster memberships.
Quantitative score
We will use the truth function (i.e. sum of 0-1 loss divided by N), which returns the sum of 1s if values match and 0s if they do not, divided by the total number of points. Because cluster value is arbitrary, we will first sort the clusters to be in ascending order based on their center.
Summary plot
We will use a histogram of the performance (i.e. portion of the points labeled correctly) to summarize the quantitative performance over a series of simulations.
P-value & test statistic
We will create a null distribution by running a permutation test (i.e. randomly assigning cluster labels to points and recording the distribution of performance as defined by the misclassification rate), and the p-value will be reported as 1 minus the fraction of times the algorithm performs better than points in the null distribution.
Write Code
End of explanation
def simulate_s(n):
    """
    Simulates data with different mean but same variance
    inp:
    @n: number of samples
    outp:
    @A: data vector
    @y: true labels
    """
A = np.random.random(n)
A[int(n/2):] += 4
y = np.repeat(0, n)
y[int(n/2):] += 1
return (A, y)
def simulate_f(n):
    """
    Simulates data with same mean but different variance
    inp:
    @n: number of samples
    outp:
    @A: data vector
    @y: true labels
    """
A = np.append(np.random.random(int(np.floor(n/2)))*10-5, np.random.random(int(np.ceil(n/2)))*2-1)
y = np.repeat(0, n)
y[int(np.ceil(n/2)):] += 1
return (A, y)
Explanation: 1. Generate simulated data for both settings
End of explanation
def plotsimdata(A, y):
    """
    Plots simulated data vector coloured by label
    inp:
    @A: data vector
    @y: true labels
    """
plt.figure(figsize=(4, 5))
plt.scatter(np.zeros(sum(y==0)), A[y==0], color='blue')
plt.scatter(np.zeros(sum(y==1)), A[y==1], color='red')
plt.title('Truth (Simulation 1)')
plt.ylabel('Value')
plt.yticks([np.floor(np.min(A)), np.ceil(np.max(A))])
plt.xticks([0])
plt.ylim([np.floor(np.min(A)), np.ceil(np.max(A))])
plt.legend(['Class 0', 'Class 1'], bbox_to_anchor=(1.5, 1))
plt.show()
Explanation: 2. Plotting code for simulated data
End of explanation
def kmeans_1d(dat, k):
    """
    Clusters entries in dat into k clusters
    inp:
    @dat: data matrix
    @k: number of classes
    outp:
    @c: estimated labels
    """
classif = KMeans(n_clusters=k, random_state=0).fit(dat.reshape(-1,1))
c = classif.labels_
cent = classif.cluster_centers_
a = sorted(range(len(cent)), key=lambda k: cent[k])
# Super janky, toggles labels if permutated in 2-class case
c = np.array([0 if a[0] == item else a[0] for item in c])
return c
Explanation: 3. Algorithm code
End of explanation
def plotclusters(A, y, c):
    """
    Plots data coloured by both true and estimated label
    inp:
    @A: data vector
    @y: true labels
    @c: estimated labels
    """
plt.figure(figsize=(4, 5))
plt.scatter(np.zeros(sum(y==0)), A[y==0], color='blue')
plt.scatter(np.zeros(sum(y==1)), A[y==1], color='red')
plt.ylabel('Value')
plt.yticks([np.floor(np.min(A)), np.ceil(np.max(A))])
plt.xticks([0])
plt.ylim([np.floor(np.min(A)), np.ceil(np.max(A))])
plt.scatter(np.ones(sum(c==0)), A[c==0], color='blue')
plt.scatter(np.ones(sum(c==1)), A[c==1], color='red')
plt.legend(['Class 0', 'Class 1'], bbox_to_anchor=(2, 1))
plt.title('Portion successfully labeled: %.3f' % truth(y,c))
plt.yticks([np.floor(np.min(A)), np.ceil(np.max(A))])
plt.xticks([0, 1], ['Truth', 'Estimated'])
plt.xlim([-.5, 1.5])
plt.ylim([np.floor(np.min(A)), np.ceil(np.max(A))])
plt.show()
Explanation: 4. Qualitative evaluation (eg, figures) code per trial
End of explanation
def truth(y, c):
    """
    Computes the percent of successful label assignments
    inp:
    @y: true labels
    @c: estimated labels
    outp:
    @p: percent correct labels (ranging from [0, 1])
    """
p = 1.0*(y == c)
p = np.sum(p)/len(y)
return p
Explanation: 5. Quantitative evaluation code per trial
End of explanation
def plotmeans(true, est):
    """
    Plots the true and estimated cluster means for each simulation
    inp:
    @true: true cluster means from simulations as a list of lists
    @est: estimated cluster means from simulations as a list of lists
    """
plt.figure(figsize=(4, 5))
xs = [0, 0.5]
cols = ['blue', 'red']
for idx, sim in enumerate(true):
dat0 = [true[idx][0], est[idx][0]]
dat1 = [true[idx][1], est[idx][1]]
plt.plot(xs, dat0, 'bo--')
plt.plot(xs, dat1, 'ro--')
plt.title('Qualitative Evaluation')
plt.ylabel('Value')
plt.yticks([np.floor(np.min([true, est])), np.ceil(np.max([true, est]))])
plt.xlim([xs[0]-0.2, xs[1]+0.2])
plt.xticks(xs, ['Truth', 'Estimated'])
plt.ylim([np.floor(np.min([true, est])), np.ceil(np.max([true, est]))])
plt.show()
Explanation: 6. Qualitative evaluation (eg, figures) over all experiments
End of explanation
def plothist(score, n_score, pval, deg):
    """
    Plots the histogram of the null distribution and shows the estimation performance
    inp:
    @score: score of estimation
    @n_score: scores from null distribution
    @pval: calculated pvalue
    @deg: degree of confidence
    """
plt.hist(n_score, bins=15, color='black')
plt.hold(True)
plt.vlines(score, 0, len(n_score)/4, color='red')
plt.xlim([-0.1, 1.1])
plt.title('P-value <= %.*f' % (abs(int(deg)), pval))
plt.xlabel('performance')
plt.ylabel('count')
plt.legend(['estimation', 'null distribution'], bbox_to_anchor=(1.6, 1))
plt.show()
def get_pval(y, c, d=1000):
    """
    Samples null distribution given a true set of labels and computes pval
    inp:
    @y: true labels
    @c: estimated labels
    @d: number of times to sample from null
    outp:
    @pval: p value
    @deg: degree of confidence in answer (in the form of 10^-${deg})
    """
score = truth(y, c)
n_score = []
for i in range(d):
t = np.random.permutation(y)
n_score += [truth(y, t)]
sums = 1.0*(score < n_score)
pval = sum(sums)/len(sums)
deg = -np.log10(d)
plothist(score, n_score, pval, deg)
return pval, deg
# A, y = simulate_f(800)
# c = my_kmeans_1d(A, 2)
# pval, deg = get_pval(y, c, 1000)
Explanation: 7. Quantitative evaluation code over all experiments
End of explanation
A, y = simulate_s(50)
plotsimdata(A, y)
Explanation: Run Experiments
Simulated Data
Simulation of success
I expect the simulated data to look like two highly concentrated classes that are clearly separated.
End of explanation
A, y = simulate_f(50)
plotsimdata(A, y)
Explanation: My simulation data looks as I expected it to.
Simulation of failure
I expect my simulation data to contain two classes overlapping, one with a tight variance and the other with a large variance.
End of explanation
clusters = []
true_centers = []
est_centers = []
for idx in range(10):
A, y = simulate_s(50)
c = kmeans_1d(A, 2)
plotclusters(A, y, c)
true_centers += [[np.mean(A[np.where(y==0)]), np.mean(A[np.where(y==1)])]]
est_centers += [[np.mean(A[np.where(c==0)]), np.mean(A[np.where(c==1)])]]
clusters += [[c]]
plotmeans(true_centers, est_centers)
med_c = np.median(clusters[0], 0)
pval, conf = get_pval(y, med_c)
Explanation: My simulation data looks as I expected it to.
Simulated Analysis
Simulation of success
I expect the algorithm to easily identify the two clusters as their means are considerably different and the two populations are non-overlapping in $\mathbb{R}^1$.
End of explanation
def replace_nans(est, val):
    """
    Replaces nan values with whatever value you want - necessary if a cluster is empty
    inp:
    @est: estimated clusters
    @val: value to replace
    """
    for idx, cent in enumerate(est):
        # check both cluster means; the original elif re-tested cent[1] and never repaired cent[0]
        if np.isnan(cent[0]):
            est[idx][0] = val
        if np.isnan(cent[1]):
            est[idx][1] = val
return est
clusters = []
true_centers = []
est_centers = []
for idx in range(10):
A, y = simulate_f(50)
c = kmeans_1d(A, 2)
plotclusters(A, y, c)
true_centers += [[np.mean(A[np.where(y==0)]), np.mean(A[np.where(y==1)])]]
est_centers += [[np.mean(A[np.where(c==0)]), np.mean(A[np.where(c==1)])]]
clusters += [[c]]
replace_nans(est_centers, 6)
plotmeans(true_centers, est_centers)
mean_c = np.mean(clusters[0], 0)
pval, conf = get_pval(y, mean_c)
Explanation: Happily, I observe that my algorithm performed as expected on this data.
Simulation of failure
I expect the algorithm to fail to identify the two clusters as their means are the same; thus, despite their different variances, they overlap heavily in $\mathbb{R}^1$.
End of explanation
import csv
f = open('height_weight.csv')
A = []
y = []
reader = csv.reader(f)
for line in reader:
A += [float(line[0])]
y += [float(line[1])]
y = 1*(y > np.median(y))
A = np.array(A)
y = np.array(y)
plotsimdata(A, y)
Explanation: Not surprisingly, the k-means algorithm performed poorly in this scenario.
Demonstrate Some Real Data Utility
1. Describe in as much detail as possible, including number of samples, the space each sample lives in, and any idiosyncrasies (eg, missing data, NaNs, etc.), for both motivating dataset and previously tested datasets
The data being processed is human subject height and weight data. All of the entries for both height and weight live in $\mathbb{R}^{1,+}$. The weight is being thresholded at the 50th percentile and it is being used as a label which we are trying to recover from clustering.
Previously tested data (and simulated data above) are similar in that real-valued numbers are being mapped to binary labels.
2. Plot raw data
End of explanation
c = kmeans_1d(A, 2)
plotclusters(A, y, c)
true_centers = [[np.mean(A[np.where(y==0)]), np.mean(A[np.where(y==1)])]]
est_centers = [[np.mean(A[np.where(c==0)]), np.mean(A[np.where(c==1)])]]
clusters = [[c]]
replace_nans(est_centers, 0)
plotmeans(true_centers, est_centers)
med_c = np.median(clusters[0], 0)
pval, conf = get_pval(y, med_c)
Explanation: 3. Predict performance of the algorithm
Looking at the real data I notice there is a lot of overlap between the two clusters, leading me to suspect that the algorithm will not perform particularly well.
4. Run exact same code on real data as ran on simulations
End of explanation |
15,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Moving the sensor away from the ESP8266
The previous experiment showed that adding a piece of foam core board between the ESP8266 board and SHT30 temperature sensor board reduced spread in temperature readings, but those values were still high compared to the room thermostat. The burning question
Step1: Wow. Moving the sensor away from the ESP8266 drops the temperature reading over 10F, producing values that are consistent with the room thermostat. It also appears to reduce jitter in the readings.
How are humidity readings affected? | Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (12, 5)
import pandas as pd
df = pd.read_csv('movesensors.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0])
per_sensor_f = df.pivot(index='time', columns='mac', values='f')
downsampled_f = per_sensor_f.resample('2T').mean()
downsampled_f.plot();
Explanation: Moving the sensor away from the ESP8266
The previous experiment showed that adding a piece of foam core board between the ESP8266 board and SHT30 temperature sensor board reduced spread in temperature readings, but those values were still high compared to the room thermostat. The burning question: How much is heat from the ESP8266 board influencing the sensor?
For this experiment, I chose two probes (:A0 and:2B) that seemed to be behaving very closely with respect to temperature. :2B will be the control.
Initial conditions were a 75F reading on the room thermostat, and 76F on a digital cooking thermometer placed atop the thermostat. Moving the tip of the cooking thermometer to the back of the CPU board showed a temperature of 86F. Something on the ESP8266 board is putting off significant heat.
Just before noon, I pulled the sensor board off of the :A0 ESP8266, and reattached it using 4" jumper wires. (I was concerned about this being a fragile configuration, but the jumpers attached firmly.) That changed the configuration to
(The SHT30 is an I2C device, so only four jumpers are needed for a connection.)
The code below is explained in the InitialTemperatureValues notebook.
End of explanation
per_sensor_h = df.pivot(index='time', columns='mac', values='h')
downsampled_h = per_sensor_h.resample('2T').mean()
downsampled_h.plot();
Explanation: Wow. Moving the sensor away from the ESP8266 drops the temperature reading over 10F, producing values that are consistent with the room thermostat. It also appears to reduce jitter in the readings.
How are humidity readings affected?
End of explanation |
15,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Selenium to Test User Interactions
Where were we at the end of the last chapter? Let’s rerun the test and find out
Step1: Did you try it, and get an error saying Problem loading page or Unable to connect? So did I. It’s because we forgot to spin up the dev server first using manage.py runserver. Do that, and you’ll get the failure message we’re after.
One of the great things about TDD is that you never have to worry about forgetting what to do next—just rerun your tests and they will tell you what you need to work on.
“Finish the test”, it says, so let’s do just that! Open up functional_tests.py and we’ll extend our FT
Step2: We’re using several of the methods that Selenium provides to examine web pages
Step3: Decoding that, the test is saying it can’t find an <h1> element on the page. Let’s see what we can do to add that to the HTML of our home page.
Big changes to a functional test are usually a good thing to commit on their own. I failed to do so in my first draft, and I regretted it later when I changed my mind and had the change mixed up with a bunch of others. The more atomic your commits, the better
Step4: Great! We’ll start by taking our HTML string and putting it into its own file. Create a directory called lists/templates to keep templates in, and then open a file at lists/templates/home.html, to which we’ll transfer our HTML
Step5: Now to change our views
Step6: Instead of building our own HttpResponse, we now use the Django render function. It takes the request as its first parameter (for reasons we’ll go into later) and the name of the template to render. Django will automatically search folders called templates inside any of your apps' directories. Then it builds an HttpResponse for you, based on the content of the template.
Templates are a very powerful feature of Django’s, and their main strength consists of substituting Python variables into HTML text. We’re not using this feature yet, but we will in future chapters. That’s why we use render and (later) render_to_string rather than, say, manually reading the file from disk with the built-in open.
Let's see if it works
Step7: Another chance to analyse a traceback
Step8: You can see there’s lots of apps already in there by default. We just need to add ours, lists, to the bottom of the list. Don’t forget the trailing comma—it may not be required, but one day you’ll be really annoyed when you forget it and Python concatenates two strings on different lines…
Now we can try running the tests again
Step9: Darn, not quite.
Depending on whether your text editor insists on adding newlines to the end of files, you may not even see this error. If so, you can safely ignore the next bit, and skip straight to where you can see the listing says OK.
But it did get further! It seems it’s managed to find our template, but the last of the three assertions is failing. Apparently there’s something wrong at the end of the output. I had to do a little print(repr(response.content)) to debug this, but it turns out that the switch to templates has introduced an additional newline (\n) at the end. We can get them to pass like this
Step10: Our refactor of the code is now complete, and the tests mean we’re happy that behaviour is preserved. Now we can change the tests so that they’re no longer testing constants; instead, they should just check that we’re rendering the right template. Another Django helper function called render_to_string is our friend here
Step11: We use .decode() to convert the response.content bytes into a Python unicode string, which allows us to compare strings with strings, instead of bytes with bytes as we did earlier.
The main point, though, is that instead of testing constants we’re testing our implementation. Great!
Django has a test client with tools for testing templates, which we’ll use in later chapters. For now we’ll use the low-level tools to make sure we’re comfortable with how everything works. No magic!
On Refactoring
That was an absolutely trivial example of refactoring. But, as Kent Beck puts it in Test-Driven Development
Step12: Let’s see if our functional test likes it a little better | Python Code:
%cd ../examples/superlists/
!python3 functional_tests.py
Explanation: Using Selenium to Test User Interactions
Where were we at the end of the last chapter? Let’s rerun the test and find out:
End of explanation
%%writefile functional_tests.py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import unittest
class NewVisitorTest(unittest.TestCase):
def setUp(self):
self.browser = webdriver.Firefox()
self.browser.implicitly_wait(3)
def tearDown(self):
self.browser.quit()
def test_can_start_a_list_and_retrieve_it_later(self):
# Edith has heard about a cool new online to-do app. She goes
# to check out its homepage
self.browser.get('http://localhost:8000')
# She notices the page title and header mention to-do lists
self.assertIn('To-Do', self.browser.title)
header_text = self.browser.find_element_by_tag_name('h1').text
self.assertIn('To-Do', header_text)
# She is invited to enter a to-do item straight away
inputbox = self.browser.find_element_by_id('id_new_item')
self.assertEqual(
inputbox.get_attribute('placeholder'),
'Enter a to-do item'
)
# She types "Buy peacock feathers" into a text box (Edith's hobby
# is tying fly-fishing lures)
inputbox.send_keys('Buy peacock feathers')
# When she hits enter, the page updates, and now the page lists
# "1: Buy peacock feathers" as an item in a to-do list table
inputbox.send_keys(Keys.ENTER)
table = self.browser.find_element_by_id('id_list_table')
rows = table.find_elements_by_tag_name('tr')
self.assertTrue(
any(row.text == '1: Buy peacock feathers' for row in rows)
)
# There is still a text box inviting her to add another item. She
# enters "Use peacock feathers to make a fly" (Edith is very
# methodical)
self.fail('Finish the test!')
# The page updates again, and now shows both items on her list
# Edith wonders whether the site will remember her list. Then she sees
# that the site has generated a unique URL for her -- there is some
# explanatory text to that effect.
# She visits that URL - her to-do list is still there.
# Satisfied, she goes back to sleep
if __name__ == '__main__':
unittest.main(warnings='ignore')
Explanation: Did you try it, and get an error saying Problem loading page or Unable to connect? So did I. It’s because we forgot to spin up the dev server first using manage.py runserver. Do that, and you’ll get the failure message we’re after.
One of the great things about TDD is that you never have to worry about forgetting what to do next—just rerun your tests and they will tell you what you need to work on.
“Finish the test”, it says, so let’s do just that! Open up functional_tests.py and we’ll extend our FT:
End of explanation
!python3 functional_tests.py
Explanation: We’re using several of the methods that Selenium provides to examine web pages: find_element_by_tag_name, find_element_by_id, and find_elements_by_tag_name (notice the extra s, which means it will return several elements rather than just one). We also use send_keys, which is Selenium’s way of typing into input elements. You’ll also see the Keys class (don’t forget to import it), which lets us send special keys like Enter, but also modifiers like Ctrl.
Watch out for the difference between the Selenium find_element_by... and find_elements_by... functions. One returns an element, and raises an exception if it can’t find it, whereas the other returns a list, which may be empty.
Also, just look at that any function. It’s a little-known Python built-in. I don’t even need to explain it, do I? Python is such a joy.
Although, if you’re one of my readers who doesn’t know Python, what’s happening inside the any is a generator expression, which is like a list comprehension but awesomer. You need to read up on this. If you Google it, you’ll find Guido himself explaining it nicely. Come back and tell me that’s not pure joy!
Let’s see how it gets on:
End of explanation
!python3 manage.py test
Explanation: Decoding that, the test is saying it can’t find an <h1> element on the page. Let’s see what we can do to add that to the HTML of our home page.
Big changes to a functional test are usually a good thing to commit on their own. I failed to do so in my first draft, and I regretted it later when I changed my mind and had the change mixed up with a bunch of others. The more atomic your commits, the better:
$ git diff # should show changes to functional_tests.py
$ git commit -am "Functional test now checks we can input a to-do item"
The "Don't Test Constants" Rule, and Templates to the Rescue
Let’s take a look at our unit tests, lists/tests.py. Currently we’re looking for specific HTML strings, but that’s not a particularly efficient way of testing HTML. In general, one of the rules of unit testing is Don’t test constants, and testing HTML as text is a lot like testing a constant.
In other words, if you have some code that says:
wibble = 3
There’s not much point in a test that says:
from myprogram import wibble
assert wibble == 3
Unit tests are really about testing logic, flow control, and configuration. Making assertions about exactly what sequence of characters we have in our HTML strings isn’t doing that.
What’s more, mangling raw strings in Python really isn’t a great way of dealing with HTML. There’s a much better solution, which is to use templates. Quite apart from anything else, if we can keep HTML to one side in a file whose name ends in .html, we’ll get better syntax highlighting! There are lots of Python templating frameworks out there, and Django has its own which works very well. Let’s use that.
Refactoring to Use a Template
What we want to do now is make our view function return exactly the same HTML, but just using a different process. That’s a refactor—when we try to improve the code without changing its functionality.
That last bit is really important. If you try and add new functionality at the same time as refactoring, you’re much more likely to run into trouble. Refactoring is actually a whole discipline in itself, and it even has a reference book: Martin Fowler’s Refactoring.
The first rule is that you can’t refactor without tests. Thankfully, we’re doing TDD, so we’re way ahead of the game. Let’s check our tests pass; they will be what makes sure that our refactoring is behaviour preserving:
End of explanation
!mkdir lists/templates
%%writefile lists/templates/home.html
<html>
<title>To-Do lists</title>
</html>
Explanation: Great! We’ll start by taking our HTML string and putting it into its own file. Create a directory called lists/templates to keep templates in, and then open a file at lists/templates/home.html, to which we’ll transfer our HTML:
End of explanation
%%writefile lists/views.py
from django.shortcuts import render
def home_page(request):
return render(request, 'home.html')
Explanation: Now to change our views
End of explanation
!python3 manage.py test
Explanation: Instead of building our own HttpResponse, we now use the Django render function. It takes the request as its first parameter (for reasons we’ll go into later) and the name of the template to render. Django will automatically search folders called templates inside any of your apps' directories. Then it builds an HttpResponse for you, based on the content of the template.
Templates are a very powerful feature of Django’s, and their main strength consists of substituting Python variables into HTML text. We’re not using this feature yet, but we will in future chapters. That’s why we use render and (later) render_to_string rather than, say, manually reading the file from disk with the built-in open.
Let's see if it works:
End of explanation
...
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'lists',
)
...
Explanation: Another chance to analyse a traceback:
We start with the error: it can’t find the template.
Then we double-check what test is failing: sure enough, it’s our test of the view HTML.
Then we find the line in our tests that caused the failure: it’s when we call the home_page function.
Finally, we look for the part of our own application code that caused the failure: it’s when we try and call render.
So why can’t Django find the template? It’s right where it’s supposed to be, in the lists/templates folder.
The thing is that we haven’t yet officially registered our lists app with Django. Unfortunately, just running the startapp command and having what is obviously an app in your project folder isn’t quite enough. You have to tell Django that you really mean it, and add it to settings.py as well. Belt and braces. Open it up and look for a variable called INSTALLED_APPS, to which we’ll add lists:
End of explanation
!python3 manage.py test
Explanation: You can see there’s lots of apps already in there by default. We just need to add ours, lists, to the bottom of the list. Don’t forget the trailing comma—it may not be required, but one day you’ll be really annoyed when you forget it and Python concatenates two strings on different lines…
Now we can try running the tests again:
End of explanation
%%writefile lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from lists.views import home_page
class HomePageTest(TestCase):
def test_root_url_resolves_to_home_page_view(self):
found = resolve('/')
self.assertEqual(found.func, home_page)
def test_home_page_returns_correct_html(self):
request = HttpRequest()
response = home_page(request)
self.assertTrue(response.content.strip().startswith(b'<html>')) #<---- Fix offending newline here
self.assertIn(b'<title>To-Do lists</title>', response.content)
self.assertTrue(response.content.strip().endswith(b'</html>')) #<---- Fix offending newline here
!python3 manage.py test
Explanation: Darn, not quite.
Depending on whether your text editor insists on adding newlines to the end of files, you may not even see this error. If so, you can safely ignore the next bit, and skip straight to where you can see the listing says OK.
But it did get further! It seems it’s managed to find our template, but the last of the three assertions is failing. Apparently there’s something wrong at the end of the output. I had to do a little print(repr(response.content)) to debug this, but it turns out that the switch to templates has introduced an additional newline (\n) at the end. We can get them to pass like this:
End of explanation
# %load lists/tests.py
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.http import HttpRequest
from django.template.loader import render_to_string
from lists.views import home_page
class HomePageTest(TestCase):
def test_root_url_resolves_to_home_page_view(self):
found = resolve('/')
self.assertEqual(found.func, home_page)
def test_home_page_returns_correct_html(self):
request = HttpRequest()
response = home_page(request)
expected_html = render_to_string('home.html')
self.assertEqual(response.content.decode(), expected_html)
Explanation: Our refactor of the code is now complete, and the tests mean we’re happy that behaviour is preserved. Now we can change the tests so that they’re no longer testing constants; instead, they should just check that we’re rendering the right template. Another Django helper function called render_to_string is our friend here:
End of explanation
%%writefile lists/templates/home.html
<html>
<head>
<title>To-Do lists</title>
</head>
<body>
<h1>Your To-Do list</h1>
</body>
</html>
Explanation: We use .decode() to convert the response.content bytes into a Python unicode string, which allows us to compare strings with strings, instead of bytes with bytes as we did earlier.
The main point, though, is that instead of testing constants we’re testing our implementation. Great!
Django has a test client with tools for testing templates, which we’ll use in later chapters. For now we’ll use the low-level tools to make sure we’re comfortable with how everything works. No magic!
On Refactoring
That was an absolutely trivial example of refactoring. But, as Kent Beck puts it in Test-Driven Development: By Example, "Am I recommending that you actually work this way? No. I’m recommending that you be able to work this way."
In fact, as I was writing this my first instinct was to dive in and change the test first—make it use the render_to_string function straight away, delete the three superfluous assertions, leaving just a check of the contents against the expected render, and then go ahead and make the code change. But notice how that actually would have left space for me to break things: I could have defined the template as containing any arbitrary string, instead of the string with the right <html> and <title> tags.
When refactoring, work on either the code or the tests, but not both at once.
There’s always a tendency to skip ahead a couple of steps, to make a couple of tweaks to the behaviour while you’re refactoring, but pretty soon you’ve got changes to half a dozen different files, you’ve totally lost track of where you are, and nothing works any more. If you don’t want to end up like Refactoring Cat (below), stick to small steps; keep refactoring and functionality changes entirely separate.
We’ll come across “Refactoring Cat” again during this book, as an example of what happens when we get carried away and want to change too many things at once. Think of it as the little cartoon demon counterpart to the Testing Goat, popping up over your other shoulder and giving you bad advice…
It’s a good idea to do a commit after any refactoring:
$ git status # see tests.py, views.py, settings.py, + new templates folder
$ git add . # will also add the untracked templates folder
$ git diff --staged # review the changes we're about to commit
$ git commit -m "Refactor home page view to use a template"
A Little More of Our Front Page
In the meantime, our functional test is still failing. Let’s now make an actual code change to get it passing. Because our HTML is now in a template, we can feel free to make changes to it, without needing to write any extra unit tests. We wanted an <h1>:
End of explanation
!python3 functional_tests.py
Explanation: Let’s see if our functional test likes it a little better:
End of explanation |
15,010 | Given the following text description, write Python code to implement the functionality described.
Description: | Python Code:
def closest_integer(value):
'''
Create a function that takes a value (string) representing a number
and returns the closest integer to it. If the number is equidistant
from two integers, round it away from zero.
Examples
>>> closest_integer("10")
10
>>> closest_integer("15.3")
15
Note:
Rounding away from zero means that if the given number is equidistant
from two integers, the one you should return is the one that is the
farthest from zero. For example closest_integer("14.5") should
return 15 and closest_integer("-14.5") should return -15.
'''
from math import floor, ceil
if value.count('.') == 1:
# remove trailing zeros
while (value[-1] == '0'):
value = value[:-1]
num = float(value)
if value[-2:] == '.5':
if num > 0:
res = ceil(num)
else:
res = floor(num)
elif len(value) > 0:
res = int(round(num))
else:
res = 0
return res |
15,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Nats}{\mathbb{N}}
\newcommand{\PDK}{{k}}
\newcommand{\IS}{\mathcal{X}}
\newcommand{\FM}{\Phi}
\newcommand{\Gram}{G}
\newcommand{\RKHS}{\mathcal{H}}
\newcommand{\prodDot}[2]{\left\langle#1,#2\right\rangle}
\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
<center>
<h1>Tutorial on modern kernel methods</h1>
<br/>
<br/>
<br/>
Ingmar Schuster<br/><br/>
<i>(Zalando Research)</i>
</center>
Overview
Introduction
Feature engineering and two simple classification algorithms
Kernels and feature space
Applications
mean embedding, conditional mean embedding and operators
Training GANs
Regression with uncertainty estimates
Conclusion and outlook
Introduction
Kernel methods
Support Vector Machines (SVMs) are a staple for classification
Gaussian Processes (GPs) very popular for regression
kernel mean embedding of distributions/conditional distributions
recent techniques for representation learning/unsupervised learning
Deep Gaussian Processes
Compositional Kernel Machines
very elegant mathematics
Step1: Gaussian Processes
model for functions/continuous output
for new input returns predicted output and uncertainty
Step2: Support Vector Machines
model for classification
map data nonlinearly to higher dimensional space
separate points of different classes using a plane (i.e. linearly)
Step3: Feature engineering and two classification algorithms
Feature engineering in Machine Learning
feature engineering
Step4: Working in Feature Space
want Feature Space $\RKHS$ (the codomain of $\FM$) to be vector space to get nice mathematical structure
definition of inner products induces norms and possibility to measure angles
can use linear algebra in $\RKHS$ to solve ML problems
inner products
angles
norms
distances
induces nonlinear operations on the Input Space (domain of $\FM$)
Two simple classification algorithms
given data points from mixture of two distributions with densities $p_0,p_1$
Step5: Classification using inner products in Feature Space
compute mean feature space embedding $$\mu_{0} = \frac{1}{N_0} \sum_{l_i = 0} \FM(x_i) ~~~~~~~~ \mu_{1} = \frac{1}{N_1} \sum_{l_i = 1} \FM(x_i)$$
assign test point to most similar class in terms of inner product between point and mean embedding $\prodDot{\FM(x)}{\mu_c}$
$$f_d(x) = \argmax_{c\in{0,~1}} \prodDot{\FM(x)}{\mu_c}$$
(remember in $\Reals^2$ canonically
Step6: Classification using density estimation
estimate density for each class by centering a gaussian, taking mixture as estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
Step7: Classification using density estimation
estimate density for each class by centering a gaussian, taking mixture as estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
assign test point $x$ to class $c$ that gives highest value for $\widehat{p}_c(x)$
$\widehat{p}_c$ is known as a kernel density estimate (KDE)
different but overlapping notion of 'kernel'
classification algorithm known as Parzen windows classification
<center><h3> For a certain feature map and inner product, both algorithms are the same!</h3></center>
<center>Let's construct this feature map and inner product.</center>
Kernels and feature space
Positive definite functions and feature spaces
let $\PDK
Step8: Applications
Kernel mean embedding
mean feature with canonical feature map $\frac{1}{N} \sum_{i = 1}^N \FM(x_i) = \frac{1}{N} \sum_{i = 1}^N \PDK(x_i, \cdot)$
this is the estimate of the kernel mean embedding of the distribution/density $\rho$ of $x_i$
$$\mu_\rho(\cdot) = \int \PDK(x,\cdot) \mathrm{d}\rho(x)$$
using this we can define a distance between distributions $\rho, q$ as
$$\mathrm{MMD}(\rho, q)^2 = \|\mu_\rho - \mu_q \|^2_\RKHS$$
called the maximum mean discrepancy (MMD)
Has been used as the critic in generative adversarial networks (i.e. generative network as usual, MMD as drop-in for discriminator)
Conditional mean embedding (1)
we can define operators on RKHSs
these are maps from RKHS elements to RKHS elements (i.e. mapping between functionals)
one such operator is the conditional mean embedding $\mathcal{D}_{Y|X}$
given the embedding of the input variables distribution, returns the embedding of the output variables distribution
Regression with uncertainty estimate
Conditional mean embedding (2)
An example
Step9: Conditional mean embedding (3)
closed form estimate given samples from input and output
$$\begin{bmatrix}\PDK_Y(y_1, \cdot),& \dots &, \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, \cdot)\\ \vdots \\ \PDK_X(x_N, \cdot)\end{bmatrix}$$
closed form estimate of output embedding for new input $x^*$
$$\prodDot{\mathcal{D}_{Y|X}}{\PDK(x^*,\cdot)} = \begin{bmatrix}\PDK_Y(y_1, \cdot),& \dots &, \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, x^*)\\ \vdots \\ \PDK_X(x_N, x^*)\end{bmatrix}$$
Conditional mean embedding (4)
Similar to Gaussian processes, but output distribution more flexible
mixture of Gaussians, Laplace, distributions on discrete objects
multimodality can be represented
multidimensional output
output can be combination of e.g. string and reals
Conditional mean embedding was used to construct Kernel Bayes Rule, enabling closed form Bayesian inference
other types of operators have been derived (see Stefan Klus' talk next week) | Python Code:
### FIRST SOME CODE ####
from __future__ import division, print_function, absolute_import
from IPython.display import SVG, display, Image, HTML
import numpy as np, scipy as sp, pylab as pl, matplotlib.pyplot as plt, scipy.stats as stats, sklearn, sklearn.datasets
from scipy.spatial.distance import squareform, pdist, cdist
import rkhs_operators as ro
import distributions as dist #commit 480cf98 of https://github.com/ingmarschuster/distributions
pl.style.use(u'seaborn-talk')
Explanation: $\newcommand{\Reals}{\mathbb{R}}
\newcommand{\Nats}{\mathbb{N}}
\newcommand{\PDK}{{k}}
\newcommand{\IS}{\mathcal{X}}
\newcommand{\FM}{\Phi}
\newcommand{\Gram}{G}
\newcommand{\RKHS}{\mathcal{H}}
\newcommand{\prodDot}[2]{\left\langle#1,#2\right\rangle}
\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}$
<center>
<h1>Tutorial on modern kernel methods</h1>
<br/>
<br/>
<br/>
Ingmar Schuster<br/><br/>
<i>(Zalando Research)</i>
</center>
Overview
Introduction
Feature engineering and two simple classification algorithms
Kernels and feature space
Applications
mean embedding, conditional mean embedding and operators
Training GANs
Regression with uncertainty estimates
Conclusion and outlook
Introduction
Kernel methods
Support Vector Machines (SVMs) are a staple for classification
Gaussian Processes (GPs) very popular for regression
kernel mean embedding of distributions/conditional distributions
recent techniques for representation learning/unsupervised learning
Deep Gaussian Processes
Compositional Kernel Machines
very elegant mathematics
End of explanation
display(Image(filename="GP_uq.png", width=630)) #source: http://scikit-learn.org/0.17/modules/gaussian_process.html
Explanation: Gaussian Processes
model for functions/continuous output
for new input returns predicted output and uncertainty
End of explanation
display(Image(filename="SVM.png", width=700)) #source: https://en.wikipedia.org/wiki/Support_vector_machine
Explanation: Support Vector Machines
model for classification
map data nonlinearly to higher dimensional space
separate points of different classes using a plane (i.e. linearly)
End of explanation
display(Image(filename="monomials_small.jpg", width=800)) #source: Berhard Schölkopf
Explanation: Feature engineering and two classification algorithms
Feature engineering in Machine Learning
feature engineering: map data to features with function $\FM:\IS\to \RKHS$
handle nonlinear relations with linear methods ($\FM$ nonlinear)
handle non-numerical data (e.g. text)
End of explanation
figkw = {"figsize":(4,4), "dpi":150}
np.random.seed(5)
samps_per_distr = 20
data = np.vstack([stats.multivariate_normal(np.array([-2,0]), np.eye(2)*1.5).rvs(samps_per_distr),
stats.multivariate_normal(np.array([2,0]), np.eye(2)*1.5).rvs(samps_per_distr)])
distr_idx = np.r_[[0]*samps_per_distr, [1]*samps_per_distr]
f = pl.figure(**figkw);
for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]:
plt.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha = 0.4)
pl.show()
Explanation: Working in Feature Space
want Feature Space $\RKHS$ (the codomain of $\FM$) to be vector space to get nice mathematical structure
definition of inner products induces norms and possibility to measure angles
can use linear algebra in $\RKHS$ to solve ML problems
inner products
angles
norms
distances
induces nonlinear operations on the Input Space (domain of $\FM$)
Two simple classification algorithms
given data points from mixture of two distributions with densities $p_0,p_1$:
$$x_i \sim 0.5 p_0 + 0.5 p_1$$
and label $l_i = 0$ if $x_i$ generated by $p_0$, $l_i = 1$ otherwise
End of explanation
pl.figure(**figkw)
for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]:
pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2)
pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0), head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c)
pl.title(r"Mean embeddings for $\Phi(x)=x$");
pl.figure(**figkw)
for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]:
pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2)
pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0), head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c)
pl.title(r"Mean embeddings for $\Phi(x)=x$");
pl.scatter(np.ones(1), np.ones(1), c='k', marker='D', alpha=0.8);
Explanation: Classification using inner products in Feature Space
compute mean feature space embedding $$\mu_{0} = \frac{1}{N_0} \sum_{l_i = 0} \FM(x_i) ~~~~~~~~ \mu_{1} = \frac{1}{N_1} \sum_{l_i = 1} \FM(x_i)$$
assign test point to most similar class in terms of inner product between point and mean embedding $\prodDot{\FM(x)}{\mu_c}$
$$f_d(x) = \argmax_{c\in{0,~1}} \prodDot{\FM(x)}{\mu_c}$$
(remember in $\Reals^2$ canonically: $\prodDot{a}{b} = a_1 b_1+a_2 b_2 $)
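A minimal numpy sketch of this decision rule for the identity feature map $\FM(x)=x$, reusing the data and distr_idx arrays defined above (the test point is just an illustrative value):
mu_0 = data[distr_idx == 0].mean(0)    # mean embedding of class 0
mu_1 = data[distr_idx == 1].mean(0)    # mean embedding of class 1
x_test = np.array([1., 1.])            # hypothetical test point
scores = [np.dot(x_test, mu_0), np.dot(x_test, mu_1)]
print("assigned class:", int(np.argmax(scores)))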
End of explanation
# Some plotting code
def apply_to_mg(func, *mg):
#apply a function to points on a meshgrid
x = np.vstack([e.flat for e in mg]).T
return np.array([func(i.reshape((1,2))) for i in x]).reshape(mg[0].shape)
def plot_with_contour(samps, data_idx, cont_func, method_name = None, delta = 0.025, pl = pl, colormesh_cmap = pl.cm.Pastel2, contour_classif = True):
x = np.arange(samps.T[0].min()-delta, samps.T[1].max()+delta, delta)
y = np.arange(samps.T[1].min()-delta, samps.T[1].max()+delta, delta)
X, Y = np.meshgrid(x, y)
Z = apply_to_mg(cont_func, X,Y)
Z = Z.reshape(X.shape)
fig = pl.figure(**figkw)
if colormesh_cmap is not None:
bound = np.abs(Z).max()
pl.pcolor(X, Y, Z , cmap=colormesh_cmap, alpha=0.5, edgecolors=None, vmin=-bound, vmax=bound)
if contour_classif is True:
c = pl.contour(X, Y, Z, colors=['k', ],
alpha = 0.5,
linestyles=[ '--'],
levels=[0],
linewidths=0.7)
else:
pl.contour(X, Y, Z, linewidths=0.7)
if method_name is not None:
pl.title(method_name)
for (idx, c, marker) in [(0,'r', (0,3,0)), (1, "b", "x")]:
pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha = 0.4)
pl.show()
pl.close()
est_dens_1 = dist.mixt(2, [dist.mvnorm(x, np.eye(2)*0.1) for x in data[:4]], [1./4]*4)
plot_with_contour(data, distr_idx,
lambda x: np.exp(est_dens_1.logpdf(x)),
colormesh_cmap=None, contour_classif=False)
est_dens_1 = dist.mixt(2, [dist.mvnorm(x, np.eye(2)*0.1,10) for x in data[:samps_per_distr]], [1./samps_per_distr]*samps_per_distr)
plot_with_contour(data, distr_idx,
lambda x: np.exp(est_dens_1.logpdf(x)),
colormesh_cmap=None, contour_classif=False)
Explanation: Classification using density estimation
estimate density for each class by centering a gaussian, taking mixture as estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
End of explanation
class KMEclassification(object):
def __init__(self, samps1, samps2, kernel):
self.de1 = ro.RKHSDensityEstimator(samps1, kernel, 0.1)
self.de2 = ro.RKHSDensityEstimator(samps2, kernel, 0.1)
def classification_score(self, test):
return (self.de1.eval_kme(test) - self.de2.eval_kme(test))
data, distr_idx = sklearn.datasets.make_circles(n_samples=400, factor=.3, noise=.05)
kc = KMEclassification(data[distr_idx==0,:], data[distr_idx==1,:], ro.LinearKernel())
plot_with_contour(data, distr_idx, kc.classification_score, 'Inner Product classif. '+"Linear", pl = plt, contour_classif = True, colormesh_cmap = pl.cm.bwr)
kc = KMEclassification(data[distr_idx==0,:], data[distr_idx==1,:], ro.GaussianKernel(0.3))
plot_with_contour(data, distr_idx, kc.classification_score, 'Inner Product classif. '+"Gaussian", pl = plt, contour_classif = True, colormesh_cmap = pl.cm.bwr)
Explanation: Classification using density estimation
estimate density for each class by centering a gaussian, taking mixture as estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
assign test point $x$ to class $c$ that gives highest value for $\widehat{p}_c(x)$
$\widehat{p}_c$ is known as a kernel density estimate (KDE)
different but overlapping notion of 'kernel'
classification algorithm known as Parzen windows classification
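A small scipy-based sketch of this decision rule, with an arbitrary bandwidth matrix (the RKHSDensityEstimator used in the code above plays the same role):
def parzen_classify(x, samps0, samps1, Sigma=np.eye(2) * 0.1):
    # assign x to the class whose kernel density estimate is larger
    p0 = np.mean([stats.multivariate_normal(m, Sigma).pdf(x) for m in samps0])
    p1 = np.mean([stats.multivariate_normal(m, Sigma).pdf(x) for m in samps1])
    return 0 if p0 >= p1 else 1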
<center><h3> For a certain feature map and inner product, both algorithms are the same!</h3></center>
<center>Let's construct this feature map and inner product.</center>
Kernels and feature space
Positive definite functions and feature spaces
let $\PDK:\IS\times\IS \to \Reals$, called a kernel
if $\PDK$ is symmetric and positive semi-definite (psd)
then there exists a map $\FM: \IS \to \RKHS$ into a Hilbert space $\RKHS$ such that $$\PDK(x_i, x_j) = \prodDot{\FM(x_i)}{\FM(x_j)}_\RKHS$$
i.e. $\PDK$ computes inner product after mapping to some $\RKHS$
Gram matrix (1)
If all matrices
$$\Gram_{X}=\begin{bmatrix}
\PDK(x_1, x_1) & \dots & \PDK(x_1, x_N)\\
\PDK(x_2, x_1) & \ddots & \vdots\\
\vdots & & \vdots\\
\PDK(x_N, x_1) & \dots & \PDK(x_N, x_N)
\end{bmatrix}$$
are symmetric positive semidefinite, then $\PDK$ is a psd kernel
called a gram matrix
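As a quick numerical sanity check (my addition, not part of the original slides), a Gaussian Gram matrix built with the helpers imported above has no negative eigenvalues beyond floating point error:
X = np.random.randn(10, 2)                               # arbitrary input points
G = np.exp(-0.5 * squareform(pdist(X, 'sqeuclidean')))   # Gaussian Gram matrix, sigma = 1
print(np.linalg.eigvalsh(G).min() >= -1e-10)             # True: PSD up to numerical precision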
Gram matrix (2)
sometimes mixed gram matrices are needed
$$\Gram_{XY} = \begin{bmatrix}
\PDK(x_1, y_1) & \PDK(x_1, y_2) & \dots & \PDK(x_1, y_M)\\
\PDK(x_2, y_1) & \ddots & &\vdots\\
\vdots & & &\vdots\\
\PDK(x_N, y_1) & \dots & &\PDK(x_N, y_M)
\end{bmatrix}
$$
Examples of psd kernels
Linear $\PDK_L(x,x') = \prodDot{x}{x'} = x_1 x'_1 + x_2 x'_2+ \dots$
Gaussian $\PDK_G(x,x') = \exp({-{ 0.5}(x-x' )^{\top }\Sigma ^{-1}(x-x' )})$
PSD kernels
easy to construct $\PDK$ given $\FM$: $\PDK(x_i, x_j) = \prodDot{\FM(x_i)}{\FM(x_j)}$
construction for $\FM$ given $\PDK$ not trivial but still elementary
given $\FM$ and inner product in feature space we
can endow space with norm induced by the inner product
$$\|g\|_\RKHS = \sqrt{\prodDot{g}{g}}_\RKHS~~~\textrm{for}~g \in \RKHS$$
can measure angles in the new space $$\prodDot{g}{f}_\RKHS = \cos(\angle[g,f])~\|g\|_\RKHS ~\|f\|_\RKHS$$
Construction of the canonical feature map (Aronszajn map)
Plan
- construction of $\FM$ from $\PDK$
- definition of inner product in new space $\RKHS$ such that in fact $\PDK(x,x') = \prodDot{\FM(x)}{\FM(x')}$
- feature for each $x \in \IS$ will be a function from $\IS$ to $\Reals$
$$\FM:\IS \to \Reals^\IS$$
Canonical feature map (Aronszajn map)
pick $\FM(x) = \PDK(\cdot, x)$
Linear kernel: $\FM_L(x) = \prodDot{\cdot}{x}$
Gaussian kernel: $\FM_G(x) = \exp\left(-0.5{\|\cdot -x \|^2}/{\sigma^2}\right)$.
let linear combinations of features also be in $\RKHS$
$$f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i) \in \RKHS$$
for $a_i \in \Reals$
$\RKHS$ a vector space over $\Reals$ : if $f(\cdot)$ and $g(\cdot)$ functions from $\IS$ to $\Reals$, so are $a~f(\cdot)$ for $a \in \Reals, f(\cdot)+g(\cdot)$
Canonical inner product (1)
for $f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i) \in \RKHS$ and $g(\cdot)=\sum_{j=1}^{m'} b_j \PDK(\cdot, x'_j) \in \RKHS$ define inner product
$$\prodDot{f}{g} = \sum_{i=1}^m \sum_{j=1}^{m'} b_j a_i \PDK(x'_j, x_i)$$
In particular $\prodDot{ \PDK(\cdot,x)}{ \PDK(\cdot,x')}=\PDK(x,x')$ (reproducing property of kernel in its $\RKHS$)
Canonical inner product (2)
$\RKHS$ a Hilbert space with this inner product, as it is
positive definite
linear in its first argument
symmetric
complete
$\RKHS$ called Reproducing Kernel Hilbert Space (RKHS).
Equivalence of classification algorithms
Recall mean canonical feature and kernel density estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \mu_{0} = \frac{1}{N_0} \sum_{l_i = 0} \PDK(\cdot, x_i)$$
observe
$$\frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(x^*; x_i,\Sigma) = \prodDot{\frac{1}{N_0} \sum_{l_i = 0} \PDK(\cdot, x_i)}{\PDK(\cdot, x^*)}$$
if $\PDK$ is Gaussian density with covariance $\Sigma$
kernel mean and Parzen windows classification are equivalent!
Let's look at example classification output
End of explanation
out_samps = data[distr_idx==0,:1] + 1
inp_samps = data[distr_idx==0,1:] + 1
def plot_mean_embedding(cme, inp_samps, out_samps, p1 = 0., p2 = 1., offset = 0.5):
x = np.linspace(inp_samps.min()-offset,inp_samps.max()+offset,200)
fig = pl.figure(figsize=(10, 5))
ax = [pl.subplot2grid((2, 2), (0, 1)),
pl.subplot2grid((2, 2), (0, 0), rowspan=2),
pl.subplot2grid((2, 2), (1, 1))]
ax[1].scatter(out_samps, inp_samps, alpha=0.3, color = 'r')
ax[1].set_xlabel('Output')
ax[1].set_ylabel('Input')
ax[1].axhline(p1, 0, 8, color='g', linestyle='--')
ax[1].axhline(p2, 0, 8, color='b', linestyle='--')
ax[1].set_title("%d input-output pairs"%len(out_samps))
ax[1].set_yticks((p1, p2))
e = cme.lhood(np.array([[p1], [p2]]), x[:, None]).T
#ax[0].plot(x, d[0], '-', label='cond. density')
ax[2].plot(x, e[0], 'g--', label='cond. mean emb.')
ax[2].set_title(r"p(outp | inp=%.1f)"%p1)
ax[0].plot(x, e[1], 'b--', label='cond. mean emb.')
ax[0].set_title(r"p(outp | inp=%.1f)"%p2)
#ax[2].legend(loc='best')
fig.tight_layout()
cme = ro.ConditionMeanEmbedding(inp_samps, out_samps, ro.GaussianKernel(0.3), ro.GaussianKernel(0.3), 5)
plot_mean_embedding(cme, inp_samps, out_samps, 0.3, 2.,)
Explanation: Applications
Kernel mean embedding
mean feature with canonical feature map $\frac{1}{N} \sum_{i = 1}^N \FM(x_i) = \frac{1}{N} \sum_{i = 1}^N \PDK(x_i, \cdot)$
this is the estimate of the kernel mean embedding of the distribution/density $\rho$ of $x_i$
$$\mu_\rho(\cdot) = \int \PDK(x,\cdot) \mathrm{d}\rho(x)$$
using this we can define a distance between distributions $\rho, q$ as
$$\mathrm{MMD}(\rho, q)^2 = \|\mu_\rho - \mu_q \|^2_\RKHS$$
called the maximum mean discrepancy (MMD)
Has been used as the critic in generative adversarial networks (i.e. generative network as usual, MMD as drop-in for discriminator)
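A rough numpy sketch of the (biased) plug-in estimate of $\mathrm{MMD}^2$ with a Gaussian kernel, using cdist imported above (the bandwidth sigma is an arbitrary choice, not from the original):
def mmd2_biased(X, Y, sigma=1.0):
    # mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y)
    k = lambda A, B: np.exp(-0.5 * cdist(A, B, 'sqeuclidean') / sigma**2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()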
Conditional mean embedding (1)
we can define operators on RKHSs
these are maps from RKHS elements to RKHS elements (i.e. mapping between functionals)
one such operator is the conditional mean embedding $\mathcal{D}_{Y|X}$
given the embedding of the input variables distribution, returns the embedding of the output variables distribution
Regression with uncertainty estimate
Conditional mean embedding (2)
An example
End of explanation
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/MpzaCCbX-z4?rel=0&showinfo=0&start=148" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>')
display(Image(filename="Pendulum_eigenfunctions.png", width=700))
display(Image(filename="KeywordClustering.png", width=700))
Explanation: Conditional mean embedding (3)
closed form estimate given samples from input and output
$$\begin{bmatrix}\PDK_Y(y_1, \cdot),& \dots &, \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, \cdot)\\ \vdots \\ \PDK_X(x_N, \cdot)\end{bmatrix}$$
closed form estimate of output embedding for new input $x^*$
$$\prodDot{\mathcal{D}_{Y|X}}{\PDK(x^*,\cdot)} = \begin{bmatrix}\PDK_Y(y_1, \cdot),& \dots &, \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, x^*)\\ \vdots \\ \PDK_X(x_N, x^*)\end{bmatrix}$$
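In code, evaluating this estimate at a new input reduces to a linear solve against the input Gram matrix; a sketch with a Gaussian kernel (the small ridge term lam is my own addition for numerical stability and is not part of the formula above):
def cme_weights(X, x_star, lam=1e-3, sigma=1.0):
    # weights alpha such that the embedding of p(y | x*) is sum_i alpha_i k_Y(y_i, .)
    k = lambda A, B: np.exp(-0.5 * cdist(A, B, 'sqeuclidean') / sigma**2)
    G_X = k(X, X)
    return np.linalg.solve(G_X + lam * np.eye(len(X)), k(X, np.atleast_2d(x_star)))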
Conditional mean embedding (4)
Similar to Gaussian processes, but output distribution more flexible
mixture of Gaussians, Laplace, distributions on discrete objects
multimodality can be represented
multidimensional output
output can be combination of e.g. string and reals
Conditional mean embedding was used to construct Kernel Bayes Rule, enabling closed form Bayesian inference
other types of operators have been derived (see Stefan Klus' talk next week)
End of explanation |
15,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Model 1 with FPC mask
Level 1
Step2: Model results
Rule learning and rule application in the matching task
Rule Learning > Rule Application
Step3: Rule Application > Rule Learning
Step4: Rule Learning > Baseline
Step5: Baseline > Rule Learning
Step6: Rule Application > Baseline
Step7: Baseline > Rule Application
Step8: Rule learning and rule application in the classification task
Rule Learning > Rule Application
Step9: Rule Learning > Baseline
Step10: Baseline > Rule Learning
Step11: Rule Application > Baseline
Step12: Baseline > Rule Application
Step13: Rule learning in the matching and classification tasks
Matching > Classification
Step14: Classification > Matching
Step15: Rule application in the matching and classification tasks
Matching > Classification
Step16: Classification > Matching | Python Code:
import os
from IPython.display import IFrame
from IPython.display import Image
# This function renders interactive brain images
def render(name,brain_list):
#prepare file paths
brain_files = []
for b in brain_list:
brain_files.append(os.path.join("data",b))
wdata = """
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
<!-- iOS meta tags -->
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<link rel="stylesheet" type="text/css" href="../papaya/papaya.css?build=1420" />
<script type="text/javascript" src="../papaya/papaya.js?build=1422"></script>
<title>Papaya Viewer</title>
<script type="text/javascript">
var params = [];
params["worldSpace"] = true;
params["atlas"] = "MNI (Nearest Grey Matter)";
params["images"] = %s;
</script>
</head>
<body>
<div class="papaya" data-params="params"></div>
</body>
</html>
""" % str(brain_files)
fname=name+"index.html"
with open (fname, 'w') as f: f.write (wdata)
return IFrame(fname, width=800, height=600)
# variables
l1cope="0"
l2cope="0"
l3cope="0"
def paths():
sliced_img = os.path.join("data", "img_"+l1cope+"_"+l2cope+"_"+l3cope+"_wb.png")
wb_img = "WB.nii.gz"
cluster_corr = "rand_"+l1cope+"_"+l2cope+"_"+l3cope+".nii.gz"
tstat_img = os.path.join("data", "imgt_"+l1cope+"_"+l2cope+"_"+l3cope+"_wb.png")
html_cl = l1cope+"_"+l2cope+"_"+l3cope
html_t = l1cope+"_"+l2cope+"_"+l3cope+"t"
return sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t
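# Note (added sketch, not in the original notebook): the cells below repeat the same
# three steps -- set the copes, rebuild the paths, render. A hypothetical wrapper
# could bundle them:
# def show_contrast(l1, l2, l3):
#     global l1cope, l2cope, l3cope
#     l1cope, l2cope, l3cope = l1, l2, l3
#     sliced_img, wb_img, cluster_corr, tstat_img, html_cl, html_t = paths()
#     return render(html_cl, [wb_img, cluster_corr])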
Explanation: Model 1 with FPC mask
Level 1:
EVs:
stimulus application
stimulus learning
stimulus na
feedback correct
feedback incorrect
feedback na
Contrasts:
stimulus application>0
stimulus learning>0
stimulus application>stimulus learning
Level 2:
task001 task1
task001 task2
task001 task1>task2
task001 task2>task1
Level 3:
positive contrast
negative contrast
FPC mask
*Images from randomise (cluster mass with t=2.49 and v=8) are thresholded at .95 and overlaid with unthresholded t-maps.
Prepare stuff
End of explanation
l1cope="3"
l2cope="1"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Model results
Rule learning and rule application in the matching task
Rule Learning > Rule Application
End of explanation
l1cope="3"
l2cope="1"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule Application > Rule Learning
End of explanation
l1cope="2"
l2cope="1"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule Learning > Baseline
End of explanation
l1cope="2"
l2cope="1"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Baseline > Rule Learning
End of explanation
l1cope="1"
l2cope="1"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule Application > Baseline
End of explanation
l1cope="1"
l2cope="1"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
Image(sliced_img)
render(html_cl,[wb_img,cluster_corr])
Explanation: Baseline > Rule Application
End of explanation
l1cope="3"
l2cope="2"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule learning and rule application in the classification task
Rule Learning > Rule Application
End of explanation
l1cope="2"
l2cope="2"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule Learning > Baseline
End of explanation
l1cope="2"
l2cope="2"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Baseline > Rule Learning
End of explanation
l1cope="1"
l2cope="2"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule Application > Baseline
End of explanation
l1cope="1"
l2cope="2"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Baseline > Rule Application
End of explanation
l1cope="2"
l2cope="3"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule learning in the matching and classification tasks
Matching > Classification
End of explanation
l1cope="2"
l2cope="3"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Classification > Matching
End of explanation
l1cope="1"
l2cope="3"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Rule application in the matching and classification tasks
Matching > Classification
End of explanation
l1cope="1"
l2cope="3"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
render(html_cl,[wb_img,cluster_corr])
Explanation: Classification > Matching
End of explanation |
15,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SAS Viya, CAS & Python Integration Workshop
Notebook Summary
Set Up
Exploring CAS Action Sets and the CASResults Object
Working with a SASDataFrame
Exploring the CAS File Structure
Loading Data Into CAS
Exploring Table Details
Data Exploration
Filtering Data
Data Preparation
SQL
Analyzing Data
Promote the Table to use in SAS Visual Analytics
SAS Viya
What is SAS Viya
SAS Viya extends the SAS Platform, operates in the cloud (as well as in hybrid and on-prem solutions) and is open source-friendly. For better performance while manipulating data and running analytical procedures, SAS Viya can run your code in Cloud Analytic Services (CAS). CAS operates on in-memory data, removing the read/write transfer overhead. Further, it enables everyone in an organization to collaborate and work with data by providing a variety of products and solutions running in CAS.
Cloud Analytic Services (CAS)
SAS Viya processes data and performs analytics using SAS Cloud Analytic Services, or CAS for short. CAS provides a powerful distributed computing environment designed to store large data sets in memory for fast and efficient processing. It uses scalable, high-performance, multi-threaded algorithms to rapidly perform analytical processing on in-memory data of any size.
For more information about Cloud Analytic Services, visit the documentation
Step1: b. Make a Connection to CAS</a>
To connect to the CAS server you will need
Step2: c. Obtain Data for the Demo
Step3: <a id='2'>2. Exploring CAS Action Sets and the CASResults Object</a>
Think of an action set as a package, and the actions inside it as methods.
CAS actions interact with the CAS server and return a CASResults object.
A CASResults object is simply an ordered Python dictionary with a few extra methods and attributes added.
You can also use the SWAT package API to interact with the CAS server. The SWAT package contains many of the methods defined by Pandas DataFrames. Using methods from the SWAT API will typically return a CASTable, CASColumn, pandas.DataFrame, or pandas.Series object.
Documentation
Step4: View the available CAS actions in the builtins action set using the help function.
Step5: You do not need to specify the CAS action set prior to the CAS action. Moving forward, all actions will not include the CAS action set.
Step6: All CAS actions return a CASResults object.
Step7: b. CASResults Object
A CASResults object is an ordered Python dictionary with keys and values.
A CASResults object is local data returned by the CAS server.
While all CAS actions return a CASResults object, there are no rules about how many keys are contained in the object, or what values are returned.
View the keys in the CASResults object. This specific CASResults object contains a single key, and a single value.
Step8: Call the setinfo key to return the value.
Step9: The setinfo key holds a SASDataFrame object.
Step10: <a id='3'>3. Working with a SASDataFrame
A SASDataFrame object contains local data.
A SASDataFrame object is a subclass of a Pandas DataFrame. You can work with them as you normally do a Pandas DataFrame.
NOTE
Step11: A SASDataFrame is local data. Work with it as you would a Pandas DataFrame.
b. Use Pandas Methods on a SASDataFrame.
View the first 5 rows of the SASDataFrame using the pandas head method.
Step12: Find all rows where the value in the actionset column equals simple using the pandas loc method.
Step13: View counts of unique values using the pandas value_counts method and plot a bar chart.
Step14: <a id='4'> 4. Exploring the CAS File Structure</a>
Caslib Overview
Step15: b. View Available Files in the casuser Caslib
Step16: c. View All Available In-Memory Tables in the casuser Caslib
NOTE
Step17: <a id='5'>5. Loading Data Into CAS
There are various ways of loading data into CAS
Step18: There are two methods that can be used to load server-side data into CAS
Step19: load_path
Step20: b. Local vs CAS Data
A CASTable object is a reference to data in the CAS server. Actions or methods run on a CASTable object are processed in CAS.
Step21: View the first 5 rows of the in-memory table using the head method. The head method is not a CAS action, so it will not return a CASResults object. The head method is using the API to CAS. The API to CAS contains many of the pandas methods you are familiar with. These methods process the data in CAS and can return a variety of different objects locally.
SWAT API Reference
Step22: The result of using the head method is a SASDataFrame. SASDataFrames are located locally.
Step23: You can use the fetch CAS action to return similar results. The processing of the fetch CAS action occurs in CAS and returns a CASResults object to your local machine. When using a CAS action a CASResults object is always returned.
Step24: CASResults objects are local.
Step25: SASDataFrame objects can be contained in the CASResults object.
Step26: Turn on tracing.
Step27: <a id='6'>6. Exploring Table Details
a. View the Number of Rows and Columns in the In-Memory Table.
Use shape to return a tuple of the CAS data.
Step28: Use the numRows CAS action to show the number of rows in a CAS table.
Step29: Use the tableInfo CAS action to show information about a CAS table.
Step30: Create a function to return the in-memory table name, number of rows and columns.
Step31: b. View the Column Information
Step32: <a id='7'>7. Data Exploration
a. Summary Statistics
Using the summary CAS action to generate descriptive statistics of numeric variables.
Step33: Using the describe method.
Step34: Turn off tracing.
Step35: b. Distinct Values
Use the distinct CAS action to calculate the number of distinct values in the cars table.
Step36: Plot the number of missing values for each column.
Step37: Use the distinct CAS action to calculate the number of distinct values in the Origin, Type and Make columns.
Step38: Create a new CAS table named castblDistinct with the number of distinct values for the specified inputs.
Step39: View the available in-memory tables.
Step40: Using Pandas methods.
Step41: c. Frequency
View the frequency of the Origin column using the freq CAS action.
Step42: Plot the results of the freq CAS action in a bar chart.
Step43: Use the value_counts method. The value_counts method will process in CAS and return the summary locally. The plot method will create the graph locally.
Step44: Perform a frequency on multiple columns. The final CASResults object will contain a SASDataFrame with a frequency of each of the specified columns in one table.
Step45: D. Create a Frequency Table of all Columns with Less Than 20 Distinct Values.
Use the distinct CAS action to find the number of distinct values for each column and filter for all columns with less than 20 distinct values.
Step46: Create a variable named distinctCars that holds the SASDataFrame from the results above.
Step47: Create a list of column names that have less than 20 distinct values named listCars.
Step48: Use the list from above to create a frequency table of columns with less than 20 distinct values.
Step49: <a id='8'>8. Filtering Data
a. Subset Using Pandas Indexing Expressions.
Step50: b. Subset Using the Query Method.
Step51: <a id='9'>9. Data Preparation
Create a new column that calculates the average of MPG_City and MPG_Highway. Processing done in CAS.
Step52: Remove the Model and MSRP columns.
Step53: <a id='10'>10. SQL
a. Load the fedSQL CAS Action Set
View all available (not just loaded) CAS action sets by using the all=True parameter.
Step54: Search the actionset column for any CAS action set that contains the string sql.
Step55: Load the fedSQL action set using the loadActionSet action.
Step57: b. Run SQL Queries in CAS
Run a query to view the first 10 rows of the cars table.
Step59: Find the average MSRP of each car make.
Step61: Create a table named make_avg that contains the average MSRP of each car make.
Step62: <a id='11'>11. Analyzing Data
Preview the table.
Step63: a. Correlation with a Heat Map
Use the correlation action and remove the simple statistics. Processing will be done in CAS and the summary table will be returned locally.
Step64: Store the SASDataFrame object in the dfCorr variable. A SASDataFrame object is local.
Step65: Replace the default index with the Variable column
Step66: Use seaborn to produce a heatmap.
Step67: b. Histogram
Run the histogram action to return a summary of the midpoints and percents. Processing occurs in CAS.
Step68: Store the BinDetails in the variable mpgHist.
Step69: Round the columns Percent and MidPoint.
Step70: Plot the histogram.
Step71: Specify multiple columns in the histogram action.
Step72: Store the results from the histogram CAS action in the carsHist variable.
Step73: Find the unique values in the carsHist SASDataFrame.
Step74: Run a loop through the list of unique values and plot a histogram for each.
Step75: <a id='12'>12. Promote the Table to use in SAS Visual Analytics
Step76: Two Options
Step77: View the available files in the casuser caslib. Notice the updatedCars.sashdat file is available.
Step78: b. Create a New In-Memory Table From the castbl Object.
The partition CAS action has a variety of options, but if we leave the defaults we can take the castbl object (reference to the cars table with a few columns dropped and the new avgMPG column) and create a new in-memory table without saving a physical file.
Here a new in-memory table will be created called cars_update in the casuser caslib from the castbl object.
Step79: View the new in-memory table cars_update.
Step80: View the files in the casuser caslib. Notice no new files were created.
Step81: c. Promote a Table to Global Scope.
View all the tables in the casuser caslib. Focus on the specified columns. Notice no table is global scope.
Step82: Use the promote CAS action to promote a table to global scope. Global scope allows other users and software like SAS Visual Analtyics to use the in-memory table. Currently, all the in-memory tables are session scope. That is, only this account on this connection to CAS can see the in-memory tables.
In this example, the cars_update table is promoted to global scope in the casuser caslib. This only allows the current account (student) to access this table since it is promoted in the casuser caslib. If a table is promoted to global scope in a shared caslib, other users can see that table.
DEMO
Step83: Notice only the cars_update table is global. | Python Code:
## Data Management
import swat
import pandas as pd
## Data Visualization
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
## Global Options
swat.options.cas.trace_actions = False # Enabling tracing of actions (Default is False. Will change to true later)
swat.options.cas.trace_ui_actions = False # Display the actions behind “UI” methods (Default is False. Will change to true later)
pd.set_option('display.max_columns', 500) # Modify DataFrame max columns shown
pd.set_option('display.max_colwidth', 1000) # Modify DataFrame max column width
Explanation: SAS Viya, CAS & Python Integration Workshop
Notebook Summary
Set Up
Exploring CAS Action Sets and the CASResults Object
Working with a SASDataFrame
Exploring the CAS File Structure
Loading Data Into CAS
Exploring Table Details
Data Exploration
Filtering Data
Data Preparation
SQL
Analyzing Data
Promote the Table to use in SAS Visual Analytics
SAS Viya
What is SAS Viya
SAS Viya extends the SAS Platform, operates in the cloud (as well as in hybrid and on-prem solutions) and is open source-friendly. For better performance while manipulating data and running analytical procedures, SAS Viya can run your code in Cloud Analytic Services (CAS). CAS operates on in-memory data, removing the read/write transfer overhead. Further, it enables everyone in an organization to collaborate and work with data by providing a variety of products and solutions running in CAS.
Cloud Analytic Services (CAS)
SAS Viya processes data and performs analytics using SAS Cloud Analytic Services, or CAS for short. CAS provides a powerful distributed computing environment designed to store large data sets in memory for fast and efficient processing. It uses scalable, high-performance, multi-threaded algorithms to rapidly perform analytical processing on in-memory data of any size.
For more information about Cloud Analytic Services, visit the documentation: SAS® Cloud Analytic Services 3.5: Fundamentals
SAS Viya is Open
SAS Viya is open. Business analysts and data scientists can explore, prepare and manage data to provide insights, create visualizations or analytical models using the SAS programming language or a variety of open source languages like Python, R, Lua, or Java. Because of this, programmers can easily process data in CAS, using a language of their choice.
<a id='1'>1. Set Up
a. Import Packages
Visit the documentation for the SWAT (SAS Scripting Wrapper for Analytics Transfer) package.
End of explanation
conn = swat.CAS("server", 8777, "student", "Metadata0", protocol="http")
conn
Explanation: b. Make a Connection to CAS</a>
To connect to the CAS server you will need:
the host name,
the port number,
your user name, and your password.
Visit the documentation Getting Started with SAS® Viya® 3.5 for Python for more information about connecting to CAS.
Be aware that connecting to the CAS server can be implemented in various ways, so you might need to see your system administrator about how to make a connection. Please follow company policy regarding authentication.
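If you would rather not hard-code credentials in the notebook, one option is to read them from the environment; a sketch that assumes the same host and port as above and that CAS_USER/CAS_PASSWORD are set (these variable names are my own choice):
import os
conn = swat.CAS("server", 8777,
                os.environ.get("CAS_USER"),      # assumed environment variable
                os.environ.get("CAS_PASSWORD"),  # assumed environment variable
                protocol="http")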
End of explanation
conn.fileinfo()
## Download the data from github and load to the CAS server
conn.read_csv(r"https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv",
casout={"name":"cars", "caslib":"casuser", "replace":True})
## Save the in-memory table as a physical file
conn.save(table="cars", name="cars.sashdat",
caslib="casuser",
replace=True)
## Drop the in-memory table
conn.droptable(name='cars', caslib="casuser")
Explanation: c. Obtain Data for the Demo
End of explanation
conn.builtins.actionSetInfo()
Explanation: <a id='2'>2. Exploring CAS Action Sets and the CASResults Object</a>
Think of an action set as a package, and the actions inside it as methods.
CAS actions interact with the CAS server and return a CASResults object.
A CASResults object is simply an ordered Python dictionary with a few extra methods and attributes added.
You can also use the SWAT package API to interact with the CAS server. The SWAT package contains many of the methods defined by Pandas DataFrames. Using methods from the SWAT API will typically return a CASTable, CASColumn, pandas.DataFrame, or pandas.Series object.
Documentation:
- To view all CAS action sets and actions visit the documentation: SAS® Viya® 3.5 Actions and Action Sets by Name and Product
To view the SWAT API Reference visit: API Reference
a. View All the CAS Action Sets that are Loaded in CAS.
From the Builtins action set, use the actionSetInfo action to view all loaded action sets.
CAS action sets and actions are case insensitive.
CAS actions return a CASResults object.
End of explanation
conn.help(actionSet="builtins")
Explanation: View the available CAS actions in the builtins action set using the help function.
End of explanation
conn.actionSetInfo()
Explanation: You do not need to specify the CAS action set prior to the CAS action. Moving forward, all actions will not include the CAS action set.
End of explanation
type(conn.actionSetInfo())
Explanation: All CAS actions return a CASResults object.
End of explanation
conn.actionSetInfo().keys()
Explanation: b. CASResults Object
A CASResults object is an ordered Python dictionary with keys and values.
A CASResults object is local data returned by the CAS server.
While all CAS actions return a CASResults object, there are no rules about how many keys are contained in the object, or what values are returned.
View the keys in the CASResults object. This specific CASResults object contains a single key, and a single value.
End of explanation
conn.actionSetInfo()['setinfo']
Explanation: Call the setinfo key to return the value.
End of explanation
type(conn.actionSetInfo()['setinfo'])
Explanation: The setinfo key holds a SASDataFrame object.
End of explanation
df = conn.actionSetInfo()['setinfo']
type(df)
Explanation: <a id='3'>3. Working with a SASDataFrame
A SASDataFrame object contains local data.
A SASDataFrame object is a subclass of a Pandas DataFrame. You can work with them as you normally do a Pandas DataFrame.
NOTE: When bringing data from CAS locally, remember that CAS can hold larger data than your local computer can handle.
a. Create a SASDataFrame Object Named df.
End of explanation
df.head()
Explanation: A SASDataFrame is local data. Work with it as you would a Pandas DataFrame.
b. Use Pandas Methods on a SASDataFrame.
View the first 5 rows of the SASDataFrame using the pandas head method.
End of explanation
df.loc[df['actionset']=='simple',['actionset','label']]
Explanation: Find all rows where the value in the actionset column equals simple using the pandas loc method.
End of explanation
df['product_name'].value_counts().plot(kind="bar")
Explanation: View counts of unique values using the pandas value_counts method and plot a bar chart.
End of explanation
conn.caslibInfo()
Explanation: <a id='4'> 4. Exploring the CAS File Structure</a>
Caslib Overview:
A caslib has two parts:
Data Source - Connection information about the data source gives access to a resource that contains data. These can be files that are located in a file system, a database, streaming data from an ESP (Event Stream Processing) server, or other data sources that SAS can access.
In-Memory Space - The in-memory portion of a caslib that contains data that is uploaded into memory and ready for processing.
Think of your active caslib as the current working directory of your CAS session, and it's only possible to have one active caslib.
When you want to work with data from your data source, you must load the data into the in-memory portion for processing. This loaded table is known as a CAS Table.
Types of Caslibs:
Personal Caslib - By default, all users are given access to their own caslib, named CASUSER, within a CAS session. This is a personal caslib and is only accessible to the user who owns the CAS session.
Pre-defined Caslib - These are defined by an administrator and are available to all CAS sessions (dependent on access controls). Think of these as different folders for different units of a business. You can have an HR caslib with HR data, Marketing caslib with Marketing data, etc.
Manually added Caslib - These can be added at any point to perform ad-hoc analysis within CAS.
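For illustration only, a manually added caslib is created with the addCaslib action; the name and path below are hypothetical and adding caslibs requires appropriate permissions:
## conn.addCaslib(name="mydata", path="/shared/data")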
Caslib Scope
Session Caslib - When a caslib is defined without including the GLOBAL option, the caslib is a session-scoped caslib. When a table is loaded to the CAS server with session-scoped caslib, the table is available to that specific CAS user session only. Think of session scope as local to that specific session only.
Global Caslib - These are available to anyone who has access to the CAS Server (dependent on access controls). The name of these caslibs must be unique across all CAS sessions on the server.
For additional information about caslibs:
- Watch SAS® Viya™ CAS Libraries (Caslibs) Simplified
- SAS® Cloud Analytic Services 3.5: Fundamentals - Caslibs
a. View all Available Caslibs
Depending on your CAS server setup, you might already have one or more caslibs configured and ready to use.
If you do not have ReadInfo permissions on a caslib, then you will not see the caslib.
View all available caslibs using the casLibInfo action.
End of explanation
conn.fileInfo(caslib="casuser")
Explanation: b. View Available Files in the casuser Caslib
End of explanation
conn.tableInfo(caslib="casuser")
Explanation: c. View All Available In-Memory Tables in the casuser Caslib
NOTE: Tables need to be in-memory to be processed by CAS.
End of explanation
conn.fileInfo(caslib="casuser")
Explanation: <a id='5'>5. Loading Data Into CAS
There are various ways of loading data into CAS:
1. server-side data
2. client-side parsed
3. client-side files uploaded and parsed on the server
They follow these naming conventions:
load*: Loads server-side data
read_*: Uses client-side parsers and then uploads the result into CAS
upload*: Uploads client-side files as is, which are parsed on the server
For more information about loading client side files to CAS: Two Simple Ways to Import Local Files with Python in CAS (Viya 3.5)
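For the client-side routes, a hedged sketch of upload_file (the local file path is hypothetical; this notebook itself loads server-side data below):
## castbl_up = conn.upload_file("C:/data/cars.csv",
##                              casout={"name":"cars_upload", "caslib":"casuser", "replace":True})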
a. Loading Server-Side Data into Memory.
View the available files in the casuser caslib.
End of explanation
# 1. Load the table into CAS. Will return a CASResults object.
conn.loadtable(path="cars.sashdat", caslib="casuser",
casout={"caslib":"casuser","name":"cars", "replace":True})
conn.tableInfo(caslib="casuser")
# 2. Create a reference to the in-memory table
castbl = conn.CASTable("cars",caslib="casuser")
Explanation: There are two methods that can be used to load server-side data into CAS:
- loadtable - Loads a table into CAS and returns a CASResults object.
- load_path - Convenience method. Similar to loadtable, load_path loads a table into CAS and returns a reference to that CAS table in one step.
loadtable
End of explanation
# Load the table into CAS and create a reference to that table in one step.
##castbl = conn.load_path(path="cars.sashdat", caslib="casuser",
## casout={"caslib":"casuser","name":"cars", "replace":True})
Explanation: load_path
End of explanation
type(castbl)
print(castbl)
Explanation: b. Local vs CAS Data
A CASTable object is a reference to data in the CAS server. Actions or methods run on a CASTable object are processed in CAS.
End of explanation
castbl.head()
Explanation: View the first 5 rows of the in-memory table using the head method. The head method is not a CAS action, so it will not return a CASResults object. The head method is using the API to CAS. The API to CAS contains many of the pandas methods you are familiar with. These methods process the data in CAS and can return a variety of different objects locally.
SWAT API Reference
End of explanation
type(castbl.head())
Explanation: The result of using the head method is a SASDataFrame. SASDataFrames are located locally.
End of explanation
castbl.fetch(to=5)
Explanation: You can use the fetch CAS action to return similar results. The processing of the fetch CAS action occurs in CAS and returns a CASResults object to your local machine. When using a CAS action a CASResults object is always returned.
End of explanation
type(castbl.fetch(to=5))
Explanation: CASResults objects are local.
End of explanation
type(castbl.fetch(to=5)['Fetch'])
Explanation: SASDataFrame objects can be contained in the CASResults object.
End of explanation
swat.options.cas.trace_actions = True
swat.options.cas.trace_ui_actions = True
Explanation: Turn on tracing.
End of explanation
castbl.shape
Explanation: <a id='6'>6. Exploring Table Details
a. View the Number of Rows and Columns in the In-Memory Table.
Use shape to return a tuple of the CAS data.
End of explanation
castbl.numRows()
Explanation: Use the numRows CAS action to show the number of rows in a CAS table.
End of explanation
castbl.tableInfo()
Explanation: Use the tableInfo CAS action to show information about a CAS table.
End of explanation
def details(tbl):
sasdf = tbl.tableInfo()["TableInfo"].set_index("Name").loc[:,["Rows","Columns"]]
return sasdf
details(castbl)
Explanation: Create a function to return the in-memory table name, number of rows and columns.
End of explanation
castbl.columnInfo()
castbl.dtypes
Explanation: b. View the Column Information
End of explanation
castbl.summary()
Explanation: <a id='7'>7. Data Exploration
a. Summary Statistics
Using the summary CAS action to generate descriptive statistics of numeric variables.
End of explanation
castbl.describe()
Explanation: Using the describe method.
End of explanation
swat.options.cas.trace_actions = False
swat.options.cas.trace_ui_actions = False
Explanation: Turn off tracing.
End of explanation
castbl.distinct()
Explanation: b. Distinct Values
Use the distinct CAS action to calculate the number of distinct values in the cars table.
End of explanation
castbl.distinct()['Distinct'] \
.set_index("Column") \
.loc[:,['NMiss']] \
.plot(kind='bar')
Explanation: Plot the number of missing values for each column.
End of explanation
castbl.distinct(inputs=["Origin","Type","Make"])
Explanation: Use the distinct CAS action to calculate the number of distinct values in the Origin, Type and Make columns.
End of explanation
castbl.distinct(inputs=["Origin","Type","Make"],
casout={"caslib":"casuser", ## Create a new CAS table in casuser
"name":"castblDistinct", ## Name the table castblDistinct
"replace":True}) ## Replace if exists
Explanation: Create a new CAS table named castblDistinct with the number of distinct values for the specified inputs.
End of explanation
conn.tableInfo()
Explanation: View the available in-memory tables.
End of explanation
castbl.Cylinders.nunique()
castbl.Cylinders.isnull().sum()
Explanation: Using Pandas methods.
End of explanation
castbl.freq(inputs=["Origin"])
Explanation: c. Frequency
View the frequency of the Origin column using the freq CAS action.
End of explanation
## Perform the processing in CAS and store the summary in the originFreq object.
originFreq = castbl.freq(inputs=["Origin"])['Frequency']
## Graph the summarized local data.
originFreq.loc[:,["CharVar","Frequency"]] \
.sort_values(by="Frequency", ascending=False) \
.set_index("CharVar") \
.plot(kind="bar")
Explanation: Plot the results of the freq CAS action in a bar chart.
End of explanation
castbl['Origin'].value_counts().plot(kind='bar')
Explanation: Use the value_counts method. The value_counts method will process in CAS and return the summary locally. The plot method will create the graph locally.
End of explanation
castbl.freq(inputs=["Origin","Make","Type","DriveTrain"])
Explanation: Perform a frequency on multiple columns. The final CASResults object will contain a SASDataFrame with a frequency of each of the specified columns in one table.
End of explanation
distinctCars = castbl.distinct()['Distinct']
distinctCars.loc[distinctCars["NDistinct"]<=20,:]
Explanation: d. Create a Frequency Table of all Columns with Less Than 20 Distinct Values.
Use the distinct CAS action to find the number of distinct values for each column and filter for all columns with less than 20 distinct values.
End of explanation
distinctCars = distinctCars.loc[distinctCars["NDistinct"]<=20,:]
Explanation: Create a variable named distinctCars that holds the SASDataFrame from the results above.
End of explanation
listCars = distinctCars.Column.unique().tolist()
print(listCars)
Explanation: Create a list of column names that have less than 20 distinct values named listCars.
End of explanation
castbl.freq(inputs=listCars)
Explanation: Use the list from above to create a frequency table of columns with less than 20 distinct values.
End of explanation
castbl[castbl["Make"]=="Toyota"].head()
castbl[(castbl["Make"]=="Toyota") & (castbl["Type"]=="Hybrid")].head()
Explanation: <a id='8'>8. Filtering Data
a. Subset Using Pandas Indexing Expressions.
End of explanation
castbl.query("Make='Toyota'").head()
castbl.query("Make='Toyota' and Type='Hybrid'").head()
Explanation: b. Subset Using the Query Method.
End of explanation
castbl["avgMPG"] = (castbl["MPG_City"] + castbl["MPG_Highway"])/2
castbl
castbl.head()
Explanation: <a id='9'>9. Data Preparation
Create a new column that calculates the average of MPG_City and MPG_Highway. Processing done in CAS.
End of explanation
cols = ['Make', 'Type', 'Origin', 'DriveTrain','Invoice',
'EngineSize', 'Cylinders', 'Horsepower', 'MPG_City',
'MPG_Highway', 'Weight', 'Wheelbase', 'Length', 'avgMPG']
castbl = castbl[cols]
castbl
castbl.head()
Explanation: Remove the Model and MSRP columns.
End of explanation
conn.actionSetInfo(all=True)['setinfo']
Explanation: <a id='10'>10. SQL
a. Load the fedSQL CAS Action Set
View all available (not just loaded) CAS action sets by using the all=True parameter.
End of explanation
actionSets = conn.actionSetInfo(all=True)['setinfo']
actionSets.loc[actionSets['actionset'].str.upper().str.contains("SQL")]
Explanation: Search the actionset column for any CAS action set that contains the string sql.
End of explanation
conn.loadActionSet(actionSet="fedSQL")
conn.actionSetInfo()
conn.help(actionSet="fedSQL")
Explanation: Load the fedSQL action set using the loadActionSet action.
End of explanation
conn.execdirect("""select *
                   from cars
                   limit 10""")
Explanation: b. Run SQL Queries in CAS
Run a query to view the first 10 rows of the cars table.
End of explanation
conn.execdirect("""select Make, round(avg(MSRP)) as avgMSRP
                   from cars
                   group by Make""")
Explanation: Find the average MSRP of each car make.
End of explanation
conn.execdirect("""create table make_avg as
                   select Make, round(avg(MSRP)) as test
                   from cars
                   group by Make""")
conn.tableInfo(caslib="casuser")
Explanation: Create a table named make_avg that contains the average MSRP of each car make.
End of explanation
castbl.head()
Explanation: <a id='11'>11. Analyzing Data
Preview the table.
End of explanation
castbl.correlation(inputs=["MSRP","EngineSize","HorsePower","MPG_City"], simple=False)
Explanation: a. Correlation with a Heat Map
Use the correlation action and remove the simple statistics. Processing will be done in CAS and the summary table will be returned locally.
End of explanation
dfCorr = castbl.correlation(inputs=["MSRP","EngineSize","HorsePower","MPG_City"], simple=False)['Correlation']
dfCorr
Explanation: Store the SASDataFrame object in the dfCorr variable. A SASDataFrame object is local.
End of explanation
dfCorr.set_index("Variable", inplace=True)
dfCorr
Explanation: Replace the default index with the Variable column
End of explanation
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.heatmap(dfCorr, cmap="YlGnBu", annot=True)
ax.set_ylim(len(dfCorr),-.05) ## Truncation with defaults. Need to adjust limits. Fixed in newer version of matplotlib.
Explanation: Use seaborn to produce a heatmap.
End of explanation
castbl.histogram(inputs=["avgMPG"])
Explanation: b. Histogram
Run the histogram action to return a summary of the midpoints and percents. Processing occurs in CAS.
End of explanation
mpgHist = castbl.histogram(inputs="avgMPG")['BinDetails']
Explanation: Store the BinDetails in the variable mpgHist.
End of explanation
mpgHist['Percent'] = mpgHist['Percent'].round(1)
mpgHist['MidPoint'] = mpgHist['MidPoint'].round(1)
mpgHist[["MidPoint","Percent"]].head()
Explanation: Round the columns Percent and MidPoint.
End of explanation
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.barplot(x="MidPoint", y="Percent", data=mpgHist)
ax.set_title("Histogram of MPG")
Explanation: Plot the histogram.
End of explanation
castbl.histogram(inputs=["avgMPG", "HorsePower"])
Explanation: Specify multiple columns in the histogram action.
End of explanation
carsHist = castbl.histogram(inputs=["avgMPG", "HorsePower"])['BinDetails']
Explanation: Store the results from the histogram CAS action in the carsHist variable.
End of explanation
list(carsHist.Variable.unique())
Explanation: Find the unique values in the carsHist SASDataFrame.
End of explanation
for i in list(carsHist.Variable.unique()):
carsHist['Percent'] = carsHist['Percent'].round(1)
carsHist['MidPoint'] = carsHist['MidPoint'].round(1)
df = carsHist[carsHist["Variable"]==i]
df.plot.bar(x='MidPoint', y='Percent')
Explanation: Run a loop through the list of unique values and plot a histogram for each.
End of explanation
castbl.head()
castbl
Explanation: <a id='12'>12. Promote the Table to use in SAS Visual Analytics
End of explanation
castbl.save(name="updatedCars.sashdat", caslib="casuser")
Explanation: Two Options:
Save the castbl object as a physical file
Create a new in-memory table from the castbl object.
a. Save the castbl Object as a Physical File.
Use the save CAS action to save the castbl object as a physical file. Here we will save it as a sashdat file.
End of explanation
conn.fileInfo(caslib="casuser")
Explanation: View the available files in the casuser caslib. Notice the updatedCars.sashdat file is available.
End of explanation
castbl.partition(casout={"caslib":"casuser","name":"cars_update"})
Explanation: b. Create a New In-Memory Table From the castbl Object.
The partition CAS action has a variety of options, but if we leave the defaults we can take the castbl object (reference to the cars table with a few columns dropped and the new avgMPG column) and create a new in-memory table without saving a physical file.
Here a new in-memory table will be created called cars_update in the casuser caslib from the castbl object.
End of explanation
conn.tableInfo(caslib="casuser")
Explanation: View the new in-memory table cars_update.
End of explanation
conn.fileInfo(caslib="casuser")
Explanation: View the files in the casuser caslib. Notice no new files were created.
End of explanation
conn.tableInfo(caslib="casuser")['TableInfo'][['Name','Rows','Columns','Global']]
Explanation: c. Promote a Table to Global Scope.
View all the tables in the casuser caslib. Focus on the specified columns. Notice no table is global scope.
End of explanation
conn.promote(name="cars_update", caslib="casuser")
Explanation: Use the promote CAS action to promote a table to global scope. Global scope allows other users and software like SAS Visual Analytics to use the in-memory table. Currently, all the in-memory tables are session scope. That is, only this account on this connection to CAS can see the in-memory tables.
In this example, the cars_update table is promoted to global scope in the casuser caslib. This only allows the current account (student) to access this table since it is promoted in the casuser caslib. If a table is promoted to global scope in a shared caslib, other users can see that table.
DEMO: Go to SAS Visual Analytics and see that cars_update does not exist outside of this session.
Promote the cars_update in-memory table to global scope
End of explanation
conn.tableInfo(caslib="casuser")['TableInfo'][['Name','Rows','Columns','Global']]
Explanation: Notice only the cars_update table is global.
End of explanation |
15,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2015, 2016 Sebastian Raschka
Li-Yi Wei
https
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Building, compiling, and running expressions with Theano
Developed by the LISA lab, led by Yoshua Bengio, starting in 2008.
Harness multi/many-core CPU/GPU without the burden of
* parallel computing code
* memory management across processors
<img src='./images/13_01.png' width=80%>
What is Theano?
A machine learning library with interface in Python.
* with more speed/memory optimization
Focus on tensors as the core data structure.
Tensors are multi-dimensional arrays.
* rank 0 for scalars
* rank 1 for vectors
* rank 2 for matrices
Symbolic manipulation
* build computation graphs
* automatic and symbolic differentiation
* send compiled expressions/graphs to CPU/GPU for execution
First steps with Theano
http
Step3: Steps for using Theano
Step4: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see
Step5: You can run a Python script on CPU (e.g. for prototyping and debug) via
Step6: Memory management
shared
Variable with storage that is shared between functions that it appears in.
* can have initial or constant values, e.g. weights of a neural network
* retain value across function calls
* cannot be used as input to a function
* can be updated after each function call
Can be more efficient than input variable
* update in place instead of transferring around
* Theano can then optimize the storage across CPUs and GPUs
More info about memory management in Theano can be found under
Step7: given
input
Step8: Wrapping things up
Step9: Implement the training function
Notice
Step10: Plotting the sum of squared errors cost vs epochs.
Step11: Make prediction
Step12: Theano for neural networks
Also use Keras library
Choosing activation functions for feedforward neural networks
There are various activation functions for multi-layer neural networks.
* in theory we can use any differentiable function
* in practice we want (1) non-linearity and (2) good convergence for gradient descent
Sigmoid is one we have seen.
* mimics biological neurons
* converges slowly for deep networks (vanishing gradients)
Logistic function recap
The logistic function, often just called "sigmoid function" is in fact a special case of a sigmoid function.
Linear input $z$
Step13: Multiple outputs
One-hot encoding for multi-class classification.
* $K$ outputs for $K$ classes
Logistic activation outputs cannot be directly interpreted as probabilities.
Example
A MLP perceptron with
* 3 hidden units + 1 bias unit in the hidden unit
* 3 output units
Step14: The outputs do not sum to 1 and thus are not probabilities.
Need normalization for probability
* divide all outputs by their summation
* softmax, which should be applied directly to the linear inputs z (in place of the logistic activation)
OK for classification
Step15: Estimating probabilities in multi-class classification via the softmax function
The softmax function
* is a generalization of the logistic function
* allows us to compute meaningful class probabilities in multi-class settings
* i.e. multinomial logistic regression
$z$
Step16: The class probabilities sum to 1.
The predicted class is the same as logistic regression.
Step17: Broadening the output spectrum using a hyperbolic tangent
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$
\begin{align}
\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}
\end{align}
$$
$\phi_{tanh}$ is a rescaled version of $\phi_{logistic}$
Step19: Different activation functions
<img src='./images/13_05.png' width=100%>
Training neural networks efficiently using Keras
A library (started in early 2015) to facilitate neural network training.
* built on top of Theano
* intuitive and popular API
* front-end for Theano and TensorFlow
Once you have Theano installed, Keras can be installed via
pip install Keras
Loading MNIST
1) Download the 4 MNIST datasets from http
Step20: Multi-layer Perceptron in Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
Step21: One-hot encoding of the class variable
Step22: Implement a neural network
fully connected with 2 hidden layers
tanh for hidden layers
softmax for output layer
cross entropy loss function (to match softmax output)
SGD optimization | Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,matplotlib,theano,keras
Explanation: Copyright (c) 2015, 2016 Sebastian Raschka
Li-Yi Wei
https://github.com/1iyiwei/pyml
MIT License
Python Machine Learning - Code Examples
Chapter 13 - Parallelizing Neural Network Training with Theano
We have seen how to write a multi-layer perceptron from scratch.
We can also just use existing libraries.
* Theano, Torch, TensorFlow, Caffe, etc.
Advantages of Theano
* Python interface
* Platform support
* GPU support for performance
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
from IPython.display import Image
%matplotlib inline
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
Overview
Building, compiling, and running expressions with Theano
What is Theano?
First steps with Theano
Configuring Theano
Working with array structures
Wrapping things up – a linear regression example
Choosing activation functions for feedforward neural networks
Logistic function recap
Estimating probabilities in multi-class classification via the softmax function
Broadening the output spectrum by using a hyperbolic tangent
Training neural networks efficiently using Keras
Summary
End of explanation
import theano
from theano import tensor as T
import numpy as np
# define expression
# which can be visualized as a graph
x1 = T.scalar()
w1 = T.scalar()
w0 = T.scalar()
z1 = w1 * x1 + w0
# compile
net_input = theano.function(inputs=[w1, x1, w0], outputs=z1)
# execute
answer = net_input(2.0, 1.0, 0.5)
print(answer)
answer
# define
b = T.scalar('b')
x = T.vector('x')
W = T.matrix('W')
y = x.dot(W.transpose())
z = W.dot(x) + b
# similar to python function
# theano function can return multiple outputs
f = theano.function(inputs = [x, W, b], outputs = [y, z])
output_y, output_z = f([1, 2], [[3, 4], [5, 6]], 1)
# output_y, output_z = f([[1, 2]], [[3, 4]], 1) # won't work as x is a vector not matrix
# output_y, output_z = f([1, 2], [3, 4], 1) # won't work as W is a matrix not vector
# output_y, output_z = f([1, 2], [[3, 4]], [1]) # won't work as b is a scalar not a vector/matrix
print(output_y)
print(output_z)
# quadratic polynomial root example
# ax^2 + bx + c = 0
a = T.scalar('a')
b = T.scalar('b')
c = T.scalar('c')
core = b*b - 4*a*c
root_p = (-b + np.sqrt(core))/(2*a)
root_m = (-b - np.sqrt(core))/(2*a)
# compile
f = theano.function(inputs = [a, b, c], outputs = [root_p, root_m])
# run
polys = [[1, 2, 1],
[1, -7, 12],
[1, 0, 1]
]
for poly in polys:
a, b, c = poly
root1, root2 = f(a, b, c)
print(root1, root2)
Explanation: Building, compiling, and running expressions with Theano
Developed by the LISA lab, led by Yoshua Bengio, starting in 2008.
Harness multi/many-core CPU/GPU without the burden of
* parallel computing code
* memory management across processors
<img src='./images/13_01.png' width=80%>
What is Theano?
A machine learning library with interface in Python.
* with more speed/memory optimization
Focus on tensors as the core data structure.
Tensors are multi-dimensional arrays.
* rank 0 for scalars
* rank 1 for vectors
* rank 2 for matrices
Symbolic manipulation
* build computation graphs
* automatic and symbolic differentiation
* send compiled expressions/graphs to CPU/GPU for execution
First steps with Theano
http://deeplearning.net/software/theano/introduction.html
Depending on your system setup, it is typically sufficient to install Theano via
pip install Theano
For more help with the installation, please see: http://deeplearning.net/software/theano/install.html
Introducing the TensorType variables. For a complete list, see http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors
End of explanation
# default configuration
print(theano.config.floatX)
# we can change it like this
theano.config.floatX = 'float32'
print(theano.config.floatX)
Explanation: Steps for using Theano:
* define symbols and functions
* compile the code
* execute the code
Each variable has a specific type (dtype)
* trade-off between accuracy and cost (speed and storage)
* we have to choose; good for control, bad as burden
Configuring Theano
Processors:
- Modern CPUs support 64-bit memory address.
- GPUs (and old CPUs) remain in 32-bit.
Theano supports both 32 and 64 bits.
We can configure Theano to use either: float32 (for 32-bit processors) or float64 (for 64-bit processors).
For more options, see
- http://deeplearning.net/software/theano/library/config.html
- http://deeplearning.net/software/theano/library/floatX.html
End of explanation
print(theano.config.device)
Explanation: To change the float type globally, execute
export THEANO_FLAGS=floatX=float32
in your bash shell. Or execute Python script as
THEANO_FLAGS=floatX=float32 python your_script.py
Running Theano on GPU(s). For prerequisites, please see: http://deeplearning.net/software/theano/tutorial/using_gpu.html
Note that float32 is recommended for GPUs; float64 on GPUs is currently still relatively slow.
End of explanation
import numpy as np
# define
# if you are running Theano on 64 bit mode,
# you need to use dmatrix instead of fmatrix
x = T.matrix(name='x') # tensor with arbitrary shape
x_sum = T.sum(x, axis=0)
# compile
calc_sum = theano.function(inputs=[x], outputs=x_sum)
# execute (Python list)
ary = [[1, 2, 3], [1, 2, 3]]
print('Column sum:', calc_sum(ary))
# execute (NumPy array)
ary = np.array(ary, dtype=theano.config.floatX)
print('Column sum:', calc_sum(ary))
# name can help debug
y = T.matrix(name='hello')
z = T.matrix()
print(y) # will print out variable name
print(z) # will print out variable type
print(y.type()) # will print out type
# explicit type specification
wf = T.fmatrix(name='wfmatrix')
wd = T.dmatrix(name='wdmatrix')
print(wf.type())
print(wd.type())
Explanation: You can run a Python script on CPU (e.g. for prototyping and debug) via:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU (e.g. for real computation) via:
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
It may also be convenient to create a .theanorc file in your home directory to make those configurations permanent. For example, to always use float32, execute
echo -e "\n[global]\nfloatX=float32\n" >> ~/.theanorc
Or, create a .theanorc file manually with the following contents
[global]
floatX = float32
device = gpu
Working with array structures
This is an example code to work with tensors.
Create a $2 \times 3$ tensor, and calculate its column sum.
End of explanation
# initialize
x = T.matrix(name='x')
b = theano.shared(np.asarray([[1]], dtype=theano.config.floatX), name='b')
w = theano.shared(np.asarray([[0.0, 0.0, 0.0]],
dtype=theano.config.floatX))
# w = w + 1.0 # this will cause error
z = x.dot(w.T) + b
update = [[w, w + 1.0]] # update w after each function call
# compile
f = theano.function(inputs=[x],
updates=update,
outputs=z)
# won't compile as shared variable cannot be used as input
# g = theano.function(inputs=[x, b], outputs = z)
# execute
x_data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
for i in range(5):
print('z%d:' % i, f(x_data))
Explanation: Memory management
shared
Variable with storage that is shared between functions that it appears in.
* can have initial or constant values, e.g. weights of a neural network
* retain value across function calls
* cannot be used as input to a function
* can be updated after each function call
Can be more efficient than input variable
* update in place instead of transferring around
* Theano can then optimize the storage across CPUs and GPUs
More info about memory management in Theano can be found under:
* http://deeplearning.net/software/theano/tutorial/aliasing.html
* https://www.quora.com/What-is-the-meaning-and-benefit-of-shared-variables-in-Theano
End of explanation
# define
num_samples = 10
samples = np.asarray([i for i in range(num_samples)],
dtype=theano.config.floatX)
# samples = theano.shared(samples)
x = T.lscalar(name='index')
#y = theano.shared(np.asscalar(np.array([1], dtype=theano.config.floatX)))
y = T.vector(name='samples')
w = theano.shared(np.asscalar(np.array([0], dtype=theano.config.floatX)))
z = y[x]*w
# compile
f = theano.function(inputs = [x],
updates = [[w, w+1]],
givens = {y: samples},
outputs = z)
# run
for i in range(np.prod(samples.shape)):
print(f(i))
# initialize
x_data = np.array([[1, 2, 3]], dtype=theano.config.floatX)
x = T.matrix(name='hi')
w = theano.shared(np.asarray([[0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=theano.config.floatX))
# an input variable can be given
b_data = np.array([[-1, 0, 1]], dtype=theano.config.floatX)
b = T.matrix(name='bias')
# a shared variable can be given
c_data = np.array([[4, 5, 6]], dtype=theano.config.floatX)
c = theano.shared(np.asarray([[0]], dtype=theano.config.floatX))
z = x.dot(w.T) + b + c
updates = [[w, w + 1.0]]
givens = {b: b_data, c: c_data}
# compile
net_input = theano.function(inputs=[x],
updates=updates,
givens=givens,
outputs=z)
# execute
for i in range(5):
print('z:', net_input(x_data))
Explanation: given
input: transfer from CPU to GPU multiple times
* e.g. multiple epochs
shared: retained values across functions, can be updated after each function call
* like static function variables
* not input for function call
* e.g. network weights
given: transfer from CPU to GPU once
* like constant variables
* shared between multiple function calls
* not input for function call
* values specified, for input or shared variables, during function compilation
* e.g. a mini-batch
If we use inputs, a dataset is transferred from the CPU to the GPU multiple times, for example, if we iterate over a dataset multiple times (epochs) during gradient descent.
We can use the givens variable to insert values into the graph before compiling it. Using this approach we can reduce the number of transfers from RAM (via CPUs) to GPUs to speed up learning with shared variables.
Via givens, we can keep the dataset on the GPU if it fits (e.g., a mini-batch).
End of explanation
import numpy as np
X_train = np.asarray([[0.0], [1.0], [2.0], [3.0], [4.0],
[5.0], [6.0], [7.0], [8.0], [9.0]],
dtype=theano.config.floatX)
y_train = np.asarray([1.0, 1.3, 3.1, 2.0, 5.0,
6.3, 6.6, 7.4, 8.0, 9.0],
dtype=theano.config.floatX)
Explanation: Wrapping things up: A linear regression example
Model:
$
y = \sum_{i=0}^n w_i x_i = \mathbf{w}^T \mathbf{x}
$
with $x_0 = 1$.
Given a collection of sample data ${\mathbf{x^{(i)}}, y^{(i)} }$, find the line $\mathbf{w}$ that minimizes the regression error:
$$
\begin{align}
L(X, Y, \mathbf{w})
= \sum_i \left( y^{(i)} - \hat{y}^{(i)} \right)^2
= \sum_i \left( y^{(i)} - \mathbf{w}^T \mathbf{x}^{(i)} \right)^2
\end{align}
$$
2D case
$
y = w_0 + w_1 x
$
<img src='./images/10_01.png' width=90%>
Create some training data
End of explanation
import theano
from theano import tensor as T
import numpy as np
def train_linreg(X_train, y_train, eta, epochs):
costs = []
# Initialize arrays
eta0 = T.scalar('eta0') # learning rate
y = T.vector(name='y')
X = T.matrix(name='X')
w = theano.shared(np.zeros(
shape=(X_train.shape[1] + 1),
dtype=theano.config.floatX),
name='w')
# calculate cost
y_pred = T.dot(X, w[1:]) + w[0]
errors = y - y_pred
cost = T.sum(T.pow(errors, 2))
# perform gradient update
gradient = T.grad(cost, wrt=w) # symbolic differentialtion
update = [(w, w - eta0 * gradient)]
# compile model
train = theano.function(inputs=[eta0],
outputs=cost,
updates=update,
givens={X: X_train,
y: y_train})
for _ in range(epochs):
# since eta is input
# we can gradually change the learning rate
costs.append(train(eta))
return costs, w
Explanation: Implement the training function
Notice:
* the symbolic differentiation for the gradient part
* how different variable types (input, shared, givens, output) are used
End of explanation
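For reference, the gradient that T.grad derives symbolically from the sum-of-squared-errors cost above is the standard least-squares result (with $x_0^{(i)} = 1$ for the bias term $w_0$):
$$
\begin{align}
\frac{\partial L}{\partial w_j} = -2 \sum_{i} \left( y^{(i)} - \hat{y}^{(i)} \right) x_{j}^{(i)}
\end{align}
$$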
import matplotlib.pyplot as plt
costs, w = train_linreg(X_train, y_train, eta=0.001, epochs=10)
plt.plot(range(1, len(costs)+1), costs)
plt.tight_layout()
plt.xlabel('Epoch')
plt.ylabel('Cost')
plt.tight_layout()
# plt.savefig('./figures/cost_convergence.png', dpi=300)
plt.show()
Explanation: Plotting the sum of squared errors cost vs epochs.
End of explanation
def predict_linreg(X, w):
Xt = T.matrix(name='X')
y_pred = T.dot(Xt, w[1:]) + w[0]
predict = theano.function(inputs=[Xt], givens={w: w}, outputs=y_pred)
return predict(X)
plt.scatter(X_train, y_train, marker='s', s=50)
plt.plot(range(X_train.shape[0]),
predict_linreg(X_train, w),
color='gray',
marker='o',
markersize=4,
linewidth=3)
plt.xlabel('x')
plt.ylabel('y')
plt.tight_layout()
# plt.savefig('./figures/linreg.png', dpi=300)
plt.show()
Explanation: Make prediction
End of explanation
# note that first element (X[0] = 1) to denote bias unit
X = np.array([[1, 1.4, 1.5]])
w = np.array([0.0, 0.2, 0.4])
def net_input(X, w):
z = X.dot(w)
return z
def logistic(z):
return 1.0 / (1.0 + np.exp(-z))
def logistic_activation(X, w):
z = net_input(X, w)
return np.asscalar(logistic(z))
print('P(y=1|x) = %.3f' % logistic_activation(X, w))
Explanation: Theano for neural networks
Also use Keras library
Choosing activation functions for feedforward neural networks
There are various activation functions for multi-layer neural networks.
* in theory we can use any differentiable function
* in practice we want (1) non-linearity and (2) good convergence for gradient descent
Sigmoid is one we have seen.
* mimics biological neurons
* converges slowly for deep networks (vanishing gradients)
Logistic function recap
The logistic function, often just called "sigmoid function" is in fact a special case of a sigmoid function.
Linear input $z$:
$$
\begin{align}
z &= w_0x_{0} + \dots + w_mx_{m}
\
&= \sum_{j=0}^{m} x_{j}w_{j}
\
&= \mathbf{w}^T\mathbf{x}
\end{align}
$$
$w_0$ is the bias term, matching $x_0 = 1$
Logistic activation function:
$$\phi_{logistic}(z) = \frac{1}{1 + e^{-z}}$$
Output range: (0, 1)
* the output can be interpreted as the probability of the positive class (it exceeds 0.5 when $z > 0$)
Concrete example
End of explanation
# W : array, shape = [n_output_units, n_hidden_units+1]
# Weight matrix for hidden layer -> output layer.
# note that first column (A[:][0] = 1) are the bias units
W = np.array([[1.1, 1.2, 1.3, 0.5],
[0.1, 0.2, 0.4, 0.1],
[0.2, 0.5, 2.1, 1.9]])
# A : array, shape = [n_hidden+1, n_samples]
# Activation of hidden layer.
# note that first element (A[0][0] = 1) is for the bias units
A = np.array([[1.0],
[0.1],
[0.3],
[0.7]])
# Z : array, shape = [n_output_units, n_samples]
# Net input of output layer.
Z = W.dot(A)
y_probas = logistic(Z)
print('Probabilities:\n', y_probas)
Explanation: Multiple outputs
One-hot encoding for multi-class classification.
* $K$ outputs for $K$ classes
Logistic activation outputs cannot be directly interpreted as probabilities.
Example
A MLP perceptron with
* 3 hidden units + 1 bias unit in the hidden unit
* 3 output units
End of explanation
y_class = np.argmax(Z, axis=0)
print('predicted class label: %d' % y_class[0])
Explanation: The outputs do not sum to 1 and thus are not probabilities.
Need normalization for probability
* divide all outputs by their summation
* softmax, which should be applied directly to the linear inputs z (in place of the logistic activation)
OK for classification:
End of explanation
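A quick sketch of the first option above (dividing the logistic outputs by their sum), for comparison with the softmax used next; y_probas is the logistic output computed earlier:
# naive normalization: forces the outputs to sum to 1, but is not the softmax
y_norm = y_probas / y_probas.sum()
print('Normalized:\n', y_norm)
print('Sum:', y_norm.sum())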
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
def softmax_activation(X, w):
z = net_input(X, w)
return softmax(z)
y_probas = softmax(Z) # same Z computed above
print('Probabilities:\n', y_probas)
y_probas.sum()
Explanation: Estimating probabilities in multi-class classification via the softmax function
The softmax function
* is a generalization of the logistic function
* allows us to compute meaningful class probabilities in multi-class settings
* i.e. multinomial logistic regression
$z$: linear input as usual
$K$: number of classes
$$
\begin{align}
P(y=j|z) =\phi_{softmax}(z) = \frac{e^{z_j}}{\sum_{k=1}^K e^{z_k}}
\end{align}
$$
$P(y=j | z)$: probability of class $j$ for input $z$ in range $(0, 1)$
<img src="./images/bonus_softmax_1.png" width=100%>
End of explanation
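A short check of the claim that softmax generalizes the logistic function: with two classes and net inputs $(z, 0)$, the softmax probability of the first class reduces to the logistic sigmoid:
$$
\begin{align}
\frac{e^{z}}{e^{z}+e^{0}} = \frac{1}{1+e^{-z}} = \phi_{logistic}(z)
\end{align}
$$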
y_class = np.argmax(Z, axis=0)
y_class
Explanation: The class probabilities sum to 1.
The predicted class is the same as logistic regression.
End of explanation
def tanh(z):
e_p = np.exp(z)
e_m = np.exp(-z)
return (e_p - e_m) / (e_p + e_m)
import matplotlib.pyplot as plt
z = np.arange(-5, 5, 0.005)
log_act = logistic(z)
tanh_act = tanh(z)
# alternatives:
# from scipy.special import expit
# log_act = expit(z)
# tanh_act = np.tanh(z)
plt.ylim([-1.5, 1.5])
plt.xlabel('net input $z$')
plt.ylabel('activation $\phi(z)$')
plt.axhline(1, color='black', linestyle='--')
plt.axhline(0.5, color='black', linestyle='--')
plt.axhline(0, color='black', linestyle='--')
plt.axhline(-1, color='black', linestyle='--')
plt.plot(z, tanh_act,
linewidth=2,
color='black',
label='tanh')
plt.plot(z, log_act,
linewidth=2,
color='lightgreen',
label='logistic')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/activation.png', dpi=300)
plt.show()
Explanation: Broadening the output spectrum using a hyperbolic tangent
Another special case of a sigmoid function, it can be interpreted as a rescaled version of the logistic function.
$$
\begin{align}
\phi_{tanh}(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}
\end{align}
$$
$\phi_{tanh}$ is a rescaled version of $\phi_{logistic}$:
$$
\begin{align}
\phi_{tanh}(z) = 2 \phi_{logistic}(2z) - 1
\end{align}
$$
Output range: (-1, 1)
End of explanation
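A quick numerical check of the rescaling identity above, reusing z and logistic from the previous cell:
# tanh(z) equals 2*logistic(2z) - 1 for every z in the plotted range
print(np.allclose(np.tanh(z), 2 * logistic(2 * z) - 1))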
import os
import os.path
import struct
import gzip
import numpy as np
def open_mnist(full_path):
if full_path.find('.gz') >= 0:
return gzip.open(full_path, 'rb')
else:
return open(full_path, 'rb')
def pick_mnist(path, name, exts):
for ext in exts:
full_path = os.path.join(path, name + ext)
if os.path.isfile(full_path):
return full_path
# none of the exts options works
return None
def load_mnist(path, kind='train', exts=['', '.gz']):
    """Load MNIST data from `path`"""
labels_path = pick_mnist(path, kind + '-labels-idx1-ubyte', exts)
images_path = pick_mnist(path, kind + '-images-idx3-ubyte', exts)
with open_mnist(labels_path) as lbpath:
magic, n = struct.unpack('>II', lbpath.read(8))
if(magic != 2049):
raise IOError(str(magic) + ' != ' + str(2049))
# np.fromfile does not work with gzip open
# http://stackoverflow.com/questions/15966335/efficient-numpy-fromfile-on-zipped-files
# labels = np.fromfile(lbpath, dtype=np.uint8)
content = lbpath.read()
labels = np.frombuffer(content, dtype=np.uint8)
if(len(labels) != n):
raise IOError(str(len(labels)) + ' != ' + str(n))
with open_mnist(images_path) as imgpath:
magic, num, rows, cols = struct.unpack(">IIII", imgpath.read(16))
if(magic != 2051):
raise IOError(str(magic) + ' != ' + str(2051))
# images = np.fromfile(imgpath, dtype=np.uint8).reshape(num, rows*cols)
content = imgpath.read()
images = np.frombuffer(content, dtype=np.uint8).reshape(num, rows*cols)
if(num != len(labels)):
raise IOError(str(num) + ' != ' + str(len(labels)))
return images, labels
mnist_data_folder = os.path.join('..', 'datasets', 'mnist')
exts = ['', '.gz'] # for already gunzipped files and not yet gzipped files
X_train, y_train = load_mnist(mnist_data_folder, kind='train', exts=exts)
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist(mnist_data_folder, kind='t10k', exts=exts)
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
Explanation: Different activation functions
<img src='./images/13_05.png' width=100%>
Training neural networks efficiently using Keras
A library (started in early 2015) to facilitate neural network training.
* built on top of Theano
* intuitive and popular API
* front-end for Theano and TensorFlow
Once you have Theano installed, Keras can be installed via
pip install Keras
Loading MNIST
1) Download the 4 MNIST datasets from http://yann.lecun.com/exdb/mnist/
train-images-idx3-ubyte.gz: training set images (9912422 bytes)
train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
2) Unzip those files
3) Copy the unzipped files to a directory ./mnist
Li-Yi: I enhanced the functions below so that we can do everything (including downloading and decompression) inside ipynb.
End of explanation
import theano
theano.config.floatX = 'float32'
X_train = X_train.astype(theano.config.floatX)
X_test = X_test.astype(theano.config.floatX)
Explanation: Multi-layer Perceptron in Keras
In order to run the following code via GPU, you can execute the Python script that was placed in this directory via
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python mnist_keras_mlp.py
End of explanation
from keras.utils import np_utils
print('First 3 labels: ', y_train[:3])
y_train_ohe = np_utils.to_categorical(y_train)
print('\nFirst 3 labels (one-hot):\n', y_train_ohe[:3])
Explanation: One-hot encoding of the class variable
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD
np.random.seed(1)
model = Sequential()
model.add(Dense(input_dim=X_train.shape[1],
output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(output_dim=50,
init='uniform',
activation='tanh'))
model.add(Dense(output_dim=y_train_ohe.shape[1],
init='uniform',
activation='softmax'))
sgd = SGD(lr=0.001, decay=1e-7, momentum=.9)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=["accuracy"])
model.fit(X_train, y_train_ohe,
nb_epoch=50,
batch_size=300,
verbose=1,
validation_split=0.1 # 10% of training data for validation per epoch
)
y_train_pred = model.predict_classes(X_train, verbose=0)
print('First 3 predictions: ', y_train_pred[:3])
train_acc = np.sum(y_train == y_train_pred, axis=0) / X_train.shape[0]
print('Training accuracy: %.2f%%' % (train_acc * 100))
y_test_pred = model.predict_classes(X_test, verbose=0)
test_acc = np.sum(y_test == y_test_pred, axis=0) / X_test.shape[0]
print('Test accuracy: %.2f%%' % (test_acc * 100))
Explanation: Implement a neural network
fully connected with 2 hidden layers
tanh for hidden layers
softmax for output layer
cross entropy loss function (to match softmax output)
SGD optimization
End of explanation |
15,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Titanic Dataset
Questions
Step1: Not really what I was looking for. Was hoping to see survived and died side by side. | Python Code:
# Import magic
%matplotlib inline
# More imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
#Set general plot properties
sns.set_style("white")
sns.set_context({"figure.figsize": (18, 8)})
# Load CSV data
titanic_data = pd.read_csv('titanic_data.csv')
survived = titanic_data[titanic_data['Survived'] == 1]
died = titanic_data[titanic_data['Survived'] == 0]
gender_grouped = titanic_data.groupby('Sex')
gender_died = died.groupby('Sex')
gender_survived = survived.groupby('Sex')
gender_grouped.hist(column=['Fare', 'Age', 'Pclass'])
Explanation: Exploring Titanic Dataset
Questions:
How does gender affect different aspects of survivorship
End of explanation
# Not null ages
female_survived_nn = gender_survived.get_group('female')[pd.notnull(gender_survived.get_group('female')['Age'])]
female_died_nn = gender_died.get_group('female')[pd.notnull(gender_died.get_group('female')['Age'])]
male_survived_nn = gender_survived.get_group('male')[pd.notnull(gender_survived.get_group('male')['Age'])]
male_died_nn = gender_died.get_group('male')[pd.notnull(gender_died.get_group('male')['Age'])]
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True)
sns.distplot(female_survived_nn['Age'], kde=False, ax=ax1, bins=20)
sns.distplot(male_survived_nn['Age'], kde=False, ax=ax2, bins=20)
sns.distplot(female_died_nn['Age'], kde=False, ax=ax1, color='r', bins=20)
sns.distplot(male_died_nn['Age'], kde=False, ax=ax2, color='r', bins=20)
Explanation: Not really what I was looking for. Was hoping to see survived and died side by side.
End of explanation |
15,016 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a Dataframe as below. | Problem:
import pandas as pd
df = pd.DataFrame({'Name': ['Name1', 'Name2', 'Name3'],
'2001': [2, 1, 0],
'2002': [5, 4, 5],
'2003': [0, 2, 0],
'2004': [0, 0, 0],
'2005': [4, 4, 0],
'2006': [6, 0, 2]})
def g(df):
cols = list(df)[1:]
cols = cols[::-1]
for idx in df.index:
s = 0
cnt = 0
for col in cols:
if df.loc[idx, col] != 0:
cnt = min(cnt+1, 2)
s = (s + df.loc[idx, col]) / cnt
df.loc[idx, col] = s
return df
df = g(df.copy()) |
15,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
New functions
These are recently written functions that have not made it into the main documentation
Python Lesson
Step1: When things go wrong in your eppy script, you get "Errors and Exceptions".
To know more about how this works in python and eppy, take a look at Python
Step2: Now let us open file fname1 without setting the idd file
Step3: OK. It does not let you do that and it raises an exception
So let us set the idd file and then open the idf file
Step4: That worked without raising an exception
Now let us try to change the idd file. Eppy should not let you do this and should raise an exception.
Step5: Excellent!! It raised the exception we were expecting.
Check range for fields
The fields of idf objects often have a range of legal values. The following functions will let you discover what that range is and test if your value lies within that range
demonstrate two new functions
Step6: Let us set these values outside the range and see what happens
Step7: So the Range Check works
Looping through all the fields in an idf object
We have seen how to check the range of field in the idf object. What if you want to do a range check on all the fields in an idf object ? To do this we will need a list of all the fields in the idf object. We can do this easily by the following line
Step8: So let us use this
Step9: Now let us test if the values are in the legal range. We know that "Loads_Convergence_Tolerance_Value" is out of range
Step10: You see, we caught the out of range value
Blank idf file
Until now in all our examples, we have been reading an idf file from disk
Step11: It did not print anything. Why should it. It was empty.
What if we give it a string that was not blank
Step12: Aha !
Now let us give it a file name
Step13: Let us confirm that the file was saved to disk
Step14: Yup ! that file was saved. Let us delete it since we were just playing
Step15: Deleting, copying/adding and making new idfobjects
Making a new idf object
Let us start with a blank idf file and make some new "MATERIAL" objects in it
Step16: To make and add a new idfobject object, we use the function IDF.newidfobject(). We want to make an object of type "MATERIAL"
Step17: Let us give this a name, say "Shiny new material object"
Step18: Let us look at all the "MATERIAL" objects
Step19: As we can see there are three MATERIAL idfobjects. They are
Step20: You can see that the second material is gone ! Now let us remove the first material, but do it using a different function
Step21: So we have two ways of deleting an idf object
Step22: So now we have a copy of the material. You can use this method to copy idf objects from other idf files too.
Making an idf object with named arguments
What if we wanted to make an idf object with values for it's fields? We can do that too.
Step23: newidfobject() also fills in the default values like "Thermal Absorptance", "Solar Absorptance", etc.
Step24: Renaming an idf object
It is easy to rename an idf object. If we want to rename the gypboard object that we created above, we simply say
Step25: to rename gypboard and have that name change in all the places we call modeleditor.rename(idf, key, oldname, newname)
Step26: Now we have "peanut butter" everywhere. At least where we need it. Let us look at the entir idf file, just to be sure
Step27: Turn off default values
Can I turn off the defautl values. Yes you can
Step28: But why would you want to turn it off.
Well .... sometimes you have to
Try it with the object DAYLIGHTING
Step29: Can we do the same for zones ?
Not yet .. not yet. Not in this version on eppy
But we can still get the area and volume of the zone
Step30: Not as slick, but still pretty easy
Some notes on the zone area calculation
Step31: Compare the first printidf() and the second printidf().
The syntax of the json string is described below
Step32: What if you object name had a dot . in it? Will the json_function get confused?
If the name has a dot in it, there are two ways of doing this.
Step33: Note When you us the json update function
Step34: You have to find the IDD file on your hard disk.
Then set the IDD using setiddname(iddfile).
Now you can open the IDF file
Why can’t you just open the IDF file without jumping thru all those hoops. Why do you have to find the IDD file. What is the point of having a computer, if it does not do the grunt work for you.
The function easyopen will do the grunt work for you. It will automatically read the version number from the IDF file, locate the correct IDD file and set it in eppy and then open your file. It works like this
Step35: For this to work,
the IDF file should have the VERSION object. You may not have this if you are just working on a file snippet.
you need to have the version of EnergyPlus installed that matches your IDF version.
Energyplus should be installed in the default location.
If easyopen does not work, use the long winded steps shown in the tutorial. That is guaranteed to work
Fast HTML table file read
To read the html table files you would usually use the functions described in Reading outputs from E+. For instance you would use the functions as shown below.
Step36: titletable reads all the tables in the HTML file. With large E+ models, this file can be extremely large and titletable will load all the tables into memory. This can take several minutes. If you are trying to get one table or one value from a table, waiting several minutes for your result can be excessive.
If you know which table you are looking for, there is a faster way of doing this. We used index=0 in the above example to get the first table. If you know the index of the file you are looking for, you can use a faster function to get the table as shown below
Step37: You can also get the table if you know the title of the table. This is the bold text just before the table in the HTML file. The title of our table is Site and Source Energy. The function tablebyname will get us the table. | Python Code:
# you would normally install eppy by doing #
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy
# if you have not done so, uncomment the following three lines
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../'
sys.path.append(pathnameto_eppy)
Explanation: New functions
These are recently written functions that have not made it into the main documentation
Python Lesson: Errors and Exceptions
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
Explanation: When things go wrong in your eppy script, you get "Errors and Exceptions".
To know more about how this works in python and eppy, take a look at Python: Errors and Exceptions
Setting IDD name
When you work with Energyplus you are working with idf files (files that have the extension *.idf). There is another file that is very important, called the idd file. This is the file that defines all the objects in Energyplus. Each version of Energyplus has a different idd file.
So eppy needs to know which idd file to use. Only one idd file can be used in a script or program. This means that you cannot change the idd file once you have selected it. Of course you have to first select an idd file before eppy can work.
If you use eppy and break the above rules, eppy will raise an exception. So let us use eppy incorrectly and make eppy raise the exception, just see how that happens.
First let us try to open an idf file without setting an idd file.
End of explanation
try:
idf1 = IDF(fname1)
except Exception as e:
raise e
Explanation: Now let us open file fname1 without setting the idd file
End of explanation
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
IDF.setiddname(iddfile)
idf1 = IDF(fname1)
Explanation: OK. It does not let you do that and it raises an exception
So let us set the idd file and then open the idf file
End of explanation
try:
IDF.setiddname("anotheridd.idd")
except Exception as e:
raise e
Explanation: That worked without raising an exception
Now let us try to change the idd file. Eppy should not let you do this and should raise an exception.
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
# IDF.setiddname(iddfile) # idd was set further up in this page
idf1 = IDF(fname1)
building = idf1.idfobjects['building'][0]
print(building)
print(building.getrange("Loads_Convergence_Tolerance_Value"))
print(building.checkrange("Loads_Convergence_Tolerance_Value"))
Explanation: Excellent!! It raised the exception we were expecting.
Check range for fields
The fields of idf objects often have a range of legal values. The following functions will let you discover what that range is and test if your value lies within that range
demonstrate two new functions:
EpBunch.getrange(fieldname) # will return the ranges for that field
EpBunch.checkrange(fieldname) # will throw an exception if the value is outside the range
End of explanation
building.Loads_Convergence_Tolerance_Value = 0.6
from eppy.bunch_subclass import RangeError
try:
print(building.checkrange("Loads_Convergence_Tolerance_Value"))
except RangeError as e:
raise e
Explanation: Let us set these values outside the range and see what happens
End of explanation
print(building.fieldnames)
Explanation: So the Range Check works
Looping through all the fields in an idf object
We have seen how to check the range of a field in the idf object. What if you want to do a range check on all the fields in an idf object ? To do this we will need a list of all the fields in the idf object. We can do this easily by the following line
End of explanation
for fieldname in building.fieldnames:
print("%s = %s" % (fieldname, building[fieldname]))
Explanation: So let us use this
End of explanation
from eppy.bunch_subclass import RangeError
for fieldname in building.fieldnames:
try:
building.checkrange(fieldname)
print("%s = %s #-in range" % (fieldname, building[fieldname],))
except RangeError as e:
print("%s = %s #-****OUT OF RANGE****" % (fieldname, building[fieldname],))
Explanation: Now let us test if the values are in the legal range. We know that "Loads_Convergence_Tolerance_Value" is out of range
End of explanation
# some initial steps
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
# IDF.setiddname(iddfile) # Has already been set
# - Let us first open a file from the disk
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
idf_fromfilename = IDF(fname1) # initialize the IDF object with the file name
idf_fromfilename.printidf()
# - now let us open a file from the disk differently
fname1 = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
fhandle = open(fname1, 'r') # open the file for reading and assign it a file handle
idf_fromfilehandle = IDF(fhandle) # initialize the IDF object with the file handle
idf_fromfilehandle.printidf()
# So IDF object can be initialized with either a file name or a file handle
# - How do I create a blank new idf file
idftxt = "" # empty string
from io import StringIO
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_emptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_emptyfile.printidf()
Explanation: You see, we caught the out of range value
Blank idf file
Until now in all our examples, we have been reading an idf file from disk:
How do I create a blank new idf file
give it a file name
Save it to the disk
Here are the steps to do that
End of explanation
# - The string does not have to be blank
idftxt = "VERSION, 7.3;" # Not an emplty string. has just the version number
fhandle = StringIO(idftxt) # we can make a file handle of a string
idf_notemptyfile = IDF(fhandle) # initialize the IDF object with the file handle
idf_notemptyfile.printidf()
Explanation: It did not print anything. Why should it. It was empty.
What if we give it a string that was not blank
End of explanation
# - give it a file name
idf_notemptyfile.idfname = "notemptyfile.idf"
# - Save it to the disk
idf_notemptyfile.save()
Explanation: Aha !
Now let us give it a file name
End of explanation
txt = open("notemptyfile.idf", 'r').read()# read the file from the disk
print(txt)
Explanation: Let us confirm that the file was saved to disk
End of explanation
import os
os.remove("notemptyfile.idf")
Explanation: Yup ! that file was saved. Let us delete it since we were just playing
End of explanation
# making a blank idf object
blankstr = ""
from io import StringIO
idf = IDF(StringIO(blankstr))
Explanation: Deleting, copying/adding and making new idfobjects
Making a new idf object
Let us start with a blank idf file and make some new "MATERIAL" objects in it
End of explanation
newobject = idf.newidfobject("material")
print(newobject)
Explanation: To make and add a new idfobject object, we use the function IDF.newidfobject(). We want to make an object of type "MATERIAL"
End of explanation
newobject.Name = "Shiny new material object"
print(newobject)
anothermaterial = idf.newidfobject("material")
anothermaterial.Name = "Lousy material"
thirdmaterial = idf.newidfobject("material")
thirdmaterial.Name = "third material"
print(thirdmaterial)
Explanation: Let us give this a name, say "Shiny new material object"
End of explanation
print(idf.idfobjects["MATERIAL"])
Explanation: Let us look at all the "MATERIAL" objects
End of explanation
idf.popidfobject('MATERIAL', 1) # first material is '0', second is '1'
print(idf.idfobjects['MATERIAL'])
Explanation: As we can see there are three MATERIAL idfobjects. They are:
Shiny new material object
Lousy material
third material
Deleting an idf object
Let us remove 2. Lousy material. It is the second material in the list. So let us remove the second material
End of explanation
firstmaterial = idf.idfobjects['MATERIAL'][-1]
idf.removeidfobject(firstmaterial)
print(idf.idfobjects['MATERIAL'])
Explanation: You can see that the second material is gone ! Now let us remove the first material, but do it using a different function
End of explanation
onlymaterial = idf.idfobjects["MATERIAL"][0]
idf.copyidfobject(onlymaterial)
print(idf.idfobjects["MATERIAL"])
Explanation: So we have two ways of deleting an idf object:
popidfobject -> give it the idf key: "MATERIAL", and the index number
removeidfobject -> give it the idf object to be deleted
Copying/Adding an idf object
Having deleted two "MATERIAL" objects, we have only one left. Let us make a copy of this object and add it to our idf file
End of explanation
gypboard = idf.newidfobject('MATERIAL', Name="G01a 19mm gypsum board",
Roughness="MediumSmooth",
Thickness=0.019,
Conductivity=0.16,
Density=800,
Specific_Heat=1090)
print(gypboard)
Explanation: So now we have a copy of the material. You can use this method to copy idf objects from other idf files too.
Making an idf object with named arguments
What if we wanted to make an idf object with values for it's fields? We can do that too.
End of explanation
print(idf.idfobjects["MATERIAL"])
Explanation: newidfobject() also fills in the default values like "Thermal Absorptance", "Solar Absorptance", etc.
End of explanation
interiorwall = idf.newidfobject("CONSTRUCTION", Name="Interior Wall",
Outside_Layer="G01a 19mm gypsum board",
Layer_2="Shiny new material object",
Layer_3="G01a 19mm gypsum board")
print(interiorwall)
Explanation: Renaming an idf object
It is easy to rename an idf object. If we want to rename the gypboard object that we created above, we could simply assign a new value to its Name field (gypboard.Name = "a new name").
But this could create a problem. What if this gypboard is part of a "CONSTRUCTION" object. The construction object will refer to the gypboard by name. If we change the name of the gypboard, we should change it in the construction object.
But there may be many construction objects using the gypboard. Now we will have to change it in all those construction objects. Sounds painful.
Let us try this with an example:
End of explanation
modeleditor.rename(idf, "MATERIAL", "G01a 19mm gypsum board", "peanut butter")
print(interiorwall)
Explanation: to rename gypboard and have that name change in all the places we call modeleditor.rename(idf, key, oldname, newname)
End of explanation
idf.printidf()
Explanation: Now we have "peanut butter" everywhere. At least where we need it. Let us look at the entire idf file, just to be sure
End of explanation
defaultmaterial = idf.newidfobject("MATERIAL",
Name='with default')
print(defaultmaterial)
nodefaultmaterial = idf.newidfobject("MATERIAL",
Name='Without default',
defaultvalues=False)
print(nodefaultmaterial)
Explanation: Turn off default values
Can I turn off the default values. Yes you can:
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname1 = "../eppy/resources/idffiles/V_7_2/box.idf"
# IDF.setiddname(iddfile)
idf = IDF(fname1)
surfaces = idf.idfobjects["BuildingSurface:Detailed"]
surface = surfaces[0]
print("area = %s" % (surface.area, ))
print("tilt = %s" % (surface.tilt, ))
print( "azimuth = %s" % (surface.azimuth, ))
Explanation: But why would you want to turn it off.
Well .... sometimes you have to
Try it with the object DAYLIGHTING:CONTROLS, and you will see the need for defaultvalues=False
Of course, internally EnergyPlus will still use the default values if it is left blank. It just won't turn up in the IDF file.
Zone area and volume
The idf file has zones with surfaces and windows. It is easy to get the attributes of the surfaces and windows as we have seen in the tutorial. Let us review this once more:
End of explanation
zones = idf.idfobjects["ZONE"]
zone = zones[0]
area = modeleditor.zonearea(idf, zone.Name)
volume = modeleditor.zonevolume(idf, zone.Name)
print("zone area = %s" % (area, ))
print("zone volume = %s" % (volume, ))
Explanation: Can we do the same for zones ?
Not yet .. not yet. Not in this version of eppy
But we can still get the area and volume of the zone
End of explanation
idf1.printidf()
import eppy.json_functions as json_functions
json_str = {"idf.VERSION..Version_Identifier":8.5,
"idf.SIMULATIONCONTROL..Do_Zone_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_System_Sizing_Calculation": "No",
"idf.SIMULATIONCONTROL..Do_Plant_Sizing_Calculation": "No",
"idf.BUILDING.Empire State Building.North_Axis": 52,
"idf.BUILDING.Empire State Building.Terrain": "Rural",
}
json_functions.updateidf(idf1, json_str)
idf1.printidf()
Explanation: Not as slick, but still pretty easy
Some notes on the zone area calculation:
area is calculated by summing up all the areas of the floor surfaces
if there are no floors, then the sum of ceilings and roof is taken as zone area
if there are no floors, ceilings or roof, we are out of luck. The function returns 0
Using JSON to update idf
We are going to update idf1 using JSON. First let us print idf1 before changing it, so we can see what has changed once we make an update
End of explanation
json_str = {"idf.BUILDING.Taj.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building']
# of course, you are creating an invalid E+ file. But we are just playing here.
Explanation: Compare the first printidf() and the second printidf().
The syntax of the json key is idf.&lt;IDF key&gt;.&lt;object name&gt;.&lt;field name&gt;, as in the examples above; for objects that have no name (like VERSION) the name part is simply left empty, which gives the double dot.
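A minimal sketch, reusing the keys already shown above, of how the parts of such a key map onto the idf:
```
# "idf.<IDF key>.<object name>.<field name>": value
json_str = {
    "idf.BUILDING.Empire State Building.North_Axis": 52,  # a named object
    "idf.VERSION..Version_Identifier": 8.5,               # VERSION has no name -> empty name, hence ".."
}
json_functions.updateidf(idf1, json_str)
```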
You can also create a new object using JSON, using the same syntax. Take a look at this:
End of explanation
# first way
json_str = {"idf.BUILDING.Taj.with.dot.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
# second way (put the name in single quotes)
json_str = {"idf.BUILDING.'Another.Taj.with.dot'.Terrain": "Rural",}
json_functions.updateidf(idf1, json_str)
idf1.idfobjects['building']
Explanation: What if your object name had a dot (.) in it? Will the json_function get confused?
If the name has a dot in it, there are two ways of doing this.
End of explanation
from eppy import modeleditor
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"
fname = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
IDF.setiddname(iddfile)
idf = IDF(fname)
Explanation: Note: When you use the json update function:
The json function expects the Name field to have a value.
If you try to update an object with a blank Name field, the results may be unexpected (undefined ? :-). So don't do this.
If the object has no Name field (some don't), changes are made to the first object in the list. Which should be fine, since usually there is only one item in the list
In any case, if the object does not exist, it is created with the default values
Use Case for JSON update
If you have an eppy running on a remote server somewhere on the internet, you can change an idf file by sending it a JSON over the internet. This is very useful if you ever need it. If you don't need it, you shouldn't care :-)
Open a file quickly
It is rather cumbersome to open an IDF file in eppy. From the tutorial, the steps look like this:
End of explanation
from importlib import reload
import eppy
reload(eppy.modeleditor)
from eppy.easyopen import easyopen
fname = '../eppy/resources/idffiles/V8_8/smallfile.idf'
idf = easyopen(fname)
Explanation: You have to find the IDD file on your hard disk.
Then set the IDD using setiddname(iddfile).
Now you can open the IDF file
Why can’t you just open the IDF file without jumping thru all those hoops. Why do you have to find the IDD file. What is the point of having a computer, if it does not do the grunt work for you.
The function easyopen will do the grunt work for you. It will automatically read the version number from the IDF file, locate the correct IDD file and set it in eppy and then open your file. It works like this:
End of explanation
from eppy.results import readhtml # the eppy module with functions to read the html
import pprint
pp = pprint.PrettyPrinter()
fname = "../eppy/resources/outputfiles/V_7_2/5ZoneCAVtoVAVWarmestTempFlowTable_ABUPS.html" # the html file you want to read
html_doc = open(fname, 'r').read()
htables = readhtml.titletable(html_doc) # reads the tables with their titles
firstitem = htables[0]
pp.pprint(firstitem)
Explanation: For this to work,
the IDF file should have the VERSION object. You may not have this if you are just working on a file snippet.
you need to have the version of EnergyPlus installed that matches your IDF version.
Energyplus should be installed in the default location.
If easyopen does not work, use the long-winded steps shown in the tutorial. That is guaranteed to work.
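Those long-winded steps are the ones already shown earlier in this notebook — roughly:
```
# the explicit fallback: point eppy at the matching IDD yourself, then open the IDF
from eppy.modeleditor import IDF
iddfile = "../eppy/resources/iddfiles/Energy+V7_2_0.idd"   # must match your IDF's version
fname = "../eppy/resources/idffiles/V_7_2/smallfile.idf"
IDF.setiddname(iddfile)
idf = IDF(fname)
```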
Fast HTML table file read
To read the html table files you would usually use the functions described in Reading outputs from E+. For instance you would use the functions as shown below.
End of explanation
from eppy.results import fasthtml
fname = "../eppy/resources/outputfiles/V_7_2/5ZoneCAVtoVAVWarmestTempFlowTable_ABUPS.html" # the html file you want to read
filehandle = open(fname, 'r') # get a file handle to the html file
firsttable = fasthtml.tablebyindex(filehandle, 0)
pp.pprint(firsttable)
Explanation: titletable reads all the tables in the HTML file. With large E+ models, this file can be extremely large and titletable will load all the tables into memory. This can take several minutes. If you are trying to get one table or one value from a table, waiting several minutes for your result can be excessive.
If you know which table you are looking for, there is a faster way of doing this. We used index=0 in the above example to get the first table. If you know the index of the file you are looking for, you can use a faster function to get the table as shown below
End of explanation
filehandle = open(fname, 'r') # get a file handle to the html file
namedtable = fasthtml.tablebyname(filehandle, "Site and Source Energy")
pp.pprint(namedtable)
Explanation: You can also get the table if you know the title of the table. This is the bold text just before the table in the HTML file. The title of our table is Site and Source Energy. The function tablebyname will get us the table.
End of explanation |
15,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bastion hosts
There are many reasons for using bastion hosts
Step1: ssh_config
Instead of continuously passing options to ssh, we can use -F ssh_config and put configurations there.
Step2: If we don't use it, we can turn off GSSApiAuthentication which attempts may slow down the connection.
Unsecure by design
Inhibit PKI authentication is insecure by design
Step3: Exercise
Write the ssh-copy-id.yml playbook to install an ssh key to the bastion.
Bastion credentials are
Step4: ansible.cfg and ssh_config
In the previous exercise, we used the [ssh_connection] stanza to configure ssh connections.
We can instead just set
[ssh_connection]
ssh_args = -F ssh_config
Write everything in ssh_config.
Connecting via bastion in ansible enforcing multiple references to ssh_config
Exercise
Uncomment the last lines of ssh_config and try to use bastion for connecting to the other hosts | Python Code:
cd /notebooks/exercise-06/
Explanation: Bastion hosts
There are many reasons for using bastion hosts:
security access eg in cloud environment
vpn eg via windows hosts
The latter case is quite boring as ansible doesn't support windows as a client platform.
A standard approach is:
have a ssh server or a proxy installed on the bastion
connecto the bastion to the remote network (eg. via vpn)
configure ssh options in ansible to connect thru the bastion
We'll do this via two configuration files:
a standard ssh_config where we put the passthru configuration
a simple ansible.cfg referencing ssh_config
This approach allows us:
to test the standard ssh connection thru the bastion without messing with ansible
keep ansible.cfg simple in case we want to reuse them from the intranet (Eg. without traversing the bastion)
End of explanation
!cat ssh_config
Explanation: ssh_config
Instead of continuously passing options to ssh, we can use -F ssh_config and put configurations there.
End of explanation
fmt=r'{{.NetworkSettings.IPAddress}}'
!docker -H tcp://172.17.0.1:2375 inspect ansible101_bastion_1 --format {fmt} # pass variables *before* commands ;)
Explanation: If we don't use it, we can turn off GSSApiAuthentication which attempts may slow down the connection.
Unsecure by design
Inhibit PKI authentication is insecure by design:
passwords will surely ends in cleartext files
people ends doing things like the following
```
the password is sent to the bastion via a
cleartext file.
Match Host 172.25.0.*
ProxyCommand sshpass -f cleartext-bastion-password ssh -F config jump@bastion -W %h:%p
```
Connect to the bastion
Test connectivity to the bastion. Check your host ips and modify ssh_config accordingly.
Replace ALL bastion occurrencies, including the one below the BEWARE note
End of explanation
# Use this cell to create the pin file and then encrypt the vault
# Use this cell to test/run the playbook. You can --limit the execution to the bastion host only.
!ssh -Fssh_config bastion hostname
Explanation: Exercise
Write the ssh-copy-id.yml playbook to install an ssh key to the bastion.
Bastion credentials are:
user: root
password root
Try to do it without watching the previous exercises:
modify the empty ansible.cfg
referencing a pin file
passing [ssh_connection] arguments to avoid ssh key mismatches
pointing to the local inventory
store credentials in the encrypted vault.yml.
provide an inventory file
You can reuse the old id_ansible key or:
create a new one and adjust the reference in ssh_config
Hint:
if you provide an IdentityFile, password authentication won't work on the bastion node;
you must copy ssh id file using password authentication and eventually clean up your known_host file
End of explanation
fmt=r'{{.NetworkSettings.IPAddress}}'
!docker -H tcp://172.17.0.1:2375 inspect ansible101_web_1 --format {fmt} # pass variables *before* commands ;)
!ssh -F ssh_config [email protected] ip -4 -o a # get host ip
Explanation: ansible.cfg and ssh_config
In the previous exercise, we used the [ssh_connection] stanza to configure ssh connections.
We can instead just set
[ssh_connection]
ssh_args = -F ssh_config
Write everything in ssh_config.
Connecting via bastion in ansible enforcing multiple references to ssh_config
Exercise
Uncomment the last lines of ssh_config and try to use bastion for connecting to the other hosts
End of explanation |
15,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alternate PowerShell Hosts
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. Excluding PowerShell.exe is a good way to find alternate PowerShell hosts
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Looking for processes loading a specific PowerShell DLL is a very effective way to document the use of PowerShell in your environment
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Monitoring for PSHost* pipes is another interesting way to find other alternate PowerShell hosts in your environment.
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Alternate PowerShell Hosts
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/15 |
| modification date | 2020/09/20 |
| playbook related | ['WIN-190410151110'] |
Hypothesis
Adversaries might be leveraging alternate PowerShell Hosts to execute PowerShell evading traditional PowerShell detections that look for powershell.exe in my environment.
Technical Context
None
Offensive Tradecraft
Adversaries can abuse alternate signed PowerShell Hosts to evade application whitelisting solutions that block powershell.exe and naive logging based upon traditional PowerShell hosts.
Characteristics of a PowerShell host (Matt Graeber @mattifestation) >
* These binaries are almost always C#/.NET .exes/.dlls
* These binaries have System.Management.Automation.dll as a referenced assembly
* These may not always be “built in” binaries
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/02_execution/SDWIN-190518211456.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_psremoting_stager.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_psremoting_stager.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Channel
FROM mordorTable
WHERE (Channel = "Microsoft-Windows-PowerShell/Operational" OR Channel = "Windows PowerShell")
AND (EventID = 400 OR EventID = 4103)
AND NOT Message LIKE "%Host Application%powershell%"
'''
)
df.show(10,False)
Explanation: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. Excluding PowerShell.exe is a good way to find alternate PowerShell hosts
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Windows PowerShell | Application host started | 400 |
| Powershell | Microsoft-Windows-PowerShell/Operational | User started Application host | 4103 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, Description
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND (lower(Description) = "system.management.automation" OR lower(ImageLoaded) LIKE "%system.management.automation%")
AND NOT Image LIKE "%powershell.exe"
'''
)
df.show(10,False)
Explanation: Analytic II
Looking for processes loading a specific PowerShell DLL is a very effective way to document the use of PowerShell in your environment
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, PipeName
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 17
AND lower(PipeName) LIKE "\\\pshost%"
AND NOT Image LIKE "%powershell.exe"
'''
)
df.show(10,False)
Explanation: Analytic III
Monitoring for PSHost* pipes is another interesting way to find other alternate PowerShell hosts in your environment.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Named pipe | Microsoft-Windows-Sysmon/Operational | Process created Pipe | 17 |
End of explanation |
15,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution of Lahti et al. 2014
Write a function that takes as input a dictionary of constraints (i.e., selecting a specific group of records) and returns a dictionary tabulating the BMI group for all the records matching the constraints. For example, calling
Step1: We start my reading the Metadata file, set up a csv reader, and print the header and first few lines to get a feel for data.
Step2: It's time to decide on a data structure to record our result
Step3: OK, now the tricky part
Step4: In some rows, all constraints will be fulfillled (i.e., our matching variable will still be TRUE after checking all elements). In this case, we want to increase the count of that particular BMI_group in our result dictionary BMI_count. We can directly add one to the appropriate BMI_group if we have seen it before, else we initiate that key with a value of one
Step6: Excellent! Now, we can put everything together and create a function that accepts our constraint dictionary. Remember to document everything nicely
Step7: Write a function that takes as input the constraints (as above), and a bacterial "genus". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. For example, calling
Step8: Before moving on, let's have a look at the HITChip file
Step9: We see that that each row contains the SampleID and abundance data for various phylogenetically clustered bacteria. For each row in the file, we can now check if we are interested in that particular SampleID (i.e., if it matched our constraint and is in our BMI_IDs dictionary). If so, we retrieve the abundance of the bacteria of interest and add it to the the previously identified abundances within a particular BMI_group. If we had not encounter this BMI_group before, we initiate the key with the abundance as value. As we want to calculate the mean of these abuncandes later, we also keep track of the number of occurances
Step10: Now we take care of calculating the mean and printing the results. We need to load the scipy (or numby) module in order to calculate log10
Step11: Last but not least, we put it all together in a function
Step12: Repeat this analysis for all genera, and for the records having Time = 0.
A function to extract all the genera in the database
Step13: Testing
Step14: Now use this function to print the results for all genera at Time = 0 | Python Code:
import csv  # import the csv module for reading the file
Explanation: Solution of Lahti et al. 2014
Write a function that takes as input a dictionary of constraints (i.e., selecting a specific group of records) and returns a dictionary tabulating the BMI group for all the records matching the constraints. For example, calling:
get_BMI_count({'Age': '28', 'Sex': 'female'})
should return:
{'NA': 3, 'lean': 8, 'overweight': 2, 'underweight': 1}
End of explanation
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
header = csvr.fieldnames
print(header)
# For each row
for i, row in enumerate(csvr):
print(row)
if i > 2:
break
Explanation: We start by reading the Metadata file, set up a csv reader, and print the header and first few lines to get a feel for the data.
End of explanation
# Initiate an empty dictionary to keep track of counts per BMI_group
BMI_count = {}
# set up our dictionary of constraints for testing purposes
dict_constraints = {'Age': '28', 'Sex': 'female'}
Explanation: It's time to decide on a data structure to record our result: For each row in the file, we want to make sure all the constraints are matching the desired ones. If so, we keep count of the BMI group. A dictionary with the BMI_groups as keys and counts as values will work well:
End of explanation
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
for i, row in enumerate(csvr):
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
print("in row", i, "the key", e,"in data does not match", e, "in constraints")
if i > 5:
break
Explanation: OK, now the tricky part: for each row, we want to test whether the constraints (a dictionary) match the data (which itself is a dictionary). We can do it element-wise, that is, we take each key from the constraint dictionary and test whether the corresponding value in the data dictionary (row) is NOT identical to the constraint value. We start out by setting the variable matching to True and set it to False if we encounter a discrepancy. This way, we stop immediately if one of the elements does not match and move on to the next row of data.
End of explanation
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_count.keys():
# If we've seen it before, add one record to the count
BMI_count[my_BMI] = BMI_count[my_BMI] + 1
else:
# If not, initialize at 1
BMI_count[my_BMI] = 1
BMI_count
Explanation: In some rows, all constraints will be fulfilled (i.e., our matching variable will still be True after checking all elements). In this case, we want to increase the count of that particular BMI_group in our result dictionary BMI_count. We can directly add one to the appropriate BMI_group if we have seen it before; otherwise we initialize that key with a value of one.
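As an aside, the same bookkeeping can be written more compactly with dict.get — a sketch equivalent to the if/else above:
```
# start missing keys at 0, then add one
BMI_count[my_BMI] = BMI_count.get(my_BMI, 0) + 1
```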
End of explanation
def get_BMI_count(dict_constraints):
Take as input a dictionary of constraints
for example, {'Age': '28', 'Sex': 'female'}
And return the count of the various groups of BMI
# We use a dictionary to store the results
BMI_count = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_count.keys():
# If we've seen it before, add one record to the count
BMI_count[my_BMI] = BMI_count[my_BMI] + 1
else:
# If not, initialize at 1
BMI_count[my_BMI] = 1
return BMI_count
get_BMI_count({'Nationality': 'US', 'Sex': 'female'})
Explanation: Excellent! Now, we can put everything together and create a function that accepts our constraint dictionary. Remember to document everything nicely:
End of explanation
# We use a dictionary to store the results
BMI_IDs = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_IDs.keys():
# If we've seen it before, add the SampleID
BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]
else:
# If not, initialize
BMI_IDs[my_BMI] = [row['SampleID']]
BMI_IDs
Explanation: Write a function that takes as input the constraints (as above), and a bacterial "genus". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. For example, calling:
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'}, 'Clostridium difficile et rel.')
should return:
```
Abundance of Clostridium difficile et rel. In sub-population:
Nationality -> US
Time -> 0
3.08 NA
3.31 underweight
3.84 lean
2.89 overweight
3.31 obese
3.45 severeobese
```
To solve this task, we can recycle quite a bit of code that we just developed. However, instead of just counting occurances of BMI_groups, we want to keep track of the records (i.e., SampleIDs) that match our constraints and look up a specific bacteria abundance in the file HITChip.tab.
First, we create a dictionary with all records that match our constraints:
End of explanation
with open("../data/Lahti2014/HITChip.tab", "r") as HIT:
csvr = csv.DictReader(HIT, delimiter = "\t")
header = csvr.fieldnames
print(header)
for i, row in enumerate(csvr):
print(row)
if i > 2:
break
Explanation: Before moving on, let's have a look at the HITChip file:
End of explanation
# set up dictionary to track abundances by BMI_group and number of identified records
abundance = {}
# choose a bacteria genus for testing
genus = "Clostridium difficile et rel."
with open('../data/Lahti2014/HITChip.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check whether we need this SampleID
matching = False
for g in BMI_IDs:
if row['SampleID'] in BMI_IDs[g]:
if g in abundance.keys():
abundance[g][0] = abundance[g][0] + float(row[genus])
abundance[g][1] = abundance[g][1] + 1
else:
abundance[g] = [float(row[genus]), 1]
# we have found it, so move on
break
abundance
Explanation: We see that each row contains the SampleID and abundance data for various phylogenetically clustered bacteria. For each row in the file, we can now check if we are interested in that particular SampleID (i.e., if it matched our constraints and is in our BMI_IDs dictionary). If so, we retrieve the abundance of the bacteria of interest and add it to the previously identified abundances within that BMI_group. If we have not encountered this BMI_group before, we initialize the key with the abundance as its value. As we want to calculate the mean of these abundances later, we also keep track of the number of occurrences:
End of explanation
import scipy
print("____________________________________________________________________")
print("Abundance of " + genus + " In sub-population:")
print("____________________________________________________________________")
for key, value in dict_constraints.items():
print(key, "->", value)
print("____________________________________________________________________")
for ab in ['NA', 'underweight', 'lean', 'overweight',
'obese', 'severeobese', 'morbidobese']:
if ab in abundance.keys():
abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])
print(round(abundance[ab][0], 2), '\t', ab)
print("____________________________________________________________________")
print("")
Explanation: Now we take care of calculating the mean and printing the results. We need to load the scipy (or numpy) module in order to calculate log10:
End of explanation
import scipy # For log10
def get_abundance_by_BMI(dict_constraints, genus = 'Aerococcus'):
# We use a dictionary to store the results
BMI_IDs = {}
# Open the file, build a csv DictReader
with open('../data/Lahti2014/Metadata.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check that all conditions are met
matching = True
for e in dict_constraints:
if row[e] != dict_constraints[e]:
# The constraint is not met. Move to the next record
matching = False
break
# matching is True only if all the constraints have been met
if matching == True:
# extract the BMI_group
my_BMI = row['BMI_group']
if my_BMI in BMI_IDs.keys():
# If we've seen it before, add the SampleID
BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]
else:
# If not, initialize
BMI_IDs[my_BMI] = [row['SampleID']]
# Now let's open the other file, and keep track of the abundance of the genus for each
# BMI group
abundance = {}
with open('../data/Lahti2014/HITChip.tab') as f:
csvr = csv.DictReader(f, delimiter = '\t')
# For each row
for row in csvr:
# check whether we need this SampleID
matching = False
for g in BMI_IDs:
if row['SampleID'] in BMI_IDs[g]:
if g in abundance.keys():
abundance[g][0] = abundance[g][0] + float(row[genus])
abundance[g][1] = abundance[g][1] + 1
else:
abundance[g] = [float(row[genus]), 1]
# we have found it, so move on
break
# Finally, calculate means, and print results
print("____________________________________________________________________")
print("Abundance of " + genus + " In sub-population:")
print("____________________________________________________________________")
for key, value in dict_constraints.items():
print(key, "->", value)
print("____________________________________________________________________")
for ab in ['NA', 'underweight', 'lean', 'overweight',
'obese', 'severeobese', 'morbidobese']:
if ab in abundance.keys():
abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])
print(round(abundance[ab][0], 2), '\t', ab)
print("____________________________________________________________________")
print("")
get_abundance_by_BMI({'Time': '0', 'Nationality': 'US'},
'Clostridium difficile et rel.')
Explanation: Last but not least, we put it all together in a function:
End of explanation
def get_all_genera():
with open('../data/Lahti2014/HITChip.tab') as f:
header = f.readline().strip()
genera = header.split('\t')[1:]
return genera
Explanation: Repeat this analysis for all genera, and for the records having Time = 0.
A function to extract all the genera in the database:
End of explanation
get_all_genera()[:6]
Explanation: Testing:
End of explanation
for g in get_all_genera()[:5]:
get_abundance_by_BMI({'Time': '0'}, g)
Explanation: Now use this function to print the results for all genera at Time = 0:
End of explanation |
15,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This spark notebook connects to BigInsights on Cloud using BigSQL.
This notebook runs succesfully on stand alone spark-1.6.1-bin-hadoop2.6 and will output a dataframe like this
Step1: Code to connect to BigInsights on Cloud via Hive and BigSQL ...
Step2: Get the cluster certificate
Step3: Add the cluster certificate to a truststore
Step4: Now attempt to connect to BigInsights on Cloud | Python Code:
cluster = '10451' # E.g. 10000
username = 'biadmin' # E.g. biadmin
password = '' # Please request password from [email protected]
table = 'biadmin.rowapplyout' # BigSQL table to query
Explanation: This spark notebook connects to BigInsights on Cloud using BigSQL.
This notebook runs successfully on standalone spark-1.6.1-bin-hadoop2.6 and will output a dataframe like this:
[Row(F1=77.0, F2=-16.200000762939453, F3=7.81678581237793), Row(F1=77.0, F2=-16.200000762939453, F3=7.528648376464844), Row(F1=77.0, F2=-16.200000762939453, F3=7.240304946899414), Row(F1=77.0, F2=-16.200000762939453, F3=6.9515509605407715), Row(F1=77.0, F2=-16.200000762939453, F3=6.6621809005737305), Row(F1=77.0, F2=-16.200000762939453, F3=8.371989250183105), Row(F1=77.0, F2=-16.200000762939453, F3=10.080772399902344), Row(F1=77.0, F2=-16.200000762939453, F3=11.788325309753418), Row(F1=77.0, F2=-16.200000762939453, F3=13.494444847106934), Row(F1=77.0, F2=-16.200000762939453, F3=15.198928833007812)]
The notebook environment is:
Notebook server: 3.2.0-8b0eef4 | Python 2.7.11 |Anaconda 2.3.0 (x86_64)| (default, Dec 6 2015, 18:57:58)
[GCC 4.2.1 (Apple Inc. build 5577)]
Credentials - keep this secret!
End of explanation
import os
cwd = os.getcwd()
cls_host = 'ehaasp-{0}-mastermanager.bi.services.bluemix.net'.format(cluster)
sql_host = 'ehaasp-{0}-master-2.bi.services.bluemix.net'.format(cluster)
Explanation: Code to connect to BigInsights on Cloud via Hive and BigSQL ...
End of explanation
!openssl s_client -showcerts -connect {cls_host}:9443 < /dev/null | openssl x509 -outform PEM > certificate
# uncomment this for debugging
#!cat certificate
Explanation: Get the cluster certificate
End of explanation
!rm -f truststore.jks
!keytool -import -trustcacerts -alias biginsights -file certificate -keystore truststore.jks -storepass mypassword -noprompt
Explanation: Add the cluster certificate to a truststore
End of explanation
# test bigsql
url = 'jdbc:db2://{0}:51000/bigsql:user={1};password={2};sslConnection=true;sslTrustStoreLocation={3}/truststore.jks;Password=mypassword;'.format(sql_host, username, password, cwd)
df = sqlContext.read.format('jdbc').options(url=url, driver='com.ibm.db2.jcc.DB2Driver', dbtable=table).load()
print(df.take(10))
Explanation: Now attempt to connect to BigInsights on Cloud
End of explanation |
15,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train/Dev/Test split of AotM-2011/30Music songs/playlists in setting I & II
Step1: Load playlists
Load playlists.
Step2: check duplicated songs in the same playlist.
Step3: Load song features
Load song_id --> feature array mapping
Step4: Load genres
Song genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.
Step5: Song collection
Step6: Randomise the order of song with the same age.
Step7: Check if all songs have genre info.
Step8: Song popularity.
Step9: deal with one outlier.
Step10: Split songs for setting I
~~Split songs (60/20/20 train/dev/test split) the latest released (year) songs are in dev and test set.~~
Step13: Create song-playlist matrix
Songs as rows, playlists as columns.
Step14: Setting I
Step15: Feature normalisation.
Step16: Remove playlists that have 0 songs in training set.
Step17: Remove playlists that have 0 songs in training + dev set.
Step18: Save data.
Step19: Indices of playlists from the same user (training set), playlists of the same user form a clique in the graph where playlists are nodes.
Step20: Indices of playlists from the same user (training + dev set), playlists of the same user form a clique in the graph where playlists are nodes.
Step21: Split playlists for setting II
Split playlists (60/10/30 train/dev/test split) uniformly at random.
~~Split each user's playlists (60/20/20 train/dev/test split) uniformly at random if the user has $5$ or more playlists.~~
Step22: Hold a subset of songs in dev/test playlist for setting II
~~Hold the last half of songs for playlists in dev and test set.~~
Step23: Keep the first $K=1,2,3,4$ songs for playlist in dev and test set.
Step24: Setting II
Step25: Use dedicated sparse matrices to reprsent what entries are observed in dev and test set.
Step26: Feature normalisation.
Step27: Build the adjacent matrix of playlists (nodes) for setting II, playlists of the same user form a clique.
Cliques in train + dev set.
Step28: Cliques in train + dev + test set. | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys
import gzip
import pickle as pkl
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score, average_precision_score
from scipy.optimize import check_grad
from scipy.sparse import lil_matrix, issparse, hstack, vstack
from collections import Counter
import itertools as itt
import matplotlib.pyplot as plt
import seaborn as sns
np_settings0 = np.seterr(all='raise')
RAND_SEED = 1234567890
datasets = ['aotm2011', '30music']
ffeature = 'data/msd/song2feature.pkl.gz'
fgenre = 'data/msd/song2genre.pkl.gz'
N_SEEDS = [10000, 5000]
dix = 1
dataset_name = datasets[dix]
data_dir = 'data/%s' % dataset_name
fplaylist = os.path.join(data_dir, '%s-playlist.pkl.gz' % dataset_name)
print(dataset_name)
Explanation: Train/Dev/Test split of AotM-2011/30Music songs/playlists in setting I & II
End of explanation
_all_playlists = pkl.load(gzip.open(fplaylist, 'rb'))
# _all_playlists[0]
all_playlists = []
if type(_all_playlists[0][1]) == tuple:
for pl, u in _all_playlists:
user = '%s_%s' % (u[0], u[1]) # user string
all_playlists.append((pl, user))
else:
all_playlists = _all_playlists
# user_playlists = dict()
# for pl, u in all_playlists:
# try:
# user_playlists[u].append(pl)
# except KeyError:
# user_playlists[u] = [pl]
# all_playlists = []
# for u in user_playlists:
# if len(user_playlists[u]) > 4:
# all_playlists += [(pl, u) for pl in user_playlists[u]]
all_users = sorted(set({user for _, user in all_playlists}))
print('#user : {:,}'.format(len(all_users)))
print('#playlist: {:,}'.format(len(all_playlists)))
pl_lengths = [len(pl) for pl, _ in all_playlists]
plt.hist(pl_lengths, bins=100)
print('Average playlist length: %.1f' % np.mean(pl_lengths))
Explanation: Load playlists
Load playlists.
End of explanation
print('{:,} | {:,}'.format(np.sum(pl_lengths), np.sum([len(set(pl)) for pl, _ in all_playlists])))
Explanation: check duplicated songs in the same playlist.
End of explanation
song2feature = pkl.load(gzip.open(ffeature, 'rb'))
Explanation: Load song features
Load song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.
End of explanation
song2genre = pkl.load(gzip.open(fgenre, 'rb'))
Explanation: Load genres
Song genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.
End of explanation
_all_songs = sorted([(sid, int(song2feature[sid][-1])) for sid in {s for pl, _ in all_playlists for s in pl}],
key=lambda x: (x[1], x[0]))
print('{:,}'.format(len(_all_songs)))
print('%.1f\n%.1f' % (len(_all_songs) * len(all_playlists) * 8 / (1024**2),
len(_all_songs) * (218 + len(all_users) - 1) * 8 / (1024**2)))
Explanation: Song collection
End of explanation
song_age_dict = dict()
for sid, age in _all_songs:
age = int(age)
try:
song_age_dict[age].append(sid)
except KeyError:
song_age_dict[age] = [sid]
all_songs = []
np.random.seed(RAND_SEED)
for age in sorted(song_age_dict.keys()):
all_songs += [(sid, age) for sid in np.random.permutation(song_age_dict[age])]
pkl.dump(all_songs, gzip.open(os.path.join(data_dir, 'setting2/all_songs.pkl.gz'), 'wb'))
Explanation: Randomise the order of songs with the same age.
End of explanation
print('#songs missing genre: {:,}'.format(len(all_songs) - np.sum([sid in song2genre for (sid, _) in all_songs])))
Explanation: Check if all songs have genre info.
End of explanation
song2index = {sid: ix for ix, (sid, _) in enumerate(all_songs)}
song_pl_mat = lil_matrix((len(all_songs), len(all_playlists)), dtype=np.int8)
for j in range(len(all_playlists)):
pl = all_playlists[j][0]
ind = [song2index[sid] for sid in pl]
song_pl_mat[ind, j] = 1
song_pop = song_pl_mat.tocsc().sum(axis=1)
max_pop = np.max(song_pop)
max_pop
song2pop = {sid: song_pop[song2index[sid], 0] for (sid, _) in all_songs}
pkl.dump(song2pop, gzip.open(os.path.join(data_dir, 'setting2/song2pop.pkl.gz'), 'wb'))
Explanation: Song popularity.
End of explanation
# song_pop1 = song_pop.copy()
# maxix = np.argmax(song_pop)
# song_pop1[maxix] = 0
# clipped_max_pop = np.max(song_pop1) + 10 # second_max_pop + 10
# if max_pop - clipped_max_pop > 500:
# song_pop1[maxix] = clipped_max_pop
Explanation: deal with one outlier.
End of explanation
N_NEW_SONGS = N_SEEDS[dix]
print(N_NEW_SONGS)
# dev_ratio = 0.1
# test_ratio = 0.1
# nsong_dev_test = int(len(all_songs) * (dev_ratio + test_ratio))
# train_song_set = all_songs[nsong_dev_test:]
# # shuffle songs in dev and test set
# np.random.seed(60)
# dev_test_ix = np.random.permutation(np.arange(nsong_dev_test))
# nsong_dev = int(len(all_songs) * dev_ratio)
# dev_song_set = [all_songs[ix] for ix in dev_test_ix[:nsong_dev]]
# test_song_set = [all_songs[ix] for ix in dev_test_ix[nsong_dev:]]
nsong_test = N_NEW_SONGS
nsong_dev = N_NEW_SONGS
test_song_set = all_songs[:nsong_test]
dev_song_set = all_songs[nsong_test:nsong_test + nsong_dev]
train_song_set = all_songs[nsong_test + nsong_dev:]
test_songs = set([sid for sid, _ in test_song_set])
cnt = 0
for pl, _ in all_playlists:
plset = set(pl)
ninter = len(plset & test_songs)
if ninter > 0 and ninter < len(plset):
cnt += 1
print('%d playlists in test set, among %d' % (cnt, len(all_playlists)))
print('#songs in training set: {:,}, average song age: {:.2f} yrs'
.format(len(train_song_set), np.mean([t[1] for t in train_song_set])))
print('#songs in dev set : {:,}, average song age: {:.2f} yrs'
.format(len(dev_song_set), np.mean([t[1] for t in dev_song_set])))
print('#songs in test set : {:,}, average song age: {:.2f} yrs'
.format(len(test_song_set), np.mean([t[1] for t in test_song_set])))
print('#songs: {:,} | {:,}'.format(len(all_songs), len({s for s in train_song_set + dev_song_set+test_song_set})))
ax = plt.subplot(111)
ax.hist(song_pop, bins=100)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+10)
ax.set_title('Histogram of song popularity')
pass
train_song_pop = [song2pop[sid] for (sid, _) in train_song_set]
#if np.max(train_song_pop) > clipped_max_pop:
# train_song_pop[np.argmax(train_song_pop)] = clipped_max_pop
ax = plt.subplot(111)
ax.hist(train_song_pop, bins=100)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+10)
ax.set_title('Histogram of song popularity in TRAINING set')
pass
dev_song_pop = [song2pop[sid] for (sid, _) in dev_song_set]
#if np.max(dev_song_pop) > clipped_max_pop:
# dev_song_pop[np.argmax(dev_song_pop)] = clipped_max_pop
ax = plt.subplot(111)
ax.hist(dev_song_pop, bins=100)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+10)
ax.set_title('Histogram of song popularity in DEV set')
pass
test_song_pop = [song2pop[sid] for (sid, _) in test_song_set]
#if np.max(test_song_pop) > clipped_max_pop:
# test_song_pop[np.argmax(test_song_pop)] = clipped_max_pop
ax = plt.subplot(111)
ax.hist(test_song_pop, bins=100)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+10)
ax.set_title('Histogram of song popularity in TEST set')
pass
Explanation: Split songs for setting I
~~Split songs (60/20/20 train/dev/test split) the latest released (year) songs are in dev and test set.~~
End of explanation
def gen_dataset(playlists, song2feature, song2genre, train_song_set,
dev_song_set=[], test_song_set=[], song2pop_train=None):
Create labelled dataset: rows are songs, columns are users.
Input:
- playlists: a set of playlists
- train_song_set: a list of songIDs in training set
- dev_song_set: a list of songIDs in dev set
- test_song_set: a list of songIDs in test set
- song2feature: dictionary that maps songIDs to features from MSD
- song2genre: dictionary that maps songIDs to genre
- song2pop_train: a dictionary that maps songIDs to its popularity
Output:
- (Feature, Label) pair (X, Y)
X: #songs by #features
Y: #songs by #users
song_set = train_song_set + dev_song_set + test_song_set
N = len(song_set)
K = len(playlists)
genre_set = sorted({v for v in song2genre.values()})
genre2index = {genre: ix for ix, genre in enumerate(genre_set)}
def onehot_genre(songID):
One-hot encoding of genres.
Data imputation:
- one extra entry for songs without genre info
- mean imputation
- sampling from the distribution of genre popularity
num = len(genre_set) # + 1
vec = np.zeros(num, dtype=np.float)
if songID in song2genre:
genre_ix = genre2index[song2genre[songID]]
vec[genre_ix] = 1
else:
vec[:] = np.nan
#vec[-1] = 1
return vec
#X = np.array([features_MSD[sid] for sid in song_set]) # without using genre
#Y = np.zeros((N, K), dtype=np.bool)
X = np.array([np.concatenate([song2feature[sid], onehot_genre(sid)], axis=-1) for sid in song_set])
Y = lil_matrix((N, K), dtype=np.bool)
song2index = {sid: ix for ix, sid in enumerate(song_set)}
for k in range(K):
pl = playlists[k]
indices = [song2index[sid] for sid in pl if sid in song2index]
Y[indices, k] = True
# genre imputation
genre_ix_start = -len(genre_set)
genre_nan = np.isnan(X[:, genre_ix_start:])
genre_mean = np.nansum(X[:, genre_ix_start:], axis=0) / (X.shape[0] - np.sum(genre_nan, axis=0))
#print(np.nansum(X[:, genre_ix_start:], axis=0))
#print(genre_set)
#print(genre_mean)
for j in range(len(genre_set)):
X[genre_nan[:,j], j+genre_ix_start] = genre_mean[j]
# normalise the sum of all genres per song to 1
# X[:, -len(genre_set):] /= X[:, -len(genre_set):].sum(axis=1).reshape(-1, 1)
# NOTE: this is not necessary, as the imputed values are guaranteed to be normalised (sum to 1)
# due to the above method to compute mean genres.
# the log of song popularity
if song2pop_train is not None:
# for sid in song_set:
# assert sid in song2pop_train # trust the input
logsongpop = np.log([song2pop_train[sid]+1 for sid in song_set]) # deal with 0 popularity
X = np.hstack([X, logsongpop.reshape(-1, 1)])
#return X, Y
Y = Y.tocsr()
train_ix = [song2index[sid] for sid in train_song_set]
X_train = X[train_ix, :]
Y_train = Y[train_ix, :]
dev_ix = [song2index[sid] for sid in dev_song_set]
X_dev = X[dev_ix, :]
Y_dev = Y[dev_ix, :]
test_ix = [song2index[sid] for sid in test_song_set]
X_test = X[test_ix, :]
Y_test = Y[test_ix, :]
if len(dev_song_set) > 0:
if len(test_song_set) > 0:
return X_train, Y_train.tocsc(), X_dev, Y_dev.tocsc(), X_test, Y_test.tocsc()
else:
return X_train, Y_train.tocsc(), X_dev, Y_dev.tocsc()
else:
if len(test_song_set) > 0:
return X_train, Y_train.tocsc(), X_test, Y_test.tocsc()
else:
return X_train, Y_train.tocsc()
Explanation: Create song-playlist matrix
Songs as rows, playlists as columns.
End of explanation
pkl_dir = os.path.join(data_dir, 'setting1')
fsongs = os.path.join(pkl_dir, 'songs_train_dev_test_s1.pkl.gz')
fpl = os.path.join(pkl_dir, 'playlists_s1.pkl.gz')
fxtrain = os.path.join(pkl_dir, 'X_train.pkl.gz')
fytrain = os.path.join(pkl_dir, 'Y_train.pkl.gz')
fxdev = os.path.join(pkl_dir, 'X_dev.pkl.gz')
fydev = os.path.join(pkl_dir, 'Y_dev.pkl.gz')
fxtest = os.path.join(pkl_dir, 'X_test.pkl.gz')
fytest = os.path.join(pkl_dir, 'Y_test.pkl.gz')
fxtrndev = os.path.join(pkl_dir, 'X_train_dev.pkl.gz')
fytrndev = os.path.join(pkl_dir, 'Y_train_dev.pkl.gz')
fclique_train = os.path.join(pkl_dir, 'cliques_train.pkl.gz')
fclique_trndev = os.path.join(pkl_dir, 'cliques_trndev.pkl.gz')
X_train, Y_train, X_dev, Y_dev, X_test, Y_test = gen_dataset(playlists = [t[0] for t in all_playlists],
song2feature = song2feature, song2genre = song2genre,
train_song_set = [t[0] for t in train_song_set],
dev_song_set = [t[0] for t in dev_song_set],
test_song_set = [t[0] for t in test_song_set])
Explanation: Setting I: hold a subset of songs, use all playlists
End of explanation
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_dev -= X_train_mean
X_dev /= X_train_std
X_test -= X_train_mean
X_test /= X_train_std
X_train_dev = np.vstack([X_train, X_dev])
Y_train_dev = vstack([Y_train.tolil(), Y_dev.tolil()]).tocsc().astype(np.bool)
# NOTE: explicitly set type of Y is necessary, see issue #9777 of scikit-learn
Explanation: Feature normalisation.
End of explanation
ix_ytrain = np.where(Y_train.sum(axis=0).A.reshape(-1) > 0)[0]
Y_train = Y_train[:, ix_ytrain]
Y_dev = Y_dev[:, ix_ytrain]
print('%d playlists with 0 seed song (training set)' % \
np.where(Y_train.sum(axis=0).A.reshape(-1) == 0)[0].shape[0])
print('%d playlists in dev set, among %d' % \
(Y_dev.shape[1] - np.where(Y_dev.sum(axis=0).A.reshape(-1) == 0)[0].shape[0], Y_dev.shape[1]))
Explanation: Remove playlists that have 0 songs in training set.
End of explanation
ix_ytrndev = np.where(Y_train_dev.sum(axis=0).A.reshape(-1) > 0)[0]
Y_train_dev = Y_train_dev[:, ix_ytrndev]
Y_test = Y_test[:, ix_ytrndev]
print('%d playlists with 0 seed song (training + dev set)' % \
np.where(Y_train_dev.sum(axis=0).A.reshape(-1) == 0)[0].shape[0])
print('%d playlists in test set, among %d' % \
(Y_test.shape[1] - np.where(Y_test.sum(axis=0).A.reshape(-1) == 0)[0].shape[0], Y_test.shape[1]))
Explanation: Remove playlists that have 0 songs in training + dev set.
End of explanation
print('Train : %15s %15s' % (X_train.shape, Y_train.shape))
print('Dev : %15s %15s' % (X_dev.shape, Y_dev.shape))
print('Test : %15s %15s' % (X_test.shape, Y_test.shape))
print('Trndev: %15s %15s' % (X_train_dev.shape, Y_train_dev.shape))
print(np.mean(np.mean(X_train, axis=0)))
print(np.mean( np.std(X_train, axis=0)) - 1)
print(np.mean(np.mean(X_dev, axis=0)))
print(np.mean( np.std(X_dev, axis=0)) - 1)
print(np.mean(np.mean(X_train_dev, axis=0)))
print(np.mean( np.std(X_train_dev, axis=0)) - 1)
print(np.mean(np.mean(X_test, axis=0)))
print(np.mean( np.std(X_test, axis=0)) - 1)
pkl.dump(X_train, gzip.open(fxtrain, 'wb'))
pkl.dump(Y_train, gzip.open(fytrain, 'wb'))
pkl.dump(X_dev, gzip.open(fxdev, 'wb'))
pkl.dump(Y_dev, gzip.open(fydev, 'wb'))
pkl.dump(X_test, gzip.open(fxtest, 'wb'))
pkl.dump(Y_test, gzip.open(fytest, 'wb'))
pkl.dump(X_train_dev, gzip.open(fxtrndev, 'wb'))
pkl.dump(Y_train_dev, gzip.open(fytrndev, 'wb'))
pkl.dump({'train_song_set': train_song_set, 'dev_song_set': dev_song_set, 'test_song_set': test_song_set},
gzip.open(fsongs, 'wb'))
pkl.dump(all_playlists, gzip.open(fpl, 'wb'))
Explanation: Save data.
End of explanation
user_of_playlists_train = [u for (_, u) in [all_playlists[ix] for ix in ix_ytrain]]
clique_list_train = []
for u in sorted(set(user_of_playlists_train)):
clique = np.where(u == np.array(user_of_playlists_train, dtype=np.object))[0]
clique_list_train.append(clique)
pkl.dump(clique_list_train, gzip.open(fclique_train, 'wb'))
clqsize = [len(clique) for clique in clique_list_train]
print(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))
# np.where('b' == np.array(['abcdefghi', 'b']))
# unexpected comparison results, please use a single string instead of a tuple to represent a user
# np.where((968763600.0, 'Aguilar') == np.array([(968763600.0, 'Aguilar'), (1042808400.0, 'Aguilar')],
# dtype=np.object))
assert np.all(np.arange(Y_train.shape[1]) == np.asarray(sorted([k for clq in clique_list_train for k in clq])))
Explanation: Indices of playlists from the same user (training set), playlists of the same user form a clique in the graph where playlists are nodes.
End of explanation
user_of_playlists_trndev = [u for (_, u) in [all_playlists[ix] for ix in ix_ytrndev]]
clique_list_trndev = []
for u in sorted(set(user_of_playlists_trndev)):
clique = np.where(u == np.array(user_of_playlists_trndev, dtype=np.object))[0]
clique_list_trndev.append(clique)
pkl.dump(clique_list_trndev, gzip.open(fclique_trndev, 'wb'))
clqsize = [len(clique) for clique in clique_list_trndev]
print(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))
assert np.all(np.arange(Y_train_dev.shape[1]) == \
np.asarray(sorted([k for clq in clique_list_trndev for k in clq])))
Explanation: Indices of playlists from the same user (training + dev set), playlists of the same user form a clique in the graph where playlists are nodes.
End of explanation
train_playlists = []
dev_playlists = []
test_playlists = []
dev_ratio = 0.1
test_ratio = 0.3
npl_dev = int(dev_ratio * len(all_playlists))
npl_test = int(test_ratio * len(all_playlists))
np.random.seed(RAND_SEED)
pl_indices = np.random.permutation(len(all_playlists))
test_playlists = all_playlists[:npl_test]
dev_playlists = all_playlists[npl_test:npl_test + npl_dev]
train_playlists = all_playlists[npl_test + npl_dev:]
# user_playlists = dict()
# for pl, u in all_playlists:
# try:
# user_playlists[u].append(pl)
# except KeyError:
# user_playlists[u] = [pl]
# sanity check
# npl_all = np.sum([len(user_playlists[u]) for u in user_playlists])
# print('{:30s} {:,}'.format('#users:', len(user_playlists)))
# print('{:30s} {:,}'.format('#playlists:', npl_all))
# print('{:30s} {:.2f}'.format('Average #playlists per user:', npl_all / len(user_playlists)))
# np.random.seed(RAND_SEED)
# for u in user_playlists:
# playlists_u = [(pl, u) for pl in user_playlists[u]]
# if len(user_playlists[u]) < 5:
# train_playlists += playlists_u
# else:
# npl_test = int(test_ratio * len(user_playlists[u]))
# npl_dev = int(dev_ratio * len(user_playlists[u]))
# pl_indices = np.random.permutation(len(user_playlists[u]))
# test_playlists += playlists_u[:npl_test]
# dev_playlists += playlists_u[npl_test:npl_test + npl_dev]
# train_playlists += playlists_u[npl_test + npl_dev:]
print('{:30s} {:,}'.format('#playlist in training set:', len(train_playlists)))
print('{:30s} {:,}'.format('#playlist in dev set:', len(dev_playlists)))
print('{:30s} {:,}'.format('#playlist in test set:', len(test_playlists)))
len(train_playlists) + len(dev_playlists)
xmax = np.max([len(pl) for (pl, _) in all_playlists]) + 1
ax = plt.subplot(111)
ax.hist([len(pl) for (pl, _) in train_playlists], bins=100)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
ax.set_title('Histogram of playlist length in TRAINING set')
pass
ax = plt.subplot(111)
ax.hist([len(pl) for (pl, _) in dev_playlists], bins=100)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
ax.set_title('Histogram of playlist length in DEV set')
pass
ax = plt.subplot(111)
ax.hist([len(pl) for (pl, _) in test_playlists], bins=100)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
ax.set_title('Histogram of playlist length in TEST set')
pass
Explanation: Split playlists for setting II
Split playlists (60/10/30 train/dev/test split) uniformly at random.
~~Split each user's playlists (60/20/20 train/dev/test split) uniformly at random if the user has $5$ or more playlists.~~
End of explanation
#dev_playlists_obs = [pl[:-int(len(pl)/2)] for (pl, _) in dev_playlists]
#dev_playlists_held = [pl[-int(len(pl)/2):] for (pl, _) in dev_playlists]
#test_playlists_obs = [pl[:-int(len(pl)/2)] for (pl, _) in test_playlists]
#test_playlists_held = [pl[-int(len(pl)/2):] for (pl, _) in test_playlists]
Explanation: Hold a subset of songs in dev/test playlist for setting II
~~Hold the last half of songs for playlists in dev and test set.~~
End of explanation
N_SEED_K = 1
dev_playlists_obs = []
dev_playlists_held = []
test_playlists_obs = []
test_playlists_held = []
# np.random.seed(135792468)
# for pl, _ in dev_playlists:
# npl = len(pl)
# k = np.random.choice(np.arange(1, npl))
# dev_playlists_obs.append(pl[:k])
# dev_playlists_held.append(pl[k:])
# for pl, _ in test_playlists:
# npl = len(pl)
# k = np.random.choice(np.arange(1, npl))
# test_playlists_obs.append(pl[:k])
# test_playlists_held.append(pl[k:])
for pl, _ in dev_playlists:
npl = len(pl)
k = N_SEED_K
dev_playlists_obs.append(pl[:k])
dev_playlists_held.append(pl[k:])
for pl, _ in test_playlists:
npl = len(pl)
k = N_SEED_K
test_playlists_obs.append(pl[:k])
test_playlists_held.append(pl[k:])
for ix in range(len(dev_playlists)):
assert np.all(dev_playlists[ix][0] == dev_playlists_obs[ix] + dev_playlists_held[ix])
for ix in range(len(test_playlists)):
assert np.all(test_playlists[ix][0] == test_playlists_obs[ix] + test_playlists_held[ix])
print('DEV obs: {:,} | DEV held: {:,} \nTEST obs: {:,} | TEST held: {:,}'.format(
np.sum([len(ppl) for ppl in dev_playlists_obs]), np.sum([len(ppl) for ppl in dev_playlists_held]),
np.sum([len(ppl) for ppl in test_playlists_obs]), np.sum([len(ppl) for ppl in test_playlists_held])))
song2pop_train = song2pop.copy()
song2pop_train_dev = song2pop.copy()
for ppl in dev_playlists_held:
for sid in ppl:
song2pop_train[sid] -= 1
for ppl in test_playlists_held:
for sid in ppl:
song2pop_train[sid] -= 1
song2pop_train_dev[sid] -= 1
Explanation: Keep the first $K=1,2,3,4$ songs for playlist in dev and test set.
End of explanation
pkl_dir2 = os.path.join(data_dir, 'setting2')
fpl2 = os.path.join(pkl_dir2, 'playlists_train_dev_test_s2_%d.pkl.gz' % N_SEED_K)
fy2 = os.path.join(pkl_dir2, 'Y_%d.pkl.gz' % N_SEED_K)
fxtrain2 = os.path.join(pkl_dir2, 'X_train_%d.pkl.gz' % N_SEED_K)
fytrain2 = os.path.join(pkl_dir2, 'Y_train_%d.pkl.gz' % N_SEED_K)
fxtrndev2 = os.path.join(pkl_dir2, 'X_trndev_%d.pkl.gz' % N_SEED_K)
fytrndev2 = os.path.join(pkl_dir2, 'Y_trndev_%d.pkl.gz' % N_SEED_K)
fydev2 = os.path.join(pkl_dir2, 'PU_dev_%d.pkl.gz' % N_SEED_K)
fytest2 = os.path.join(pkl_dir2, 'PU_test_%d.pkl.gz' % N_SEED_K)
fclique21 = os.path.join(pkl_dir2, 'cliques_trndev_%d.pkl.gz' % N_SEED_K)
fclique22 = os.path.join(pkl_dir2, 'cliques_all_%d.pkl.gz' % N_SEED_K)
X, Y = gen_dataset(playlists = [t[0] for t in train_playlists + dev_playlists + test_playlists],
song2feature = song2feature, song2genre = song2genre,
train_song_set = [t[0] for t in all_songs], song2pop_train=song2pop_train)
X_train = X
assert X.shape[0] == len(song2pop_train_dev)
X_train_dev = X_train.copy()
X_train_dev[:, -1] = np.log([song2pop_train_dev[sid]+1 for sid, _ in all_songs])
dev_cols = np.arange(len(train_playlists), len(train_playlists) + len(dev_playlists))
test_cols = np.arange(len(train_playlists) + len(dev_playlists), Y.shape[1])
assert len(dev_cols) == len(dev_playlists) == len(dev_playlists_obs)
assert len(test_cols) == len(test_playlists) == len(test_playlists_obs)
pkl.dump({'train_playlists': train_playlists, 'dev_playlists': dev_playlists, 'test_playlists': test_playlists,
'dev_playlists_obs': dev_playlists_obs, 'dev_playlists_held': dev_playlists_held,
'test_playlists_obs': test_playlists_obs, 'test_playlists_held': test_playlists_held},
gzip.open(fpl2, 'wb'))
song2index = {sid: ix for ix, sid in enumerate([t[0] for t in all_songs])}
Explanation: Setting II: hold a subset of songs in a subset of playlists, use all songs
End of explanation
Y_train = Y[:, :len(train_playlists)].tocsc()
Y_train_dev = Y[:, :len(train_playlists) + len(dev_playlists)].tocsc()
PU_dev = lil_matrix((len(all_songs), len(dev_playlists)), dtype=np.bool)
PU_test = lil_matrix((len(all_songs), len(test_playlists)), dtype=np.bool)
num_known_dev = 0
for j in range(len(dev_playlists)):
if (j+1) % 1000 == 0:
sys.stdout.write('\r%d / %d' % (j+1, len(dev_playlists))); sys.stdout.flush()
rows = [song2index[sid] for sid in dev_playlists_obs[j]]
PU_dev[rows, j] = True
num_known_dev += len(rows)
num_known_test = 0
for j in range(len(test_playlists)):
if (j+1) % 1000 == 0:
sys.stdout.write('\r%d / %d' % (j+1, len(test_playlists))); sys.stdout.flush()
rows = [song2index[sid] for sid in test_playlists_obs[j]]
PU_test[rows, j] = True
num_known_test += len(rows)
PU_dev = PU_dev.tocsr()
PU_test = PU_test.tocsr()
print('#unknown entries in DEV set: {:15,d} | {:15,d} \n#unknown entries in TEST set: {:15,d} | {:15,d}'.format(
np.prod(PU_dev.shape) - PU_dev.sum(), len(dev_playlists) * len(all_songs) - num_known_dev,
np.prod(PU_test.shape) - PU_test.sum(), len(test_playlists) * len(all_songs) - num_known_test))
# print('#unknown entries in Setting I: {:,}'.format((len(dev_song_set) + len(test_song_set)) * Y.shape[1]))
Explanation: Use dedicated sparse matrices to represent which entries are observed in the dev and test sets.
End of explanation
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_trndev_mean = np.mean(X_train_dev, axis=0).reshape((1, -1))
X_trndev_std = np.std(X_train_dev, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train_dev -= X_trndev_mean
X_train_dev /= X_trndev_std
print(np.mean(np.mean(X_train, axis=0)))
print(np.mean( np.std(X_train, axis=0)) - 1)
print(np.mean(np.mean(X_train_dev, axis=0)))
print(np.mean( np.std(X_train_dev, axis=0)) - 1)
print('All : %s' % str(Y.shape))
print('Train : %s, %s' % (X_train.shape, Y_train.shape))
print('Dev : %s' % str(PU_dev.shape))
print('Trndev: %s, %s' % (X_train_dev.shape, Y_train_dev.shape))
print('Test : %s' % str(PU_test.shape))
pkl.dump(X_train, gzip.open(fxtrain2, 'wb'))
pkl.dump(Y_train, gzip.open(fytrain2, 'wb'))
pkl.dump(Y, gzip.open(fy2, 'wb'))
pkl.dump(X_train_dev, gzip.open(fxtrndev2, 'wb'))
pkl.dump(Y_train_dev, gzip.open(fytrndev2, 'wb'))
pkl.dump(PU_dev, gzip.open(fydev2, 'wb'))
pkl.dump(PU_test, gzip.open(fytest2, 'wb'))
Explanation: Feature normalisation.
End of explanation
pl_users = [u for (_, u) in train_playlists + dev_playlists]
cliques_train_dev = []
for u in sorted(set(pl_users)):
clique = np.where(u == np.array(pl_users, dtype=np.object))[0]
#if len(clique) > 1:
cliques_train_dev.append(clique)
pkl.dump(cliques_train_dev, gzip.open(fclique21, 'wb'))
clqsize = [len(clq) for clq in cliques_train_dev]
print(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))
assert np.all(np.arange(Y_train_dev.shape[1]) == np.asarray(sorted([k for clq in cliques_train_dev for k in clq])))
Explanation: Build the adjacency matrix of playlists (nodes) for Setting II; playlists of the same user form a clique.
Cliques in train + dev set.
End of explanation
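# toy illustration (not part of the original pipeline; the user ids below are made up)
import numpy as np

toy_pl_users = ['u1', 'u2', 'u1', 'u3', 'u2']
toy_cliques = [np.where(np.array(toy_pl_users, dtype=object) == u)[0]
               for u in sorted(set(toy_pl_users))]
print(toy_cliques)  # [array([0, 2]), array([1, 4]), array([3])]
Explanation: A minimal added sketch of the clique construction described above, using a made-up list of playlist owners: all playlist indices belonging to the same user end up in one index array, which is the same grouping the surrounding cells perform on the real train, dev and test playlists.
End of explanation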
pl_users = [u for (_, u) in train_playlists + dev_playlists + test_playlists]
clique_list2 = []
for u in sorted(set(pl_users)):
clique = np.where(u == np.array(pl_users, dtype=np.object))[0]
#if len(clique) > 1:
clique_list2.append(clique)
pkl.dump(clique_list2, gzip.open(fclique22, 'wb'))
clqsize = [len(clq) for clq in clique_list2]
print(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))
assert np.all(np.arange(Y.shape[1]) == np.asarray(sorted([k for clq in clique_list2 for k in clq])))
Explanation: Cliques in train + dev + test set.
End of explanation |
15,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiments with Similarity Encoders
...to show that SimEc can create similarity preserving embeddings based on human ratings
This IPython notebook contains some examples that illustrate the potential of Similarity Encoders (SimEc) for creating similarity preserving embeddings. For further details and theoretical background on this new neural network architecture, please refer to the corresponding paper.
Step1: Handwritten Digits (8x8 px)
See http
Step2: SimEc based on class labels
We've seen that SimEcs can reach the same solutions as traditional spectral methods such as kPCA and isomap. However, these methods have the limitation that you can only embed new data points if you can compute their kernel map, i.e. the similarity to the training examples. But what if the similarity matrix used as targets during training was generated by an unknown process such as human similarity judgments?
To show how we can use SimEc in such a scenario, we construct the similarity matrix from the class labels assigned by human annotators (1=same class, 0=different class).
Step3: Let's first try a simple linear SimEc.
Step4: Great, we already see some clusters separating from the rest! What if we add more layers?
We can examine how the embedding changes during training
Step5: MNIST Dataset
Embedding the regular 28x28 pixel MNIST digits
Step6: "Kernel PCA" and Ridge Regression vs. SimEc
To get an idea of what a perfect similarity preserving embedding would look like when computing similarities from class labels, we can embed the data by performing an eigendecomposition of the similarity matrix (i.e. performing kernel PCA). However, since in a real setting we would be unable to compute the similarities of the test samples to the training samples (since we don't know their class labels), to map the test samples into the embedding space we additionally need to train a (ridge) regression model to map from the original input space to the embedding space.
A SimEc with multiple hidden layers starts to get close to the eigendecomposition solution.
Step7: 20 Newsgroups
To show that SimEc embeddings can also be computed for other types of data, we do some further experiments with the 20 newsgroups dataset. We subsample 7 of the 20 categories and remove meta information such as headers to avoid overfitting (see also http | Python Code:
from __future__ import unicode_literals, division, print_function, absolute_import
from builtins import range
import numpy as np
np.random.seed(28)
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA, KernelPCA
from sklearn.datasets import load_digits, fetch_mldata, fetch_20newsgroups
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
import tensorflow as tf
tf.set_random_seed(28)
import keras
# find nlputils at https://github.com/cod3licious/nlputils
from nlputils.features import FeatureTransform, features2mat
from simec import SimilarityEncoder
from utils import center_K, check_similarity_match
from utils_plotting import get_colors, plot_digits, plot_mnist, plot_20news
%matplotlib inline
%load_ext autoreload
%autoreload 2
# set this to True if you want to save the figures from the paper
savefigs = False
Explanation: Experiments with Similarity Encoders
...to show that SimEc can create similarity preserving embeddings based on human ratings
This IPython notebook contains some examples that illustrate the potential of Similarity Encoders (SimEc) for creating similarity preserving embeddings. For further details and theoretical background on this new neural network architecture, please refer to the corresponding paper.
End of explanation
# load digits dataset
digits = load_digits()
X = digits.data
X /= float(X.max())
ss = StandardScaler(with_std=False)
X = ss.fit_transform(X)
y = digits.target
n_samples, n_features = X.shape
Explanation: Handwritten Digits (8x8 px)
See http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html
End of explanation
Y = np.tile(y, (len(y), 1))
S = center_K(np.array(Y==Y.T, dtype=int))
# take only some of the samples as targets to speed it all up
n_targets = 1000
# knn accuracy using all original feature dimensions
clf = KNN(n_neighbors=10)
clf.fit(X[:n_targets], y[:n_targets])
print("knn accuracy: %f" % clf.score(X[n_targets:], y[n_targets:]))
# PCA
pca = PCA(n_components=2)
X_embedp = pca.fit_transform(X)
plot_digits(X_embedp, digits, title='Digits embedded with PCA')
clf = KNN(n_neighbors=10)
clf.fit(X_embedp[:n_targets], y[:n_targets])
print("knn accuracy: %f" % clf.score(X_embedp[n_targets:], y[n_targets:]))
# check how many relevant dimensions there are
eigenvals = np.linalg.eigvalsh(S)[::-1]
plt.figure();
plt.plot(list(range(1, S.shape[0]+1)), eigenvals, '-o', markersize=3);
plt.plot([1, S.shape[0]],[0,0], 'k--', linewidth=0.5);
plt.xlim(1, X.shape[1]+1);
plt.title('Eigenvalue spectrum of S (based on class labels)');
D, V = np.linalg.eig(S)
# regular kpca embedding: take largest EV
D1, V1 = D[np.argsort(D)[::-1]], V[:,np.argsort(D)[::-1]]
X_embed = np.dot(V1.real, np.diag(np.sqrt(np.abs(D1.real))))
plot_digits(X_embed[:,:2], digits, title='Digits embedded based on first 2 components', plot_box=False)
clf = KNN(n_neighbors=10)
clf.fit(X_embed[:n_targets,:2], y[:n_targets])
print("knn accuracy: %f" % clf.score(X_embed[n_targets:,:2], y[n_targets:]))
print("similarity approximation - mse: %f" % check_similarity_match(X_embed[:,:2], S)[0])
Explanation: SimEc based on class labels
We've seen that SimEcs can reach the same solutions as traditional spectral methods such as kPCA and isomap. However, these methods have the limitation that you can only embed new data points if you can compute their kernel map, i.e. the similarity to the training examples. But what if the similarity matrix used as targets during training was generated by an unknown process such as human similarity judgments?
To show how we can use SimEc in such a scenario, we construct the similarity matrix from the class labels assigned by human annotators (1=same class, 0=different class).
End of explanation
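# tiny made-up example of the label-based similarity target (not from the original notebook)
import numpy as np

toy_y = np.array([0, 0, 1])
toy_Y = np.tile(toy_y, (len(toy_y), 1))
toy_S_raw = np.array(toy_Y == toy_Y.T, dtype=int)
print(toy_S_raw)  # 1 where the class labels match, 0 otherwise
Explanation: A quick toy sketch of the target similarities described above, before centering: for three samples with labels [0, 0, 1] the raw matrix is [[1, 1, 0], [1, 1, 0], [0, 0, 1]], and the notebook then applies center_K to such a matrix before using it as the training target.
End of explanation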
# similarity encoder with similarities relying on class information - linear
simec = SimilarityEncoder(X.shape[1], 2, n_targets, l2_reg_emb=0.00001, l2_reg_out=0.0000001,
s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets], opt=keras.optimizers.Adamax(lr=0.005))
simec.fit(X, S[:,:n_targets])
X_embed = simec.transform(X)
plot_digits(X_embed, digits, title='Digits - SimEc (class sim, linear)')
# of course we're overfitting here quite a bit since we used all samples for training
# even if we didn't use the corresponding similarities...but this is only a toy example anyways
clf = KNN(n_neighbors=10)
clf.fit(X_embed[:n_targets], y[:n_targets])
print("knn accuracy: %f" % clf.score(X_embed[n_targets:], y[n_targets:]))
print("similarity approximation - mse: %f" % check_similarity_match(X_embed, S)[0])
Explanation: Let's first try a simple linear SimEc.
End of explanation
# similarity encoder with similarities relying on class information - 1 hidden layer
n_targets = 1000
simec = SimilarityEncoder(X.shape[1], 2, n_targets, hidden_layers=[(100, 'tanh')],
l2_reg=0.00000001, l2_reg_emb=0.00001, l2_reg_out=0.0000001,
s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets], opt=keras.optimizers.Adamax(lr=0.01))
e_total = 0
for e in [5, 10, 10, 10, 15, 25, 25]:
e_total += e
print(e_total)
simec.fit(X, S[:,:n_targets], epochs=e)
X_embed = simec.transform(X)
clf = KNN(n_neighbors=10)
clf.fit(X_embed[:1000], y[:1000])
acc = clf.score(X_embed[1000:], y[1000:])
print("knn accuracy: %f" % acc)
print("similarity approximation - mse: %f" % check_similarity_match(X_embed, S)[0])
plot_digits(X_embed, digits, title='SimEc after %i epochs; accuracy: %.1f' % (e_total, 100*acc) , plot_box=False)
Explanation: Great, we already see some clusters separating from the rest! What if we add more layers?
We can examine how the embedding changes during training: first some clusters separate, then it starts to look like the eigenvalue based embedding with the clusters of several numbers pulled together.
End of explanation
# load digits
mnist = fetch_mldata('MNIST original', data_home='data')
X = mnist.data/255. # normalize to 0-1
y = np.array(mnist.target, dtype=int)
# subsample 10000 random data points
np.random.seed(42)
n_samples = 10000
n_test = 2000
n_targets = 1000
rnd_idx = np.random.permutation(X.shape[0])[:n_samples]
X_test, y_test = X[rnd_idx[:n_test],:], y[rnd_idx[:n_test]]
X, y = X[rnd_idx[n_test:],:], y[rnd_idx[n_test:]]
# scale
ss = StandardScaler(with_std=False)
X = ss.fit_transform(X)
X_test = ss.transform(X_test)
n_train, n_features = X.shape
# compute similarity matrix based on class labels
Y = np.tile(y, (len(y), 1))
S = center_K(np.array(Y==Y.T, dtype=int))
Y = np.tile(y_test, (len(y_test), 1))
S_test = center_K(np.array(Y==Y.T, dtype=int))
Explanation: MNIST Dataset
Embedding the regular 28x28 pixel MNIST digits
End of explanation
D, V = np.linalg.eig(S)
# as a comparison: regular kpca embedding: take largest EV
D1, V1 = D[np.argsort(D)[::-1]], V[:,np.argsort(D)[::-1]]
X_embed = np.dot(V1.real, np.diag(np.sqrt(np.abs(D1.real))))
plot_mnist(X_embed[:,:2], y, title='MNIST (train) - largest 2 EV')
print("similarity approximation 2D - mse: %f" % check_similarity_match(X_embed[:,:2], S)[0])
print("similarity approximation 5D - mse: %f" % check_similarity_match(X_embed[:,:5], S)[0])
print("similarity approximation 7D - mse: %f" % check_similarity_match(X_embed[:,:7], S)[0])
print("similarity approximation 10D - mse: %f" % check_similarity_match(X_embed[:,:10], S)[0])
print("similarity approximation 25D - mse: %f" % check_similarity_match(X_embed[:,:25], S)[0])
n_targets = 2000
# get good alpha for RR model
m = Ridge()
rrm = GridSearchCV(m, {'alpha': [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.25, 0.5, 0.75, 1., 2.5, 5., 7.5, 10., 25., 50., 75., 100., 250., 500., 750., 1000.]})
rrm.fit(X, X_embed[:,:8])
alpha = rrm.best_params_["alpha"]
print("Ridge Regression with alpha: %r" % alpha)
mse_ev, mse_rr, mse_rr_test = [], [], []
mse_simec, mse_simec_test = [], []
mse_simec_hl, mse_simec_hl_test = [], []
e_dims = [2, 3, 4, 5, 6, 7, 8, 9, 10, 15]
for e_dim in e_dims:
print(e_dim)
# eigenvalue based embedding
mse = check_similarity_match(X_embed[:,:e_dim], S)[0]
mse_ev.append(mse)
# train a linear ridge regression model to learn the mapping from X to Y
model = Ridge(alpha=alpha)
model.fit(X, X_embed[:,:e_dim])
X_embed_r = model.predict(X)
X_embed_test_r = model.predict(X_test)
mse = check_similarity_match(X_embed_r, S)[0]
mse_rr.append(mse)
mse = check_similarity_match(X_embed_test_r, S_test)[0]
mse_rr_test.append(mse)
# simec - linear
simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets],
orth_reg=0.001 if e_dim > 8 else 0., l2_reg_emb=0.00001,
l2_reg_out=0.0000001, opt=keras.optimizers.Adamax(lr=0.001))
simec.fit(X, S[:,:n_targets])
X_embeds = simec.transform(X)
X_embed_tests = simec.transform(X_test)
mse = check_similarity_match(X_embeds, S)[0]
mse_simec.append(mse)
mse_t = check_similarity_match(X_embed_tests, S_test)[0]
mse_simec_test.append(mse_t)
# simec - 2hl
simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, hidden_layers=[(25, 'tanh'), (25, 'tanh')],
s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets], orth_reg=0.001 if e_dim > 7 else 0.,
l2_reg=0., l2_reg_emb=0.00001, l2_reg_out=0.0000001, opt=keras.optimizers.Adamax(lr=0.001))
simec.fit(X, S[:,:n_targets])
X_embeds = simec.transform(X)
X_embed_tests = simec.transform(X_test)
mse = check_similarity_match(X_embeds, S)[0]
mse_simec_hl.append(mse)
mse_t = check_similarity_match(X_embed_tests, S_test)[0]
mse_simec_hl_test.append(mse_t)
print("mse ev: %f; mse rr: %f (%f); mse simec (0hl): %f (%f); mse simec (2hl): %f (%f)" % (mse_ev[-1], mse_rr[-1], mse_rr_test[-1], mse_simec[-1], mse_simec_test[-1], mse, mse_t))
keras.backend.clear_session()
colors = get_colors(15)
plt.figure();
plt.plot(e_dims, mse_ev, '-o', markersize=3, c=colors[14], label='Eigendecomposition');
plt.plot(e_dims, mse_rr, '-o', markersize=3, c=colors[12], label='ED + Regression');
plt.plot(e_dims, mse_rr_test, '--o', markersize=3, c=colors[12], label='ED + Regression (test)');
plt.plot(e_dims, mse_simec, '-o', markersize=3, c=colors[8], label='SimEc 0hl');
plt.plot(e_dims, mse_simec_test, '--o', markersize=3, c=colors[8], label='SimEc 0hl (test)');
plt.plot(e_dims, mse_simec_hl, '-o', markersize=3, c=colors[4], label='SimEc 2hl');
plt.plot(e_dims, mse_simec_hl_test, '--o', markersize=3, c=colors[4], label='SimEc 2hl (test)');
plt.legend(loc=0);
plt.title('MNIST (class based similarities)');
plt.plot([0, e_dims[-1]], [0,0], 'k--', linewidth=0.5);
plt.xticks(e_dims, e_dims);
plt.xlabel('Number of Embedding Dimensions ($d$)')
plt.ylabel('Mean Squared Error of $\hat{S}$')
print("e_dims=", e_dims)
print("mse_ev=", mse_ev)
print("mse_rr=", mse_rr)
print("mse_rr_test=", mse_rr_test)
print("mse_simec=", mse_simec)
print("mse_simec_test=", mse_simec_test)
print("mse_simec_hl=", mse_simec_hl)
print("mse_simec_hl_test=", mse_simec_hl_test)
if savefigs: plt.savefig('fig_class_mse_edim.pdf', dpi=300)
Explanation: "Kernel PCA" and Ridge Regression vs. SimEc
To get an idea of what a perfect similarity preserving embedding would look like when computing similarities from class labels, we can embed the data by performing an eigendecomposition of the similarity matrix (i.e. performing kernel PCA). However, since in a real setting we would be unable to compute the similarities of the test samples to the training samples (since we don't know their class labels), to map the test samples into the embedding space we additionally need to train a (ridge) regression model to map from the original input space to the embedding space.
A SimEc with multiple hidden layers starts to get close to the eigendecomposition solution.
End of explanation
## load the data and transform it into a tf-idf representation
categories = [
"comp.graphics",
"rec.autos",
"rec.sport.baseball",
"sci.med",
"sci.space",
"soc.religion.christian",
"talk.politics.guns"
]
newsgroups_train = fetch_20newsgroups(subset='train', remove=(
'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42)
newsgroups_test = fetch_20newsgroups(subset='test', remove=(
'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42)
# store in dicts (if the text contains more than 3 words)
textdict = {i: t for i, t in enumerate(newsgroups_train.data) if len(t.split()) > 3}
textdict.update({i: t for i, t in enumerate(newsgroups_test.data, len(newsgroups_train.data)) if len(t.split()) > 3})
train_ids = [i for i in range(len(newsgroups_train.data)) if i in textdict]
test_ids = [i for i in range(len(newsgroups_train.data), len(textdict)) if i in textdict]
print("%i training and %i test samples" % (len(train_ids), len(test_ids)))
# transform into tf-idf features
ft = FeatureTransform(norm='max', weight=True, renorm='max')
docfeats = ft.texts2features(textdict, fit_ids=train_ids)
# organize in feature matrix
X, featurenames = features2mat(docfeats, train_ids)
X_test, _ = features2mat(docfeats, test_ids, featurenames)
print("%i features" % len(featurenames))
targets = np.hstack([newsgroups_train.target,newsgroups_test.target])
y = targets[train_ids]
y_test = targets[test_ids]
n_targets = 1000
target_names = newsgroups_train.target_names
# compute label based simmat
Y = np.tile(y, (len(y), 1))
S = center_K(np.array(Y==Y.T, dtype=int))
Y = np.tile(y_test, (len(y_test), 1))
S_test = center_K(np.array(Y==Y.T, dtype=int))
D, V = np.linalg.eig(S)
# as a comparison: regular kpca embedding: take largest EV
D1, V1 = D[np.argsort(D)[::-1]], V[:,np.argsort(D)[::-1]]
X_embed = np.dot(V1.real, np.diag(np.sqrt(np.abs(D1.real))))
plot_20news(X_embed[:, :2], y, target_names, title='20 newsgroups - 2 largest EV', legend=True)
print("similarity approximation 2D - mse: %f" % check_similarity_match(X_embed[:,:2], S)[0])
print("similarity approximation 5D - mse: %f" % check_similarity_match(X_embed[:,:5], S)[0])
print("similarity approximation 7D - mse: %f" % check_similarity_match(X_embed[:,:7], S)[0])
print("similarity approximation 10D - mse: %f" % check_similarity_match(X_embed[:,:10], S)[0])
print("similarity approximation 25D - mse: %f" % check_similarity_match(X_embed[:,:25], S)[0])
n_targets = 2000
# get good alpha for RR model
m = Ridge()
rrm = GridSearchCV(m, {'alpha': [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.25, 0.5, 0.75, 1., 2.5, 5., 7.5, 10., 25., 50., 75., 100., 250., 500., 750., 1000.]})
rrm.fit(X, X_embed[:,:8])
alpha = rrm.best_params_["alpha"]
print("Ridge Regression with alpha: %r" % alpha)
mse_ev, mse_rr, mse_rr_test = [], [], []
mse_simec, mse_simec_test = [], []
mse_simec_hl, mse_simec_hl_test = [], []
e_dims = [2, 3, 4, 5, 6, 7, 8, 9, 10]
for e_dim in e_dims:
print(e_dim)
# eigenvalue based embedding
mse = check_similarity_match(X_embed[:,:e_dim], S)[0]
mse_ev.append(mse)
# train a linear ridge regression model to learn the mapping from X to Y
model = Ridge(alpha=alpha)
model.fit(X, X_embed[:,:e_dim])
X_embed_r = model.predict(X)
X_embed_test_r = model.predict(X_test)
mse = check_similarity_match(X_embed_r, S)[0]
mse_rr.append(mse)
mse = check_similarity_match(X_embed_test_r, S_test)[0]
mse_rr_test.append(mse)
# simec - linear
simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets],
sparse_inputs=True, orth_reg=0.1 if e_dim > 6 else 0., l2_reg_emb=0.0001,
l2_reg_out=0.00001, opt=keras.optimizers.Adamax(lr=0.01))
simec.fit(X, S[:,:n_targets])
X_embeds = simec.transform(X)
X_embed_tests = simec.transform(X_test)
mse = check_similarity_match(X_embeds, S)[0]
mse_simec.append(mse)
mse_t = check_similarity_match(X_embed_tests, S_test)[0]
mse_simec_test.append(mse_t)
# simec - 2hl
simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, hidden_layers=[(25, 'tanh'), (25, 'tanh')], sparse_inputs=True,
s_ll_reg=1., S_ll=S[:n_targets,:n_targets], orth_reg=0.1 if e_dim > 7 else 0.,
l2_reg=0., l2_reg_emb=0.01, l2_reg_out=0.00001, opt=keras.optimizers.Adamax(lr=0.01))
simec.fit(X, S[:,:n_targets])
X_embeds = simec.transform(X)
X_embed_tests = simec.transform(X_test)
mse = check_similarity_match(X_embeds, S)[0]
mse_simec_hl.append(mse)
mse_t = check_similarity_match(X_embed_tests, S_test)[0]
mse_simec_hl_test.append(mse_t)
print("mse ev: %f; mse rr: %f (%f); mse simec (0hl): %f (%f); mse simec (2hl): %f (%f)" % (mse_ev[-1], mse_rr[-1], mse_rr_test[-1], mse_simec[-1], mse_simec_test[-1], mse, mse_t))
keras.backend.clear_session()
colors = get_colors(15)
plt.figure();
plt.plot(e_dims, mse_ev, '-o', markersize=3, c=colors[14], label='Eigendecomposition');
plt.plot(e_dims, mse_rr, '-o', markersize=3, c=colors[12], label='ED + Regression');
plt.plot(e_dims, mse_rr_test, '--o', markersize=3, c=colors[12], label='ED + Regression (test)');
plt.plot(e_dims, mse_simec, '-o', markersize=3, c=colors[8], label='SimEc 0hl');
plt.plot(e_dims, mse_simec_test, '--o', markersize=3, c=colors[8], label='SimEc 0hl (test)');
plt.plot(e_dims, mse_simec_hl, '-o', markersize=3, c=colors[4], label='SimEc 2hl');
plt.plot(e_dims, mse_simec_hl_test, '--o', markersize=3, c=colors[4], label='SimEc 2hl (test)');
plt.legend(bbox_to_anchor=(1.02, 1), loc=2, borderaxespad=0.);
plt.title('20 newsgroups (class based similarities)');
plt.plot([0, e_dims[-1]], [0,0], 'k--', linewidth=0.5);
plt.xticks(e_dims, e_dims);
plt.xlabel('Number of Embedding Dimensions ($d$)')
plt.ylabel('Mean Squared Error of $\hat{S}$')
print("e_dims=", e_dims)
print("mse_ev=", mse_ev)
print("mse_rr=", mse_rr)
print("mse_rr_test=", mse_rr_test)
print("mse_simec=", mse_simec)
print("mse_simec_test=", mse_simec_test)
print("mse_simec_hl=", mse_simec_hl)
print("mse_simec_hl_test=", mse_simec_hl_test)
Explanation: 20 Newsgroups
To show that SimEc embeddings can also be computed for other types of data, we do some further experiments with the 20 newsgroups dataset. We subsample 7 of the 20 categories and remove meta information such as headers to avoid overfitting (see also http://scikit-learn.org/stable/datasets/twenty_newsgroups.html). The posts are transformed into very high dimensional tf-idf vectors used as input to the SimEc and to compute the linear kernel matrix.
End of explanation |
15,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Case study
Step1: I'll start by getting the units we'll need from Pint.
Step2: Spider-Man
In this case study we'll develop a model of Spider-Man swinging from a springy cable of webbing attached to the top of the Empire State Building. Initially, Spider-Man is at the top of a nearby building, as shown in this diagram.
The origin, O⃗, is at the base of the Empire State Building. The vector H⃗ represents the position where the webbing is attached to the building, relative to O⃗. The vector P⃗ is the position of Spider-Man relative to O⃗. And L⃗ is the vector from the attachment point to Spider-Man.
By following the arrows from O⃗, along H⃗, and along L⃗, we can see that
H⃗ + L⃗ = P⃗
So we can compute L⃗ like this
Step4: Compute the initial position
Step6: Now here's a version of make_system that takes a Params object as a parameter.
make_system uses the given value of v_term to compute the drag coefficient C_d.
Step7: Let's make a System
Step9: Drag and spring forces
Here's drag force, as we saw in Chapter 22.
Step11: And here's the 2-D version of spring force. We saw the 1-D version in Chapter 21.
Step13: Here's the slope function, including acceleration due to gravity, drag, and the spring force of the webbing.
Step14: As always, let's test the slope function with the initial conditions.
Step15: And then run the simulation.
Step16: Visualizing the results
We can extract the x and y components as Series objects.
The simplest way to visualize the results is to plot x and y as functions of time.
Step17: We can plot the velocities the same way.
Step18: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
Step19: Letting go
Now let's find the optimal time for Spider-Man to let go. We have to run the simulation in two phases because the spring force changes abruptly when Spider-Man lets go, so we can't integrate through it.
Here are the parameters for Phase 1, running for 9 seconds.
Step20: The final conditions from Phase 1 are the initial conditions for Phase 2.
Step21: Here's the position Vector.
Step22: And the velocity Vector.
Step23: Here is the System for Phase 2. We can turn off the spring force by setting k=0, so we don't have to write a new slope function.
Step25: Here's an event function that stops the simulation when Spider-Man reaches the ground.
Step26: Run Phase 2.
Step27: Plot the results.
Step29: Now we can gather all that into a function that takes t_release and V_0, runs both phases, and returns the results.
Step30: And here's a test run.
Step31: Animation
Here's a draw function we can use to animate the results.
Step33: Maximizing range
To find the best value of t_release, we need a function that takes possible values, runs the simulation, and returns the range.
Step34: We can test it.
Step35: And run it for a few values.
Step36: Now we can use maximize_scalar to find the optimum.
Step37: Finally, we can run the simulation with the optimal value. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Case study: Spider-Man
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
degree = UNITS.degree
radian = UNITS.radian
Explanation: I'll start by getting the units we'll need from Pint.
End of explanation
params = Params(height = 381 * m,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
length = 100 * m,
angle = (270 - 45) * degree,
k = 40 * N / m,
t_0 = 0 * s,
t_end = 30 * s)
Explanation: Spider-Man
In this case study we'll develop a model of Spider-Man swinging from a springy cable of webbing attached to the top of the Empire State Building. Initially, Spider-Man is at the top of a nearby building, as shown in this diagram.
The origin, O⃗, is at the base of the Empire State Building. The vector H⃗ represents the position where the webbing is attached to the building, relative to O⃗. The vector P⃗ is the position of Spider-Man relative to O⃗. And L⃗ is the vector from the attachment point to Spider-Man.
By following the arrows from O⃗, along H⃗, and along L⃗, we can see that
H⃗ + L⃗ = P⃗
So we can compute L⃗ like this:
L⃗ = P⃗ - H⃗
The goals of this case study are:
Implement a model of this scenario to predict Spider-Man's trajectory.
Choose the right time for Spider-Man to let go of the webbing in order to maximize the distance he travels before landing.
Choose the best angle for Spider-Man to jump off the building, and let go of the webbing, to maximize range.
I'll create a Params object to contain the quantities we'll need:
According to the Spider-Man Wiki, Spider-Man weighs 76 kg.
Let's assume his terminal velocity is 60 m/s.
The length of the web is 100 m.
The initial angle of the web is 45 degrees to the left of straight down.
The spring constant of the web is 40 N / m when the cord is stretched, and 0 when it's compressed.
Here's a Params object.
End of explanation
def initial_condition(params):
Compute the initial position and velocity.
params: Params object
height, length, angle = params.height, params.length, params.angle
H⃗ = Vector(0, height)
theta = angle.to(radian)
x, y = pol2cart(theta, length)
L⃗ = Vector(x, y)
P⃗ = H⃗ + L⃗
V⃗ = Vector(0, 0) * m/s
return State(P⃗=P⃗, V⃗=V⃗)
initial_condition(params)
Explanation: Compute the initial position
End of explanation
def make_system(params):
Makes a System object for the given conditions.
params: Params object
returns: System object
init = initial_condition(params)
mass, g = params.mass, params.g
rho, area, v_term = params.rho, params.area, params.v_term
C_d = 2 * mass * g / (rho * area * v_term**2)
return System(params, init=init, C_d=C_d)
Explanation: Now here's a version of make_system that takes a Params object as a parameter.
make_system uses the given value of v_term to compute the drag coefficient C_d.
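(A short, optional aside on where that formula comes from: at terminal velocity the drag force balances gravity, so with the quadratic drag model used here $\frac{1}{2} \rho v_{term}^2 C_d A = m g$, which gives $C_d = \frac{2 m g}{\rho A v_{term}^2}$, the same expression that appears in make_system.)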
End of explanation
system = make_system(params)
system.init
Explanation: Let's make a System
End of explanation
def drag_force(V⃗, system):
Compute drag force.
V⃗: velocity Vector
system: `System` object
returns: force Vector
rho, C_d, area = system.rho, system.C_d, system.area
mag = rho * V⃗.mag**2 * C_d * area / 2
direction = -V⃗.hat()
f_drag = direction * mag
return f_drag
V⃗_test = Vector(10, 10) * m/s
drag_force(V⃗_test, system)
Explanation: Drag and spring forces
Here's drag force, as we saw in Chapter 22.
End of explanation
def spring_force(L⃗, system):
Compute drag force.
L⃗: Vector representing the webbing
system: System object
returns: force Vector
extension = L⃗.mag - system.length
if magnitude(extension) < 0:
mag = 0
else:
mag = system.k * extension
direction = -L⃗.hat()
f_spring = direction * mag
return f_spring
L⃗_test = Vector(0, -system.length-1*m)
f_spring = spring_force(L⃗_test, system)
Explanation: And here's the 2-D version of spring force. We saw the 1-D version in Chapter 21.
End of explanation
def slope_func(state, t, system):
Computes derivatives of the state variables.
state: State (x, y, x velocity, y velocity)
t: time
system: System object with g, rho, C_d, area, mass
returns: sequence (vx, vy, ax, ay)
P⃗, V⃗ = state
g, mass = system.g, system.mass
H⃗ = Vector(0, system.height)
L⃗ = P⃗ - H⃗
a_grav = Vector(0, -g)
a_spring = spring_force(L⃗, system) / mass
a_drag = drag_force(V⃗, system) / mass
A⃗ = a_grav + a_drag + a_spring
return V⃗, A⃗
Explanation: Here's the slope function, including acceleration due to gravity, drag, and the spring force of the webbing.
End of explanation
slope_func(system.init, 0, system)
Explanation: As always, let's test the slope function with the initial conditions.
End of explanation
results, details = run_ode_solver(system, slope_func)
details
Explanation: And then run the simulation.
End of explanation
def plot_position(P⃗):
x = P⃗.extract('x')
y = P⃗.extract('y')
plot(x, label='x')
plot(y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results.P⃗)
Explanation: Visualizing the results
We can extract the x and y components as Series objects.
The simplest way to visualize the results is to plot x and y as functions of time.
End of explanation
def plot_velocity(V⃗):
vx = V⃗.extract('x')
vy = V⃗.extract('y')
plot(vx, label='vx')
plot(vy, label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results.V⃗)
Explanation: We can plot the velocities the same way.
End of explanation
def plot_trajectory(P⃗, **options):
x = P⃗.extract('x')
y = P⃗.extract('y')
plot(x, y, **options)
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results.P⃗, label='trajectory')
Explanation: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
End of explanation
params1 = Params(params, t_end=9*s)
system1 = make_system(params1)
results1, details1 = run_ode_solver(system1, slope_func)
plot_trajectory(results1.P⃗, label='Phase 1')
Explanation: Letting go
Now let's find the optimal time for Spider-Man to let go. We have to run the simulation in two phases because the spring force changes abruptly when Spider-Man lets go, so we can't integrate through it.
Here are the parameters for Phase 1, running for 9 seconds.
End of explanation
t_final = get_last_label(results1) * s
Explanation: The final conditions from Phase 1 are the initial conditions for Phase 2.
End of explanation
init = results1.last_row()
init.P⃗
Explanation: Here's the position Vector.
End of explanation
init.V⃗
Explanation: And the velocity Vector.
End of explanation
system2 = System(system1, t_0=t_final, t_end=t_final+10*s, init=init, k=0*N/m)
Explanation: Here is the System for Phase 2. We can turn off the spring force by setting k=0, so we don't have to write a new slope function.
End of explanation
def event_func(state, t, system):
Stops when y=0.
state: State object
t: time
system: System object
returns: height
P⃗, V⃗ = state
return P⃗.y
Explanation: Here's an event function that stops the simulation when Spider-Man reaches the ground.
End of explanation
results2, details2 = run_ode_solver(system2, slope_func, events=event_func)
Explanation: Run Phase 2.
End of explanation
plot_trajectory(results1.P⃗, label='Phase 1')
plot_trajectory(results2.P⃗, label='Phase 2')
Explanation: Plot the results.
End of explanation
def run_two_phase(t_release, V⃗_0, params):
Run both phases.
t_release: time when Spider-Man lets go of the webbing
V_0: initial velocity
params1 = Params(params, t_end=t_release, V⃗_0=V⃗_0)
system1 = make_system(params1)
results1, details1 = run_ode_solver(system1, slope_func)
t_0 = get_last_label(results1) * s
t_end = t_0 + 10 * s
init = results1.last_row()
system2 = System(system1, t_0=t_0, t_end=t_end, init=init, k=0*N/m)
results2, details2 = run_ode_solver(system2, slope_func, events=event_func)
results = results1.combine_first(results2)
return TimeFrame(results)
Explanation: Now we can gather all that into a function that takes t_release and V_0, runs both phases, and returns the results.
End of explanation
t_release = 9 * s
V⃗_0 = Vector(0, 0) * m/s
results = run_two_phase(t_release, V⃗_0, params)
plot_trajectory(results.P⃗)
x_final = results.P⃗.last_value().x
Explanation: And here's a test run.
End of explanation
xs = results.P⃗.extract('x')
ys = results.P⃗.extract('y')
def draw_func(state, t):
set_xlim(xs)
set_ylim(ys)
x, y = state.P⃗
plot(x, y, 'bo')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
animate(results, draw_func)
Explanation: Animation
Here's a draw function we can use to animate the results.
End of explanation
def range_func(t_release, params):
Compute the final value of x.
t_release: time to release web
params: Params object
V_0 = Vector(0, 0) * m/s
results = run_two_phase(t_release, V_0, params)
x_final = results.P⃗.last_value().x
print(t_release, x_final)
return x_final
Explanation: Maximizing range
To find the best value of t_release, we need a function that takes possible values, runs the simulation, and returns the range.
End of explanation
range_func(9*s, params)
Explanation: We can test it.
End of explanation
for t_release in linrange(3, 15, 3) * s:
range_func(t_release, params)
Explanation: And run it for a few values.
End of explanation
bounds = [6, 12] * s
res = maximize_golden(range_func, bounds, params)
Explanation: Now we can use maximize_scalar to find the optimum.
End of explanation
best_time = res.x
V⃗_0 = Vector(0, 0) * m/s
results = run_two_phase(best_time, V⃗_0, params)
plot_trajectory(results.P⃗)
x_final = results.P⃗.last_value().x
Explanation: Finally, we can run the simulation with the optimal value.
End of explanation |
15,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 entity referent
self ("me")
addressee ("you here")
other ("somebody else")
2+ entity referent
self, addressee ("me and you here" / inclusive we)
self, other ("me and somebody else" / exclusive we)
addressee, addressee ("the two or more of you here")
addressee, other ("one of you here and somebody else")
other, other ("the two or more of them")
3+ entity referent
self, addressee, addressee ("me and the two or more of you here")
self, addressee, other ("me, one of you here, and somebody else")
self, other, other ("me and two or more other people")
addressee, addressee, other ("the two or more of you and somebody else")
addressee, other, other ("one of you and two or more other people")
4+ entity referent
self, addressee, addressee, other ("me, the two or more of you here, and somebody else")
self, addressee, other, other ("me, one of you here, and two or more other people")
addressee, addressee, other, other ("the two or more of you here and two or more other people")
5+ entity referent
self, addressee, addressee, other, other ("me, the two or more of you here, and two or more other people")
There are 17 possible markers if there's no distinction between 2 entities of the same type and 3+ entities of the same type.
* a dual or trial entity number could be added to have a 3-way distinction between e.g. [other, other] and [other, other, other]
* another entity category besides self, addressee, and other could be added (invisible/divine entities)
* multiple self referents could be included (choral we)
Also, what about the issue of mis-identifying the cue as "self" rather than "addressee" (kids calling themselves "you")?
Step1: Spoken English collapses these to 6 possibilities
Step2: Assume that the distribution of referent sets is uniform, which is probably not true.
Step3: With 100 trials, the learner is getting a lot of them right, but just by predicting 'you guys' or 'we' (if self is a referent) all of the time, since those cover most of the referent sets. | Python Code:
from itertools import combinations, combinations_with_replacement
# the three referent categories used throughout this entry
# (their definition is not shown in this excerpt, so this is an assumed reconstruction)
entities = ['self', 'addressee', 'other']
referents = []
for i in xrange(1, len(entities) * 2):
for combo in combinations_with_replacement(entities, i):
# choral we is impossible
if combo.count('self') > 1:
continue
# only singular vs plural
if combo.count('addressee') > 2:
continue
if combo.count('other') > 2:
continue
# compound cues
referent = list(combo)
for j in xrange(2, len(combo) + 1):
for compound in combinations(combo, j):
if compound not in referent:
referent.append(compound)
referents.append(referent)
len(referents)
referents
Explanation: 1 entity referent
self ("me")
addressee ("you here")
other ("somebody else")
2+ entity referent
self, addressee ("me and you here" / inclusive we)
self, other ("me and somebody else" / exclusive we)
addressee, addressee ("the two or more of you here")
addressee, other ("one of you here and somebody else")
other, other ("the two or more of them")
3+ entity referent
self, addressee, addressee ("me and the two or more of you here")
self, addressee, other ("me, one of you here, and somebody else")
self, other, other ("me and two or more other people")
addressee, addressee, other ("the two or more of you and somebody else")
addressee, other, other ("one of you and two or more other people")
4+ entity referent
self, addressee, addressee, other ("me, the two or more of you here, and somebody else")
self, addressee, other, other ("me, one of you here, and two or more other people")
addressee, addressee, other, other ("the two or more of you here and two or more other people")
5+ entity referent
self, addressee, addressee, other, other ("me, the two or more of you here, and two or more other people")
There are 17 possible markers if there's no distinction between 2 entities of the same type and 3+ entities of the same type.
* a dual or trial entity number could be added to have a 3-way distinction between e.g. [other, other] and [other, other, other]
* another entity category besides self, addressee, and other could be added (invisible/divine entities)
* multiple self referents could be included (choral we)
Also, what about the issue of mis-identifying the cue as "self" rather than "addressee" (kids calling themselves "you")?
End of explanation
def english(referents):
# first-person
if 'self' in referents:
if 'addressee' in referents: # inclusive we
# doesn't matter who else is being referred to
return 'we'
if 'other' in referents: # exclusive we
# doesn't matter who else is being referred to
return 'we'
return 'I'
# second-person, if the speaker isn't included
elif 'addressee' in referents:
if referents.count('addressee') > 1: # inclusive you
return 'you guys'
if 'other' in referents: # exclusive you
return 'you guys'
return 'you'
# third-person, if the addressee isn't included either
elif 'other' in referents:
if referents.count('other') > 1:
return 'they'
return 's/he'
english(['self', 'addressee'])
english(['self', 'other'])
english(['addressee', 'other'])
english(['addressee', 'addressee']) # also ('addressee', 'addressee') compound
import pandas
data = pandas.DataFrame()
data['Cues'] = referents
data['Outcomes'] = [english(referent) for referent in referents]
data
Explanation: Spoken English collapses these to 6 possibilities: I, you, s/he, we, you guys, they
End of explanation
import numpy
def sampler(p):
def uniform():
return numpy.random.choice(p)
return uniform
referent_sampler = sampler(len(data))
import ndl
def activation(W):
return pandas.DataFrame([ndl.activation(c, W) for c in data.Cues], index=data.index)
W = ndl.rw(data, M=100, distribution=referent_sampler)
A = activation(W)
A
pandas.DataFrame([data['Outcomes'], A.idxmax(1), A.idxmax(1) == data['Outcomes']],
index = ['Truth', 'Prediction', 'Accurate?']).T
Explanation: Assume that the distribution of referent sets is uniform, which is probably not true.
End of explanation
import sim
english_learning = sim.Simulation(english, data, referent_sampler, 2000)
import matplotlib.pyplot as plt
%matplotlib inline
trajectory = [english_learning.accuracy(i) for i in xrange(1, english_learning.MAX_M)]
plt.plot(range(1, len(trajectory) + 1), trajectory, '-')
plt.xlabel('Trial Number')
%load_ext rpy2.ipython
%Rpush trajectory
%%R
trajectory = data.frame(trial=1:length(trajectory), learned=trajectory)
library('ggplot2')
ggplot(trajectory, aes(trial, learned)) +
geom_point(alpha=0.25) +
stat_smooth() +
coord_cartesian(ylim=c(0,1))
Explanation: With 100 trials, the learner is getting a lot of them right, but just by predicting 'you guys' or 'we' (if self is a referent) all of the time, since those cover most of the referent sets.
End of explanation |
15,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/logo.jpg" style="display
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step5: <div class="align-center" style="display
Step6: <div class="align-center" style="display
Step7: <p style="text-align
Step8: <p style="text-align
Step9: <p style="text-align
Step10: <p style="text-align
Step11: <p style="text-align
Step12: <p style="text-align
Step13: <p style="text-align
Step14: <div class="align-center" style="display
Step15: <p style="text-align
Step17: <p style="text-align
Step18: <p style="text-align
Step19: <p style="text-align
Step21: <p style="text-align
Step22: <p style="text-align
Step24: <p style="text-align
Step25: <p style="text-align
Step26: <p style="text-align
Step28: <p style="text-align
Step29: <p style="text-align
Step30: <p style="text-align
Step31: <p style="text-align
Step32: <div class="align-center" style="display
Step34: <p style="align | Python Code:
print("Let's print a newline\nVery good. Now let us create a newline\n\twith a nested text!")
Explanation: <img src="images/logo.jpg" alt="Logo of the Python learning project: a cartoon snake in yellow and blue weaving between the letters of the course name, Learning Python. The slogan above the course name reads: a free project for learning programming in Hebrew.">
<p>Strings – Part 2</p>
<p>Escaping characters in strings</p>
<p>
Sometimes we want our string to do "special things" – like breaking to a new line in the middle of the string, or inserting a wide gap with <kbd>↹ TAB</kbd> (for tables, for example).<br>
The computer treats such indentation and line breaks as actual characters, and for every such "special character" a character sequence was defined to represent it.<br>
For example, once we know which character sequence represents a new line, we can write it as part of the string, and when we print the string we will see a line break where that sequence appears.
</p>
<p>
Let's learn the two special characters we mentioned – <code>\n</code> is a character that marks a line break, and <code>\t</code> marks a wide gap.<br>
How do we put them in our code? We simply embed them in the string wherever we want them to appear:
</p>
End of explanation
print('It\'s Friday, Friday\nGotta get down on Friday')
print("Oscar Wild once said: \"Be yourself; everyone else is already taken.\"")
Explanation: <p>
These character sequences use the <kbd>\</kbd> sign ("backslash") to mark a special character, and they are common in other languages besides Python.</p>
<p>
We use <code>\'</code> in a string whose opening and closing character is <code>'</code>. <code>\"</code> is used in the same way:<br>
</p>
End of explanation
print("The path of the document is C:\nadia\tofes161\advanced_homework.docx")
print("The path of the document is C:\\nadia\\tofes161\\advanced_homework.docx")
Explanation: <p>
Sometimes we also just want to write the character <em>\</em> itself. To do that, we use this character twice.<br>
Compare the following two strings, for example:
</p>
End of explanation
print(r"The path of the document is C:\nadia\tofes161\advanced_homework.docx")
Explanation: <p>
We can also "escape" the entire string by placing the character <code>r</code> <strong>before</strong> the string, when we do not want any special characters to be interpreted:
</p>
End of explanation
friday_song =
It's Friday, Friday
Gotta get down on Friday
Everybody's lookin' forward to the weekend, weekend
Friday, Friday
Gettin' down on Friday
Everybody's lookin' forward to the weekend
Partyin', partyin' (Yeah)
Partyin', partyin' (Yeah)
Fun, fun, fun, fun
Lookin' forward to the weekend
It's Friday, Friday
Gotta get down on Friday
Everybody's lookin' forward to the weekend, weekend
Friday, Friday
Gettin' down on Friday
Everybody's lookin' forward to the weekend
print(friday_song)
Explanation: <div class="align-center">
<img src="images/tip.png" alt="Tip!">
<p>
Python, unlike human beings, sees special characters such as <code>\n</code> and <code>\t</code> as a single character.<br>
Try checking their <code>len</code> to see for yourselves.
</p>
</div>
<div class="align-center">
<img src="images/warning.png" alt="Warning!">
<p>
Python does not cope well with the character <em>\</em> at the very end of a string.<br>
If you run into that situation, simply escape the \ with an additional \.
</p>
</div>
<div class="align-center">
<img src="images/exercise.svg" alt="Exercise">
<p>
<strong>Exercise</strong>:
The box below quotes part of Martin Luther King's famous speech, "<cite>I Have A Dream</cite>".<br>
Print it using a single <code>print</code> statement.
</p>
</div>
<blockquote>
With this faith we will be able to work together, to pray together, to struggle together, to go to jail together, to stand up for freedom together, knowing that we will be free one day.
This will be the day when all of God's children will be able to sing with new meaning, "My country 'tis of thee, sweet land of liberty, of thee I sing. Land where my fathers died, land of the Pilgrims' pride, from every mountainside, let freedom ring."
</blockquote>
<p>A "tough" string</p>
<p>
Sometimes we just want Python to deal with a block of text that we pasted in.<br>
Python has a great solution: it works even if the text is split across several lines (it will insert <code>\n</code> characters and split into lines by itself), and even when the text contains characters such as <code>'</code> or <code>"</code>.<br>
This solution is called a "tough string" (okay, truth be told, nobody actually calls it that except me), and to use it you type the character <code>"</code> or the character <code>'</code> three times on each side of the string.<br>
</p>
<p>
Let's see an example:</p>
End of explanation
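# a quick check of the tip above (my addition, not part of the original lesson):
# Python counts each escape sequence as a single character
print(len('\n'))    # 1
print(len('\t'))    # 1
print(len('a\tb'))  # 3
Explanation: A small added check of the tip mentioned earlier: even though we type two symbols, each escape sequence has a length of one.
End of explanation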
age = 18
name = 'Yam'
Explanation: <div class="align-center">
<img src="images/exercise.svg" alt="Exercise">
<p>
<strong>Exercise</strong>:
In 1963 John F. Kennedy delivered his "I am a Berliner" ("Ich bin ein Berliner") speech. Its full text appears <a href="https://en.wikisource.org/wiki/Ich_bin_ein_Berliner">here</a>.<br>
Use a single <code>print</code> command to print the entire speech.
</p>
</div>
<p>Formatted strings</p>
<p>
So far we have carefully concatenated strings using the concatenation sign <code>+</code>.<br>
Another way to do this is with an <dfn>fstring</dfn>, or by its full name, <dfn>formatted strings</dfn>.<br>
Let's see an example:
</p>
End of explanation
print("My age is " + str(age) + " and my name is " + name + ".")
Explanation: <p>
Concatenation as we have known it so far:
</p>
End of explanation
print(f"My age is {age} and my name is {name}.")
Explanation: <p>
Using <em>fstrings</em>:
</p>
End of explanation
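# a small extra sketch (my addition): an f-string is a regular string value,
# so it can be stored in a variable and may contain a simple expression in the braces
greeting = f"{name} will be {age + 1} on their next birthday."
print(greeting)
Explanation: A hedged extra example showing that f-strings are not limited to print calls: the formatted result can be kept in a variable, and the curly braces may hold a small expression.
End of explanation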
# try entering a lot of spaces before or after the username
username = input("Please enter your user: ")
username = username.strip()
print(f"This string is: {username}.")
Explanation: <p>
A few points worth noting:
</p>
<ul>
<li>To use fstrings, we added the character f before the string.</li>
<li>To mark the name of a variable we want to print, we used curly braces around the variable name.</li>
<li>When using fstrings, we did not have to perform a conversion before using the integer variable.</li>
<li>Naturally, fstrings can be used for other purposes besides printing.</li>
</ul>
<p>String methods</p>
<p>Definition</p>
<p>
So far we have learned functions that work on different kinds of data. We call them by their name and then pass them the piece of data we want to act on.<br>
<code>type</code>, for example, knows how to work on any piece of data we pass it and return its type to us.<br>
A similar idea in programming is called an "<dfn>operation</dfn>", or <dfn>method</dfn>.<br>
Unlike functions, the method's name is bound to the type of data on which we are going to invoke it.<br>
</p>
<p>
In the future we will dig deeper into the differences between a function and a method, and they will sound less abstract. Until then you are exempt from knowing this bit of trivia.<br>
One thing worth knowing is that the methods we are about to see work <em>only</em> on strings, not on other types.<br>
</p>
<p>
In the following lines we will play with examples of methods and see what they do in the cells that come afterwards.<br>
If a cell needs an explanation of what happened in it – an explanation will be attached above that cell.
</p>
<p>Cleaning up surrounding noise</p>
<p>
Quite often, whether we are receiving input from the user or handling strings that came from some external source, we will run into unnecessary characters surrounding our string.<br>
The <dfn>strip</dfn> method helps us remove them.
</p>
<p>
In its simplest form, when no arguments are passed to it, it removes all the spaces, newlines and tabs around the string:
</p>
End of explanation
strange_string = '!@#$%!!!^&This! Is! Sparta!!!!!!!!!&^%$!!!#@!'
print(strange_string.strip('~!@#$%^&*'))
Explanation: <p>
When a string is passed to it as an argument, it performs the following algorithm:
</p>
<ol>
<li>Go over each character from the beginning of the string:
<ul>
<li>As long as the character appears in the argument – delete it and continue</li>
</ul>
</li>
<li>Go over each character from the end of the string:
<ul>
<li>As long as the character appears in the argument – delete it and continue</li>
</ul>
</li>
</ol>
End of explanation
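# extra example (my addition): strip only trims the two ends of the string,
# characters in the middle are left untouched
print('!!!This! Is! Sparta!!!'.strip('!'))  # This! Is! Sparta
Explanation: A small added example that stresses one detail of the algorithm above: only leading and trailing characters are removed, so the exclamation marks inside the sentence survive.
End of explanation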
strange = "This is a very long string which contains strange words, like ululation and lollygag."
Explanation: <p>Finds and findings</p>
End of explanation
strange.find("ululation")
strange.find("lollygag")
strange.index("lollygag")
Explanation: <p>
Let's find the positions where the strange words are located.<br>
The <em>find</em> and <em>index</em> methods return the position (the index) of a substring inside another string:
</p>
End of explanation
strange.find('luculent')
strange.index('luculent')
Explanation: <p>
So wait, why do we need 2 methods if they do the same thing?<br>
If the substring is not found, <em>find</em> returns <samp>-1</samp>, whereas <em>index</em> raises an error.
</p>
End of explanation
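# hedged sketches (my addition) of how that difference is handled in practice
position = strange.find('luculent')
if position == -1:
    print("find: substring not found")
try:
    strange.index('luculent')
except ValueError:
    print("index: raised ValueError because the substring is missing")
Explanation: Two short added sketches of the difference described above: with find we test for -1 ourselves, while index forces us to catch the ValueError it raises when the substring is missing.
End of explanation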
test1 = "HeLlO WoRlD 123!"
test1
test1.upper()
test1.lower()
test1.capitalize() # only the first letter will be uppercase
test1.title() # uppercases the first letter of every word
Explanation: <div class="align-center">
<img src="images/warning.png" alt="Warning!">
<p>
These methods return only the first result.
</p>
</div>
<div class="align-center">
<img src="images/warning.png" alt="Warning!">
<p>
Intuitively it seems right to always use <em>find</em>, but there is a catch.<br>
When we are sure a result should come back, it is better to use <em>index</em>, so that we know where the error in our program comes from and can handle it quickly.
</p>
</div>
<div class="align-center">
<img src="images/exercise.svg" alt="Exercise">
<p>
<strong>Exercise</strong>:
Get two strings from the user.<br>
If the second string is found before the middle of the first string, print "Yes!"<br>
If the second string is not found before the middle of the first string, print "No!"<br>
Bonus for heroes: try to use both methods, <em>index</em> and <em>find</em>.
</p>
</div>
<p>Playing with letter case</p>
End of explanation
test1
Explanation: <p>
It is important to remember that these methods do not change the variable.<br>
If we want to change it, we have to use an assignment.
</p>
End of explanation
gettysburg_address =
Four score and seven years ago our fathers brought forth, on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.
But, in a larger sense, we cannot dedicate—we cannot consecrate—we cannot hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they here gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.
Explanation: <p>Simply Counting</p>
<p>
If we want to check how many times a substring occurs in our string, we use the <dfn>count</dfn> method.<br>
Let's see how many times some interesting words appear in the famous Gettysburg Address:
</p>
End of explanation
gettysburg_address = gettysburg_address.lower()
Explanation: <p>
First, a familiar trick: we use <em>lower</em> to get rid of the capital letters.<br>
This way <em>count</em> will also count the words that were written with capital letters:
</p>
End of explanation
gettysburg_address.count('we')
gettysburg_address.count('dedicated')
gettysburg_address.count('nation')
Explanation: <p>
Now let's check how many times Lincoln used each of the words: we, nation and dedicated.</p>
End of explanation
lyrics = So let it out and let it in, hey Jude, begin
You're waiting for someone to perform with
And don't you know that it's just you, hey Jude, you'll do
The movement you need is on your shoulder
Na na na na na na na na na yeah
lyrics.replace('Jude', 'Dude')
Explanation: <p>Replacing</p>
<p>
A very common method is <dfn>replace</dfn>, which replaces every occurrence of one substring with another.<br>
For example, let's take the second bridge of the wonderful Beatles song <cite>Hey Jude</cite> and replace every occurrence of <em>Jude</em> with <em>Dude</em>:
</p>
End of explanation
print(lyrics.replace('Jude', 'Dude'))
Explanation: <p>
Note the strange characters in the middle. These are the line breaks we learned about in the lesson. Don't worry, they will not show up when we print the string.
</p>
End of explanation
lyrics = So let it out and let it in, hey Jude, begin
You're waiting for someone to perform with
And don't you know that it's just you, hey Jude, you'll do
The movement you need is on your shoulder
Na na na na na na na na na yeah
print("Before: ")
lyrics.replace('Jude', 'Dude')
print(lyrics)
lyrics = lyrics.replace('Jude', 'Dude')
print('-' * 50)
print("After: ")
print(lyrics)
Explanation: <p>
Just a reminder that the method does not change the string itself; to change it we need an assignment.
</p>
End of explanation
i_like_to_eat = 'chocolate, fudge, cream, cookies, banana, hummus'
i_like_to_eat.split(', ')
Explanation: <p>Divide and Conquer</p>
<p>
We often want to break our text into pieces.<br>
The <dfn>split</dfn> method lets us do exactly that and returns a list of the separated items:
</p>
End of explanation
type(i_like_to_eat.split(', '))
i_like_to_eat.split(', ')[0]
Explanation: <p>
Inside the parentheses we wrote the character, or sequence of characters, that should act as the separator between items.<br>
Note that we got back a perfectly ordinary list:
</p>
End of explanation
some_paragraph =
Gadsby is a 1939 novel by Ernest Vincent Wright written as a lipogram, which does not include words that contain the letter E. The plot revolves around the dying fictional city of Branton Hills, which is revitalized as a result of the efforts of protagonist John Gadsby and a youth group he organizes.
Though vanity published and little noticed in its time, the book is a favourite of fans of constrained writing and is a sought-after rarity among some book collectors. Later editions of the book have sometimes carried the alternative subtitle 50,000 Word Novel Without the Letter "E".
Despite Wright's claim, published versions of the book may contain a handful of uses of the letter "e". The version on Project Gutenberg, for example, contains "the" three times and "officers" once.
some_paragraph.split()
Explanation: <p>
Another way to use <em>split</em> is to pass it no arguments at all.<br>
In that case, <em>split</em> breaks the string on spaces, newlines and tabs.
</p>
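For example, a quick way to count the words in a text (using the some_paragraph variable defined above):
words = some_paragraph.split()
print(len(words))   # number of whitespace-separated words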
End of explanation
i_love_to_eat = ['chocolate', 'fudge', 'cream', 'cookies', 'banana', 'hummus']
thing_to_join_by = ", "
thing_to_join_by.join(i_love_to_eat)
Explanation: <p>
This method is insanely useful and we will see it a lot.<br>
It lets us extract a great deal of information from a large amount of text.
</p>
<p>Join and Conquer</p>
<p>
Sometimes we want to do the opposite of splitting: joining!<br>
The <dfn>join</dfn> method takes a list as an argument and is called on the string that should glue the items together.<br>
Let's see an example:
</p>
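One thing to keep in mind: join only accepts strings. A minimal sketch of joining numbers by converting them first:
numbers = [1, 2, 3]
print(", ".join(str(n) for n in numbers))   # "1, 2, 3"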
End of explanation
what_i_love = ["שוקולד", "עוגות גבינה", "ארטיק", "סוכריות", "תות גינה"]
vav_ha_hibur = ' ו'
song = "אני אוהב " + vav_ha_hibur.join(what_i_love)
print(song)
Explanation: <p>
And while we are at it, a little respect for the Israeli children's book "The Sixteenth Sheep":
</p>
End of explanation
some_test = "Hello, my name is Inigo Montoya, you killed my father, prepare to die!"
is_welcoming = some_test.startswith('Hello,')
print(is_welcoming)
is_shouting = some_test.endswith('!')
print(is_shouting)
is_goodbye = some_test.endswith("Goodbye, my kind sir.")
print(is_goodbye)
address = "Python Street 5, Hadera, Israel"
print("Does the user live in Python Street?... " + str(address.startswith('Python Street')))
print("Does the user live in Scotland?... " + str(address.endswith('Scotland')))
Explanation: <p>Me, Boolean</p>
<p>
One useful trick is to check whether our string starts or ends with another substring.
</p>
End of explanation
test2 = "HELLO WORLD"
print("test2.isalnum(): " + str(test2.isalnum()))
print("test2.isalpha(): " + str(test2.isalpha()))
print("test2.isdecimal(): " + str(test2.isdecimal()))
test3 = "12345"
print("test3.isalnum(): " + str(test3.isalnum()))
print("test3.isalpha(): " + str(test3.isalpha()))
print("test3.isdecimal(): " + str(test3.isdecimal()))
test4 = "HELLOWORLD"
print("test4.isalnum(): " + str(test4.isalnum()))
print("test4.isalpha(): " + str(test4.isalpha()))
print("test4.isdecimal(): " + str(test4.isdecimal()))
test5 = "ABC123"
print("test5.isalnum(): " + str(test5.isalnum()))
print("test5.isalpha(): " + str(test5.isalpha()))
print("test5.isdecimal(): " + str(test5.isdecimal()))
Explanation: <div class="align-center" style="display: flex; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 90%">
<p>
<strong>Exercise</strong>:
Read from the user a path to a file on their computer and check whether its extension is <i dir="ltr">.docx</i>.<br>
Print an appropriate message.<br>
Example of a valid path: <i>C:\My Documents\Resume.docx</i>.
</p>
</div>
</div>
<div class="align-center" style="display: flex; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/warning.png" style="height: 50px !important;" alt="Warning!">
</div>
<div style="width: 90%">
<p>
It is easy to forget the <em>s</em> after <em>end</em> or <em>start</em> in <em>end<strong>s</strong>with</em> and <em>start<strong>s</strong>with</em>.
</p>
</div>
</div>
<p>
We can also check whether our string is of a certain kind:
</p>
End of explanation
gettysburg_address =
Four score and seven years ago our fathers brought forth, on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.
But, in a larger sense, we cannot dedicate—we cannot consecrate—we cannot hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they here gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.
Explanation: <p>Practice</p>
<p>The Gettysburg Address</p>
<p>
Use the text of the Gettysburg Address and check how many words it contains.<br>
Check how many times the words we, here, great, nation and dedicated appear, and work out what percentage of the whole text they make up.
</p>
End of explanation |
15,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Implement a custom pytorch Dataset to load image frames from a remote DICOM VL Whole Slide Microscopy Image instance
Step6: Implement a simple binary image segmentation model
Step7: Instantiate the Dataset, pass inputs to the model and receive back outputs in form of NumPy arrays
Step8: Plot an overview of model inputs and outputs
Step9: Encode model outputs in form of a DICOM Segmentation instance | Python Code:
class Dataset(torch.utils.data.Dataset):
Class for getting individual Pixel Data element frame items of a DICOM VL Whole Slide Microscopy Image data set stored on a remote server.
def __init__(self, url: str, study_id: str, series_id: str, instance_id: str):
Parameters
----------
url: str
Address of a DICOMweb origin server
study_id: str
Study Instance UID
series_id: str
Series Instance UID of a Slide Microscopy series
instance_id: str
SOP Instance UID of a VL Whole Slide Microscopy Image instance
self.client = dicomweb_client.api.DICOMwebClient(url)
metadata = self.client.retrieve_instance_metadata(
study_instance_uid=study_id,
series_instance_uid=series_id,
sop_instance_uid=instance_id
)
self.meta = pydicom.dataset.Dataset.from_json(metadata)
def __len__(self) -> int:
int: number of frames
return int(self.meta.NumberOfFrames)
def __getitem__(self, idx: int) -> numpy.ndarray:
Retrieves an individual frame.
Parameters
----------
idx: int
Zero-based frame index
Returns
-------
numpy.ndarray
Pixels of the frame
frames = self.client.retrieve_instance_frames(
study_instance_uid=self.meta.StudyInstanceUID,
series_instance_uid=self.meta.SeriesInstanceUID,
sop_instance_uid=self.meta.SOPInstanceUID,
frame_numbers=[idx+1],
media_types=('image/jpeg', 'image/jp2', )
)
buf = io.BytesIO(frames[0])
return numpy.array(PIL.Image.open(buf))
Explanation: Implement a custom pytorch Dataset to load image frames from a remote DICOM VL Whole Slide Microscopy Image instance
End of explanation
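Since the class above is a regular torch Dataset, it could also be wrapped in a DataLoader for batched access. This is only a sketch; the batch size is an arbitrary assumption, it relies on the dataset instance created further down, and every item still triggers a network request:
loader = torch.utils.data.DataLoader(dataset, batch_size=4, num_workers=0)
for batch in loader:
    print(batch.shape)  # (batch_size, Rows, Columns, SamplesPerPixel)
    break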
def model(image: numpy.ndarray) -> numpy.ndarray:
Segments a microscopy image into regions representing tissue foreground and slide background.
Parameters
----------
image: numpy.ndarray
Pixel matrix of an image or image frame
Returns
-------
numpy.ndarray
Binary mask where tissue foreground is ``True`` and slide background is ``False``
return numpy.max(image < 225, 2).astype(bool)  # numpy.bool was removed in recent NumPy releases; the built-in bool is equivalent here
Explanation: Implement a simple binary image segmentation model
End of explanation
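A quick sanity check of the thresholding model on a synthetic RGB frame (purely illustrative values):
dummy = numpy.full((16, 16, 3), 255, dtype=numpy.uint8)  # all-white "slide background"
dummy[4:8, 4:8, :] = 100                                 # a dark "tissue" patch
dummy_mask = model(dummy)
print(dummy_mask.shape, dummy_mask.dtype, dummy_mask.sum())  # (16, 16) bool 16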
dataset = Dataset(
url='https://server.dcmjs.org/dcm4chee-arc/aets/DCM4CHEE/rs',
study_id='1.2.392.200140.2.1.1.1.2.799008771.3960.1519719403.819',
series_id='1.2.392.200140.2.1.1.1.3.799008771.3960.1519719403.820',
instance_id='1.2.392.200140.2.1.1.1.4.799008771.3960.1519719570.834'
)
inputs = []
outputs = []
for i in range(len(dataset)):
image_frame = dataset[i]
mask_frame = model(image_frame)
inputs.append(image_frame)
outputs.append(mask_frame)
image = numpy.stack(inputs)
mask = numpy.stack(outputs)
print('Expected input shape : ', (int(dataset.meta.NumberOfFrames), dataset.meta.Rows, dataset.meta.Columns, dataset.meta.SamplesPerPixel))
print('Actual input shape : ', image.shape)
print('Expected output shape : ', (int(dataset.meta.NumberOfFrames), dataset.meta.Rows, dataset.meta.Columns))
print('Actual output shape : ', mask.shape)
Explanation: Instantiate the Dataset, pass inputs to the model and receive back outputs in form of NumPy arrays
End of explanation
fig, axs = matplotlib.pyplot.subplots(
nrows=10,
ncols=len(dataset) // 5,
figsize=(10, 10),
subplot_kw={'xticks': [], 'yticks': []}
)
for i, ax in enumerate(axs.flat[:(len(axs.flat) // 2)]):
ax.imshow(image[i])
for i, ax in enumerate(axs.flat[(len(axs.flat) // 2):]):
ax.imshow(mask[i].astype(numpy.uint8) * 255)
matplotlib.pyplot.tight_layout()
Explanation: Plot an overview of model inputs and outputs
End of explanation
algorithm = highdicom.content.AlgorithmIdentificationSequence(
name='Binary Image Segmentation Example',
family=pydicom.sr.codedict.codes.cid7162.ArtificialIntelligence,
version='v0.1.0'
)
segment_description = highdicom.seg.content.SegmentDescription(
segment_number=1,
segment_label='ROI #1',
segmented_property_category=pydicom.sr.codedict.codes.cid7150.Tissue,
segmented_property_type=pydicom.sr.codedict.codes.SCT.BodyFat,
algorithm_type=highdicom.seg.enum.SegmentAlgorithmTypeValues.AUTOMATIC,
algorithm_identification=algorithm
)
segmentation = highdicom.seg.sop.Segmentation(
source_images=[dataset.meta],
pixel_array=mask,
segmentation_type=highdicom.seg.enum.SegmentationTypeValues.FRACTIONAL,
segment_descriptions=[segment_description],
series_instance_uid=highdicom.uid.UID(),
series_number=2,
sop_instance_uid=highdicom.uid.UID(),
instance_number=1,
manufacturer='MGH Computational Pathology',
manufacturer_model_name='Example Jupyter Notebook',
software_versions=highdicom.version.__version__,
device_serial_number='XXX'
)
print(segmentation)
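# A possible next step (a sketch, not part of the original notebook): persist the
# Segmentation instance to disk, or push it back to the DICOMweb origin server.
segmentation.save_as('segmentation.dcm')                   # highdicom SOP instances are pydicom Datasets
# dataset.client.store_instances(datasets=[segmentation])  # STOW-RS upload; requires write access on the server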
Explanation: Encode model outputs in form of a DICOM Segmentation instance
End of explanation |
15,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Registration Exercise
Written by Gautham Narayan ([email protected]) for LSST DSFP #5
In this directory, you should be able to find two fits file from one of the projects I worked on
Step1: While the images have been de-trended, they still have the original WCS from the telescope. They aren't aligned. You could use ds9 to check this trivially, but lets do it with astropy instead.
Step2: Use the astrometry.net client (solve-field) to determine an accurate WCS solution for this field.
Step3: Options you might want to look at
Step4: or
Step5: Use the above info to solve for the WCS for both images.
The problem with the distortion polynomial that astronometry.net uses is that routines like ds9 ignore it. Look at astrometry.net's wcs-to-tan routine and convert the solved WCS to the tangent projection. | Python Code:
!ls *fits
Explanation: Image Registration Exercise
Written by Gautham Narayan ([email protected]) for LSST DSFP #5
In this directory, you should be able to find two fits file from one of the projects I worked on
End of explanation
import astropy.io.fits as afits
from astropy.wcs import WCS
from astropy.visualization import ZScaleInterval
import matplotlib
%matplotlib notebook
%pylab
##### I've given you some imports above. They're a big hint on what to try. You get to do this!!! #####
Explanation: While the images have been de-trended, they still have the original WCS from the telescope. They aren't aligned. You could use ds9 to check this trivially, but let's do it with astropy instead.
End of explanation
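One possible sketch of that check (the second file name is a placeholder to replace with the other file from the ls *fits listing; plt comes from the %pylab magic above, and f1/f2 also feed the WCS printout further down):
f1 = afits.open("wdd7.040920_0452.051_6.fits")
f2 = afits.open("SECOND_IMAGE.fits")  # placeholder: substitute the other FITS file
print(WCS(f1[0].header))
print(WCS(f2[0].header))
interval = ZScaleInterval()
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, hdul in zip(axes, (f1, f2)):
    vmin, vmax = interval.get_limits(hdul[0].data)  # adjust the extension index if the image lives elsewhere
    ax.imshow(hdul[0].data, vmin=vmin, vmax=vmax, cmap="gray", origin="lower")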
!solve-field -h
Explanation: Use the astrometry.net client (solve-field) to determine an accurate WCS solution for this field.
End of explanation
print(WCS(f1[0].header))
Explanation: Options you might want to look at:
--ra, --dec and --radius: Restrict the solution to some radius around RA and Dec. The regular telescope WCS should be plenty for an initial guess.
--scale-units, --scale-low and --scale-high: You might not know the exact pixel scale (and it's a function of where you are on the detector in any case), but you can also set limits from this based on the existing CD1_2, CD2_1 keywords.
-D, -N: Write to an output directory and write out a new FITS file with the solved WCS.
<u>Don't use out/ as the output directory.</u>
--parity: You can usually set this and get a speedup of 2x
To get info from the header, you can use astropy, or just use imhead from the WCSTools package at the command line
End of explanation
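For example, a command along these lines (every number here is a placeholder to adapt to your field, and you should confirm the option names against solve-field -h for your astrometry.net version):
!solve-field --ra 210.0 --dec 54.3 --radius 0.5 --scale-units arcsecperpix --scale-low 0.1 --scale-high 0.5 -D solved -N wdd7_solved.fits --overwrite wdd7.040920_0452.051_6.fits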
!imhead wdd7.040920_0452.051_6.fits
Explanation: or
End of explanation
!wcs-to-tan -h
Explanation: Use the above info to solve for the WCS for both images.
The problem with the distortion polynomial that astrometry.net uses is that routines like ds9 ignore it. Look at astrometry.net's wcs-to-tan routine and convert the solved WCS to the tangent projection.
End of explanation |
15,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Тест. Доверительные интервалы для долей
Step1: Большая часть млекопитающих неспособны во взрослом возрасте переваривать лактозу, содержащуюся в молоке. У людей за расщепление лактозы отвечает фермент лактаза, кодируемый геном LCT. У людей с вариантом 13910T этого гена лактаза продолжает функционировать на протяжении всей жизни. Распределение этого варианта гена сильно варьируется в различных генетических популяциях.
Из 50 исследованных представителей народа майя вариант 13910T был обнаружен у одного. Постройте нормальный 95% доверительный интервал для доли носителей варианта 13910T в популяции майя. Чему равна его нижняя граница? Округлите ответ до 4 знаков после десятичной точки.
Step2: В условиях предыдущей задачи постройте 95% доверительный интервал Уилсона для доли носителей варианта 13910T в популяции майя. Чему равна его нижняя граница? Округлите ответ до 4 знаков после десятичной точки.
Step3: Пусть в популяции майя действительно 2% носителей варианта 13910T, как в выборке, которую мы исследовали. Какой объём выборки нужен, чтобы с помощью нормального интервала оценить долю носителей гена 13910T с точностью ±0.01 на уровне доверия 95%?
Step4: Постройте график зависимости объёма выборки, необходимой для оценки для доли носителей гена 13910T с точностью ±0.01 на уровне доверия 95%, от неизвестного параметра p. Посмотрите, при каком значении p нужно больше всего испытуемых. Как вы думаете, насколько вероятно, что выборка, которую мы анализируем, взята из случайной величины с этим значением параметра?
Как бы вы не ответили на последний вопрос, рассмотреть объём выборки, необходимый при таком p, всё равно полезно — это даёт максимально пессимистичную оценку необходимого объёма выборки.
Какой объём выборки нужен в худшем случае, чтобы с помощью нормального интервала оценить долю носителей гена 13910T с точностью ±0.01 на уровне доверия 95%? | Python Code:
import numpy as np
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: Quiz. Confidence intervals for proportions
End of explanation
size = 50
data_gen = np.zeros(size)
data_gen[0] = 1
data_gen
from statsmodels.stats.proportion import proportion_confint
normal_interval = proportion_confint(sum(data_gen), len(data_gen), method = 'normal')
print('Normal interval [%.4f, %.4f] with width %f' % (normal_interval[0],
normal_interval[1],
normal_interval[1] - normal_interval[0]))
Explanation: Most mammals are unable to digest lactose, the sugar found in milk, as adults. In humans, lactose is broken down by the enzyme lactase, which is encoded by the LCT gene. In people carrying the 13910T variant of this gene, lactase keeps working throughout life. The prevalence of this variant varies widely between genetic populations.
Among 50 studied members of the Maya people, the 13910T variant was found in one person. Construct a normal 95% confidence interval for the proportion of 13910T carriers in the Maya population. What is its lower bound? Round the answer to 4 decimal places.
End of explanation
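As a cross-check, the same normal (Wald) interval can be computed by hand as p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n); a minimal sketch:
from scipy.stats import norm
p_hat = data_gen.mean()
z = norm.ppf(0.975)
half_width = z * np.sqrt(p_hat * (1 - p_hat) / len(data_gen))
print(p_hat - half_width, p_hat + half_width)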
wilson_interval = proportion_confint(sum(data_gen), len(data_gen), method = 'wilson')
print('Wilson interval [%.4f, %.4f] with width %f' % (wilson_interval[0],
wilson_interval[1],
wilson_interval[1] - wilson_interval[0]))
Explanation: In the setting of the previous problem, construct a 95% Wilson confidence interval for the proportion of 13910T carriers in the Maya population. What is its lower bound? Round the answer to 4 decimal places.
End of explanation
from statsmodels.stats.proportion import samplesize_confint_proportion
n_samples = int(np.ceil(samplesize_confint_proportion(data_gen.mean(), 0.01)))
n_samples
Explanation: Suppose the Maya population really does contain 2% carriers of the 13910T variant, as in the sample we studied. What sample size is needed to estimate the proportion of 13910T carriers to within plus or minus 0.01 with a normal interval at the 95% confidence level?
End of explanation
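samplesize_confint_proportion solves n = p * (1 - p) * (z / delta)**2 for a target half-width delta; a manual sketch of the same calculation:
from scipy.stats import norm
p = data_gen.mean()
delta = 0.01
n_manual = (norm.ppf(0.975) / delta) ** 2 * p * (1 - p)
print(int(np.ceil(n_manual)))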
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
proportion = np.linspace(0, 1, 101)
proportion
n_samples = np.empty(proportion.shape)
for i, p in enumerate(proportion):
n_samples[i] = int(np.ceil(samplesize_confint_proportion(p, 0.01)))
n_samples
plt.plot(proportion, n_samples);
n_samples[np.where(proportion == 0.5)]
Explanation: Plot how the sample size needed to estimate the proportion of 13910T carriers to within plus or minus 0.01 at the 95% confidence level depends on the unknown parameter p. See for which value of p the largest number of subjects is required. How likely do you think it is that the sample we are analysing was drawn from a random variable with that parameter value?
However you answered the last question, it is still useful to look at the sample size required for that p, since it gives the most pessimistic estimate of the required sample size.
What sample size is needed in the worst case to estimate the proportion of 13910T carriers to within plus or minus 0.01 with a normal interval at the 95% confidence level?
End of explanation |
15,030 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Write code showing how to fine-tune a pretrained GPT-2 model on custom data
| Python Code::
import numpy as np
import gpt_2_simple as gpt2
from datetime import datetime
from google.colab import files
from tensorflow.python.framework import ops
gpt2.download_gpt2(model_name="124M")
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset= file_name, model_name=model_size, steps=200, restore_from='fresh',run_name = run_name,print_every=10,sample_every=50,save_every=50,learning_rate=0.0007)
ops.reset_default_graph()
sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name=run_name)
gpt2.generate(sess, run_name=run_name, temperature=.7, length=100, prefix=None, top_k=40, nsamples=10)
|
15,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical MCMC in Python
by Dan Foreman-Mackey
A worksheet for the Local Group Astrostatistics workshop at the University of Michigan, June 2015.
Introduction
In this notebook, we'll implement a Markov Chain Monte Carlo (MCMC) algorithm and demonstrate its use on two realistic simulated datasets. First, we'll fit a line to a set of data points with Gaussian uncertainties in one dimension. This problem should never be done using MCMC in practice—the solution is analytic!—but it is useful as a functional test of the code and as a demonstration of the concepts. Next, we'll fit a power law model to a set of entries in a catalog assuming a Poisson likelihood function. This problem is very relevant to this meeting for a few reasons but we'll come back to that later.
This worksheet is written in Python and it lives in an IPython notebook. In this context, you'll be asked to write a few lines of code to implement the sampler and the models but much of the boilerplate code is already in place. Therefore, even if you're not familiar with Python, you should be able to get something out of the notebook. I don't expect that everyone will finish the full notebook but that's fine because it has been designed to get more difficult as we progress.
How to use the notebook
If you're familiar with IPython notebooks, you can probably skip this section without missing anything.
IPython notebooks work by running a fully functional Python sever behind the scenes and if you're reading this then you probably already figured out how to get that running. Then, inside the notebook, the content is divided into cells containing code or text.
You'll be asked to edit a few of the cells below to add your own code. To do this, click on the cell to start editing and then type as you normally would. To execute the code contained in the cell, press Shift-Enter. Even for existing cells that you don't need to edit, you should select them and type Shift-Enter when you get there because the cells below generally depend on the previous cells being executed first.
To get started, edit the cell below to assign your name (or whatever you want) to the variable name and then press Shift-Enter to exectue the cell.
Step1: If this works, the output should greet you without throwing any errors. If so, that's pretty much all we need so let's get started with some MCMC!
Dataset 1
Step2: Now we'll load the datapoints and plot them. When you execute the following cell, you should see a plot of the data. If not, make sure that you run the import cell from above first.
Step3: As I mentioned previously, it is pretty silly to use MCMC to solve this problem because the maximum likelihood and full posterior probability distribution (under infinitely broad priors) for the slope and intercept of the line are known analytically. Therefore, let's compute what the right answer should be before we even start. The analytic result for the posterior probability distribution is a 2-d Gaussian with mean
$$\mathbf{w} = \left(\begin{array}{c}
m \ b
\end{array}\right) = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1} \, \mathbf{A}^\mathrm{T}\,C^{-1}\,\mathbf{y}$$
and covariance matrix
$$\mathbf{V} = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1}$$
where
$$\mathbf{y} = \left(\begin{array}{c}
y_1 \ y_2 \ \vdots \ y_N
\end{array}\right) \quad , \quad \mathbf{A} = \left(\begin{array}{cc}
x_1 & 1 \ x_2 & 1 \ \vdots & \vdots \ x_N & 1
\end{array}\right) \quad ,\, \mathrm{and} \quad
\mathbf{C} = \left(\begin{array}{cccc}
\sigma_1^2 & 0 & \cdots & 0 \
0 & \sigma_2^2 & \cdots & 0 \
&&\ddots& \
0 & 0 & \cdots & \sigma_N^2
\end{array}\right)$$
There are various functions in Python for computing this but I prefer to do it myself (it only takes a few lines of code!) and here it is
Step4: We'll save these results for later to compare them to the result computed using MCMC but for now, it's nice to take a look and see what this prediction looks like. To do this, we'll sample 24 slopes and intercepts from this 2d Gaussian and overplot them on the data.
Step5: This plot is a visualization of our posterior expectations for the true underlying line that generated these data. We'll reuse this plot a few times later to test the results of our code.
The probabilistic model
In order use MCMC to perform posterior inference on a model and dataset, we need a function that computes the value of the posterior probability given a proposed setting of the parameters of the model. For reasons that will become clear below, we actually only need to return a value that is proportional to the probability.
As discussed in a previous tutorial, the posterior probability for parameters $\mathbf{w} = (m,\,b)$ conditioned on a dataset $\mathbf{y}$ is given by
$$p(\mathbf{w} \,|\, \mathbf{y}) = \frac{p(\mathbf{y} \,|\, \mathbf{w}) \, p(\mathbf{w})}{p(\mathbf{y})}$$
where $p(\mathbf{y} \,|\, \mathbf{w})$ is the likelihood and $p(\mathbf{w})$ is the prior. For this example, we're modeling the likelihood by assuming that the datapoints are independent with known Gaussian uncertainties $\sigma_n$. This specifies a likelihood function
Step6: After you're satisfied with your implementation, run the following cell. In this cell, we're checking to see if your code is right. If it is, you'll see a smiling face (☺︎) but if not, you'll get an error message.
Step7: If you don't get the ☺︎, go back and try to debug your model. Iterate until your result is correct.
Once you get that, we'll use this to implement the full model (Remember
Step8: Metropolis(–Hastings) MCMC
The simplest MCMC algorithm is generally referred to as the Metropolis method. All MCMC algorithms work by specifying a "step" that moves from one position in parameter space to another with some probability. The Metropolis step takes a position $\theta_t$ (a vector containing the slope and intercept at step $t$) to the position $\theta_{t+1}$ using the following steps
Step9: As before, here's a simple test for this function. When you run the following cell it will either print a smile or throw an exception. Since the algorithm is random, it might occasionally fail this test so if it fails once, try running it again. If it fails a second time, edit your implementation until the test consistently passes.
Step10: Running the Markov Chain
Now that we have an implementation of the Metropolis step, we can go on to sample from the posterior probability density that we implemented above. To start, we need to initialize the sampler somewhere in parameter space. In the following cell, edit your guess for the slope and intercept of the line until it looks like a reasonably good fit to the data.
Step11: In the next cell, we'll start from this initial guess for the slope and intercept and walk through parameter space (using the transition probability from above) to generate a Markov Chain of samples from the posterior probability.
There are a few tuning parameters for the method. The first and most important choice has already been covered
Step12: The results of the MCMC run are stored in the array called chain with dimensions (nstep, 2). These are samples from the posterior probability density for the parameters. We know from above that this should be a Gaussian with mean $\mathbf{w}$ and covariance $\mathbf{V}$ so let's compare the sample mean and covariance to the analytic result that we computed above
Step13: If you don't get a smile here, that could mean a few things
Step14: This plot is a representation of our contraints on the posterior probability for the slope and intercept conditioned on the data. The 2-D plot shows the full posterior and the two 1-D plots show the constraints for each parameter marginalized over the other.
The second plot that we want to make is a represnetation of the posterior predictive distribution for the data. To do this we will plot a few (50) randomly selected samples from the chain and overplot the resulting line on the data.
Step15: It is always useful to make a plot like this because it lets you see if your model is capable of describing your data or if there is anything catasrophically wrong.
Dataset 2
Step16: In the following cell, you need to implement the log-likelihood function for the problem (same as above)
Step17: As before, edit your implementation until the following test passes.
Step18: Once you're happy with this implementation, we'll define the full probabilistic model including a prior. As before, I've chosen a broad flat prior on alpha and beta but you should feel free to change this.
Step19: Now let's run the MCMC for this model. As before, you should tune the parameters of the algorithm until you get a reasonable acceptance fraction ($\sim 25- 40\%$) and the chains seem converged.
Step20: Once you're happy with to convergence of your chain, plot the results as a corner plot (compared to the values that I used to generate the dataset; $\alpha = 500$ and $\beta = -2$) and plot the posterior predictive distribution. | Python Code:
name = "YOUR NAME HERE"
print("Hello {0}!".format(name))
Explanation: Practical MCMC in Python
by Dan Foreman-Mackey
A worksheet for the Local Group Astrostatistics workshop at the University of Michigan, June 2015.
Introduction
In this notebook, we'll implement a Markov Chain Monte Carlo (MCMC) algorithm and demonstrate its use on two realistic simulated datasets. First, we'll fit a line to a set of data points with Gaussian uncertainties in one dimension. This problem should never be done using MCMC in practice—the solution is analytic!—but it is useful as a functional test of the code and as a demonstration of the concepts. Next, we'll fit a power law model to a set of entries in a catalog assuming a Poisson likelihood function. This problem is very relevant to this meeting for a few reasons but we'll come back to that later.
This worksheet is written in Python and it lives in an IPython notebook. In this context, you'll be asked to write a few lines of code to implement the sampler and the models but much of the boilerplate code is already in place. Therefore, even if you're not familiar with Python, you should be able to get something out of the notebook. I don't expect that everyone will finish the full notebook but that's fine because it has been designed to get more difficult as we progress.
How to use the notebook
If you're familiar with IPython notebooks, you can probably skip this section without missing anything.
IPython notebooks work by running a fully functional Python server behind the scenes and if you're reading this then you probably already figured out how to get that running. Then, inside the notebook, the content is divided into cells containing code or text.
You'll be asked to edit a few of the cells below to add your own code. To do this, click on the cell to start editing and then type as you normally would. To execute the code contained in the cell, press Shift-Enter. Even for existing cells that you don't need to edit, you should select them and type Shift-Enter when you get there because the cells below generally depend on the previous cells being executed first.
To get started, edit the cell below to assign your name (or whatever you want) to the variable name and then press Shift-Enter to execute the cell.
End of explanation
%matplotlib inline
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100 # This makes all the plots a little bigger.
import numpy as np
import matplotlib.pyplot as plt
Explanation: If this works, the output should greet you without throwing any errors. If so, that's pretty much all we need so let's get started with some MCMC!
Dataset 1: Fitting a line to data
Today, we're going to implement the simplest possible MCMC algorithm but before we do that, we'll need some data to test our method with.
Load the data
I've generated a simulated dataset generated from a linear model with no uncertainties in the $x$ dimension and known Gaussian uncertainties in the $y$ dimension. These data are saved in the CSV file linear.csv included with this notebook.
First we'll need numpy and matplotlib so let's import them:
End of explanation
# Load the data from the CSV file.
x, y, yerr = np.loadtxt("linear.csv", delimiter=",", unpack=True)
# Plot the data with error bars.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlim(0, 5);
Explanation: Now we'll load the datapoints and plot them. When you execute the following cell, you should see a plot of the data. If not, make sure that you run the import cell from above first.
End of explanation
A = np.vander(x, 2) # Take a look at the documentation to see what this function does!
ATA = np.dot(A.T, A / yerr[:, None]**2)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr**2))
V = np.linalg.inv(ATA)
Explanation: As I mentioned previously, it is pretty silly to use MCMC to solve this problem because the maximum likelihood and full posterior probability distribution (under infinitely broad priors) for the slope and intercept of the line are known analytically. Therefore, let's compute what the right answer should be before we even start. The analytic result for the posterior probability distribution is a 2-d Gaussian with mean
$$\mathbf{w} = \left(\begin{array}{c}
m \\ b
\end{array}\right) = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1} \, \mathbf{A}^\mathrm{T}\,C^{-1}\,\mathbf{y}$$
and covariance matrix
$$\mathbf{V} = (\mathbf{A}^\mathrm{T}\,C^{-1}\mathbf{A})^{-1}$$
where
$$\mathbf{y} = \left(\begin{array}{c}
y_1 \\ y_2 \\ \vdots \\ y_N
\end{array}\right) \quad , \quad \mathbf{A} = \left(\begin{array}{cc}
x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1
\end{array}\right) \quad ,\, \mathrm{and} \quad
\mathbf{C} = \left(\begin{array}{cccc}
\sigma_1^2 & 0 & \cdots & 0 \\
0 & \sigma_2^2 & \cdots & 0 \\
&&\ddots& \\
0 & 0 & \cdots & \sigma_N^2
\end{array}\right)$$
There are various functions in Python for computing this but I prefer to do it myself (it only takes a few lines of code!) and here it is:
End of explanation
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in np.random.multivariate_normal(w, V, size=50):
plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
Explanation: We'll save these results for later to compare them to the result computed using MCMC but for now, it's nice to take a look and see what this prediction looks like. To do this, we'll sample 24 slopes and intercepts from this 2d Gaussian and overplot them on the data.
End of explanation
def lnlike_linear((m, b)):
raise NotImplementedError("Delete this placeholder and implement this function")
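# One possible reference implementation (a sketch; the worksheet wants you to write your own).
# It is written without the Python-2-only tuple-argument syntax used in the placeholder above,
# and relies on the globally defined x, y and yerr arrays.
def lnlike_linear_sketch(theta):
    m, b = theta
    model = m * x + b
    return -0.5 * np.sum(((y - model) / yerr) ** 2)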
Explanation: This plot is a visualization of our posterior expectations for the true underlying line that generated these data. We'll reuse this plot a few times later to test the results of our code.
The probabilistic model
In order to use MCMC to perform posterior inference on a model and dataset, we need a function that computes the value of the posterior probability given a proposed setting of the parameters of the model. For reasons that will become clear below, we actually only need to return a value that is proportional to the probability.
As discussed in a previous tutorial, the posterior probability for parameters $\mathbf{w} = (m,\,b)$ conditioned on a dataset $\mathbf{y}$ is given by
$$p(\mathbf{w} \,|\, \mathbf{y}) = \frac{p(\mathbf{y} \,|\, \mathbf{w}) \, p(\mathbf{w})}{p(\mathbf{y})}$$
where $p(\mathbf{y} \,|\, \mathbf{w})$ is the likelihood and $p(\mathbf{w})$ is the prior. For this example, we're modeling the likelihood by assuming that the datapoints are independent with known Gaussian uncertainties $\sigma_n$. This specifies a likelihood function:
$$p(\mathbf{y} \,|\, \mathbf{w}) = \prod_{n=1}^N \frac{1}{\sqrt{2\,\pi\,\sigma_n^2}} \,
\exp \left(-\frac{[y_n - f_\mathbf{w}(x_n)]^2}{2\,\sigma_n^2}\right)$$
where $f_\mathbf{w}(x) = m\,x + b$ is the linear model.
For numerical reasons, we will actually want to compute the logarithm of the likelihood. In this case, this becomes:
$$\ln p(\mathbf{y} \,|\, \mathbf{w}) = -\frac{1}{2}\sum_{n=1}^N \frac{[y_n - f_\mathbf{w}(x_n)]^2}{\sigma_n^2} + \mathrm{constant} \quad.$$
In the following cell, replace the contents of the lnlike_linear function to implement this model. The function takes two values (m and b) as input and it should return the log likelihood (a single number) up to a constant. In this function, you can just use the globally defined dataset x, y and yerr. For performance, I recommend using vectorized numpy operations (the key function will be np.sum).
End of explanation
p_1, p_2 = (0.0, 0.0), (0.01, 0.01)
ll_1, ll_2 = lnlike_linear(p_1), lnlike_linear(p_2)
if not np.allclose(ll_2 - ll_1, 535.8707738280209):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
Explanation: After you're satisfied with your implementation, run the following cell. In this cell, we're checking to see if your code is right. If it is, you'll see a smiling face (☺︎) but if not, you'll get an error message.
End of explanation
def lnprior_linear((m, b)):
if not (-10 < m < 10):
return -np.inf
if not (-10 < b < 10):
return -np.inf
return 0.0
def lnpost_linear(theta):
return lnprior_linear(theta) + lnlike_linear(theta)
Explanation: If you don't get the ☺︎, go back and try to debug your model. Iterate until your result is correct.
Once you get that, we'll use this to implement the full model (Remember: we haven't added in the prior yet). For the purposes of this demonstration, we'll assume broad uniform priors on both $m$ and $b$. This isn't generally a good idea... instead, you should normally use a prior that actually represents your prior beliefs. But this a discussion for another day.
I've chosen to set the bounds on each parameter to be (-10, 10) but you should feel free to change these numbers. Since this is the log-prior, we'll return -np.inf from lnprior_linear when the parameter is outside of the allowed range. And then, since we only need to compute the probability up to a constant, we will return 0.0 (an arbitrary constant) when the parameters are valid.
Finally, the function lnpost_linear sums the output of lnprior_linear and lnlike_linear to compute the log-posterior probability up to a constant.
End of explanation
def metropolis_step(lnpost_function, theta_t, lnpost_t, step_cov):
raise NotImplementedError("Delete this placeholder and implement this function")
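# A possible reference implementation of the Metropolis step described in the explanation
# below (a sketch; try writing your own version before peeking):
def metropolis_step_sketch(lnpost_function, theta_t, lnpost_t, step_cov):
    q = np.random.multivariate_normal(theta_t, step_cov)  # propose from a Gaussian around theta_t
    lnpost_q = lnpost_function(q)
    if np.random.rand() < np.exp(lnpost_q - lnpost_t):    # accept with probability p(q)/p(theta_t)
        return q, lnpost_q
    return theta_t, lnpost_t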
Explanation: Metropolis(–Hastings) MCMC
The simplest MCMC algorithm is generally referred to as the Metropolis method. All MCMC algorithms work by specifying a "step" that moves from one position in parameter space to another with some probability. The Metropolis step takes a position $\theta_t$ (a vector containing the slope and intercept at step $t$) to the position $\theta_{t+1}$ using the following steps:
propose a new position $\mathbf{q}$ drawn from a Gaussian centered on the current position $\theta_t$
compute the probability of the new position $p(\mathbf{q}\,|\,\mathbf{y})$
draw a random number $r$ between 0 and 1 and if
$$r < \frac{p(\mathbf{q}\,|\,\mathbf{y})}{p(\mathbf{x}_t\,|\,\mathbf{y})}$$
return $\mathbf{q}$ as $\theta_{t+1}$ and, otherwise, return $\theta_t$ as $\theta_{t+1}$.
In the following cell, you'll implement this step. The function will take 4 arguments:
a function that computes the ln-probability (for this demo, it'll be lnpost_linear from above),
the current position $\theta_t$,
the ln-probability at the current point $p(\theta_t\,|\,\mathbf{y})$, and
the covariance matrix of the Gaussian proposal distribution.
It should return two values, the new coordinate $\theta_{t+1}$ and the ln-probability at that point $p(\theta_{t+1}\,|\,\mathbf{y})$. The syntax for returning multiple values is return a, b.
This function is really the key to this whole tutorial so spend some time getting it right! It is hard to robustly test functions with a random component so chat with other people around you to check your method. We'll also try to test it below but it's worth spending some time now.
There are a few functions that will come in handy here but the two most important ones are:
np.random.multivariate_normal(theta_t, step_cov) - draws a vector sample from the multivariate Gaussian centered on theta_t with covariance matrix step_cov.
np.random.rand() - draws a random number between 0 and 1.
End of explanation
lptest = lambda x: -0.5 * np.sum(x**2)
th = np.array([0.0])
lp = 0.0
chain = np.array([th for th, lp in (metropolis_step(lptest, th, lp, [[0.3]])
for _ in range(10000))])
if np.abs(np.mean(chain)) > 0.1 or np.abs(np.std(chain) - 1.0) > 0.1:
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
Explanation: As before, here's a simple test for this function. When you run the following cell it will either print a smile or throw an exception. Since the algorithm is random, it might occasionally fail this test so if it fails once, try running it again. If it fails a second time, edit your implementation until the test consistently passes.
End of explanation
# Edit these guesses.
m_initial = 2.
b_initial = 0.45
# You shouldn't need to change this plotting code.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in np.random.multivariate_normal(w, V, size=24):
plt.plot(x, m_initial*x + b_initial, "g", alpha=0.1)
plt.xlim(0, 5);
Explanation: Running the Markov Chain
Now that we have an implementation of the Metropolis step, we can go on to sample from the posterior probability density that we implemented above. To start, we need to initialize the sampler somewhere in parameter space. In the following cell, edit your guess for the slope and intercept of the line until it looks like a reasonably good fit to the data.
End of explanation
# Edit this line to specify the proposal covariance:
step = np.diag([1e-6, 1e-6])
# Edit this line to choose the number of steps you want to take:
nstep = 50000
# Edit this line to set the number steps to discard as burn-in.
nburn = 1000
# You shouldn't need to change any of the lines below here.
p0 = np.array([m_initial, b_initial])
lp0 = lnpost_linear(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
p0, lp0 = metropolis_step(lnpost_linear, p0, lp0, step)
chain[i] = p0
# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))
# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].axhline(w[0], color="g", lw=1.5)
axes[0].set_ylabel("m")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].axhline(w[1], color="g", lw=1.5)
axes[1].set_ylabel("b")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
Explanation: In the next cell, we'll start from this initial guess for the slope and intercept and walk through parameter space (using the transition probability from above) to generate a Markov Chain of samples from the posterior probability.
There are a few tuning parameters for the method. The first and most important choice has already been covered: initialization. The practical performance of an MCMC sampler depends sensitively on the initial position so it's worth spending some time choosing a good initialization.
The second big tuning parameter is the scale of the proposal distribution. We must specify the covariance matrix for the proposal Gaussian. This proposal is currently set to a very bad value. Your job is to run the sampler, look at the output, and try to tune the proposal until you find a "good" value. You will judge this based on a few things. First, you can check the acceptance fraction (the fraction of accepted proposals). For this (easy!) problem, the target is around about 50% but for harder problems in higher dimensions, a good target is around 20%. Another useful diagnostic is a plot of the parameter values as a function of step number. For example, if this looks like a random walk then your proposal scale is probably too small. Once you reach a good proposal, this plot should "look converged".
The final tuning parameter is the number of steps to take. In theory, you need to take an infitite number of steps but we don't (ever) have time for that so instead you'll want to take a large enough number of samples so that the sampler has sufficiently explored parameter space and converged to a stationary distribution. This is, of course, unknowable so for today you'll just have to go with your intuition.
You can also change the number of steps that are discarded as burn-in but (in this problem) your results shouldn't be very sensitive to this number.
Take some time now to adjust these tuning parameters and get a sense of what happens to the sampling when you change different things.
End of explanation
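A crude convergence sanity check that can complement the trace plots (only a sketch): compare the parameter means of the first and second halves of the post-burn-in chain; they should agree to well within the posterior spread.
half = (len(chain) - nburn) // 2
first, second = chain[nburn:nburn + half], chain[nburn + half:]
print(np.mean(first, axis=0) - np.mean(second, axis=0))  # should be small...
print(np.std(chain[nburn:], axis=0))                     # ...compared to these scales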
if np.any(np.abs(np.mean(chain, axis=0)-w)>0.01) or np.any(np.abs(np.cov(chain, rowvar=0)-V)>1e-4):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
Explanation: The results of the MCMC run are stored in the array called chain with dimensions (nstep, 2). These are samples from the posterior probability density for the parameters. We know from above that this should be a Gaussian with mean $\mathbf{w}$ and covariance $\mathbf{V}$ so let's compare the sample mean and covariance to the analytic result that we computed above:
End of explanation
import triangle
triangle.corner(chain[nburn:, :], labels=["m", "b"], truths=w);
Explanation: If you don't get a smile here, that could mean a few things:
you didn't run for long enough (try increasing nstep),
your choice of step scale was not good (try playing around with the definition of step), or
there's a bug in your code.
Try out all of these tuning parameters until you have a good intuition for what's going on and figure out which settings pass this test and which don't.
Plotting the results
In this section, we'll make two plots that are very useful for checking your results after you run an MCMC:
corner plot or scatterplot matrix — a plot of all the 2- and 1-D projections of the MCMC samples. To make this plot, we'll use triangle.py, a Python module specifically designed for this purpose. For simplicity, I've included the module with this notebook so you won't have to install it separately.
predictive distribution — a plot of the "posterior predicted data" overplotted on the observed data. This kind of plot can be used as a qualitative model check.
First, the corner plot:
End of explanation
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
Explanation: This plot is a representation of our constraints on the posterior probability for the slope and intercept conditioned on the data. The 2-D plot shows the full posterior and the two 1-D plots show the constraints for each parameter marginalized over the other.
The second plot that we want to make is a representation of the posterior predictive distribution for the data. To do this we will plot a few (50) randomly selected samples from the chain and overplot the resulting line on the data.
End of explanation
# Edit these guesses.
alpha_initial = 100
beta_initial = -1
# These are the edges of the distribution (don't change this).
a, b = 1.0, 5.0
# Load the data.
events = np.loadtxt("poisson.csv")
# Make a correctly normalized histogram of the samples.
bins = np.linspace(a, b, 12)
weights = 1.0 / (bins[1] - bins[0]) + np.zeros(len(events))
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)
# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
plt.plot(xx, alpha_initial * xx ** beta_initial, "g", lw=2)
# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
Explanation: It is always useful to make a plot like this because it lets you see if your model is capable of describing your data or if there is anything catastrophically wrong.
Dataset 2: Population inference
In this section, we'll go through a more realistic example problem. There is no closed-form solution for the posterior probability in this case and the model might even be relevant to your research! In this problem, we're using a simulated catalog of measurements and we want to fit for a power law rate function. This is similar to how you might go about fitting for the luminosity function of a population of stars for example.
A (wrong!) method that is sometimes used for this problem is to make a histogram of the samples and then fit a line to the log bin heights but the correct method is not much more complicated than this. Instead, we start by choosing a rate model that (in this case) will be a power law:
$$\Gamma(x) = \alpha\,x^{\beta} \quad \mathrm{for} \, a < x < b$$
and we want to find the posterior probability for $\alpha$ and $\beta$ conditioned on a set of measurements $\{x_k\}_{k=1}^K$. To do this, we need to choose a likelihood function (a generative model for the dataset). A reasonable choice in this case is the likelihood function for an inhomogeneous Poisson process (the generalization of the Poisson likelihood to a variable rate function):
$$p(\{x_k\}\,|\,\alpha,\,\beta) \propto \exp \left( - \int_a^b \Gamma(x)\,\mathrm{d}x \right) \, \prod_{k=1}^K \Gamma(x_k)$$
Because of our choice of rate function, we can easily compute the integral in the exponent:
$$\int_a^b \Gamma(x)\,\mathrm{d}x = \frac{\alpha}{\beta+1}\,\left[b^{\beta+1} - a^{\beta+1}\right]$$
Therefore, the full log-likelihood function is:
$$\ln p({x_k}\,|\,\alpha,\,\beta) = \frac{\alpha}{\beta+1}\,\left[a^{\beta+1} - b^{\beta+1}\right] + K\,\ln\alpha + \sum_{k=1}^K \beta\,\ln x_k + \mathrm{const}$$
In the next few cell, you'll implement this model and use your MCMC implementation from above to sample from the posterior for $\alpha$ and $\beta$. But first, let's load the data and plot it.
In this cell, you should change your initial guess for alpha and beta until the green line gives a good fit to the histogram.
End of explanation
def lnlike_poisson(theta):
alpha, beta = theta  # unpack explicitly; tuple parameters in the signature are Python-2-only syntax
raise NotImplementedError("Delete this placeholder and implement this function")
Explanation: In the following cell, you need to implement the log-likelihood function for the problem (same as above):
$$\ln p({x_k}\,|\,\alpha,\,\beta) = \frac{\alpha}{\beta+1}\,\left[a^{\beta+1} - b^{\beta+1}\right] + K\,\ln\alpha + \sum_{k=1}^K \beta\,\ln x_k + \mathrm{const}$$
Note that this is only valid for $\beta \ne -1$. In practice you shouldn't ever hit this boundary but, just in case, you should special-case beta == -1.0, where
$$\ln p({x_k}\,|\,\alpha,\,\beta=-1) = \alpha\,\left[\ln a - \ln b\right] + K\,\ln\alpha - \sum_{k=1}^K \ln x_k + \mathrm{const}$$
End of explanation
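For reference, here is a minimal sketch of one possible implementation; it is a direct transcription of the two formulas above and assumes the events, a and b variables defined in the data-loading cell (the additive constant is dropped, so only differences of log-likelihoods are meaningful):
def lnlike_poisson(theta):
    alpha, beta = theta
    K = len(events)
    sum_log_x = np.sum(np.log(events))
    if np.allclose(beta, -1.0):
        # Special case beta == -1.0 (see the formula above).
        return alpha * (np.log(a) - np.log(b)) + K * np.log(alpha) - sum_log_x
    norm = alpha / (beta + 1.0) * (a ** (beta + 1.0) - b ** (beta + 1.0))
    return norm + K * np.log(alpha) + beta * sum_log_x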
p_1, p_2 = (1000.0, -1.), (1500., -2.)
ll_1, ll_2 = lnlike_poisson(p_1), lnlike_poisson(p_2)
if not np.allclose(ll_2 - ll_1, 337.039175916):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
Explanation: As before, edit your implementation until the following test passes.
End of explanation
def lnprior_poisson(theta):
alpha, beta = theta
if not (0 < alpha < 1000):
return -np.inf
if not (-10 < beta < 10):
return -np.inf
return 0.0
def lnpost_poisson(theta):
return lnprior_poisson(theta) + lnlike_poisson(theta)
Explanation: Once you're happy with this implementation, we'll define the full probabilistic model including a prior. As before, I've chosen a broad flat prior on alpha and beta but you should feel free to change this.
End of explanation
# Edit this line to specify the proposal covariance:
step = np.diag([1000., 4.])
# Edit this line to choose the number of steps you want to take:
nstep = 50000
# Edit this line to set the number steps to discard as burn-in.
nburn = 1000
# You shouldn't need to change any of the lines below here.
p0 = np.array([alpha_initial, beta_initial])
lp0 = lnpost_poisson(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
p0, lp0 = metropolis_step(lnpost_poisson, p0, lp0, step)
chain[i] = p0
# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))
# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].set_ylabel("alpha")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].set_ylabel("beta")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
Explanation: Now let's run the MCMC for this model. As before, you should tune the parameters of the algorithm until you get a reasonable acceptance fraction ($\sim 25- 40\%$) and the chains seem converged.
End of explanation
triangle.corner(chain[nburn:], labels=["alpha", "beta"], truths=[500, -2]);
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)
# Plot posterior samples of the rate.
xx = np.linspace(a, b, 500)
for alpha, beta in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
plt.plot(xx, alpha * xx ** beta, "g", alpha=0.1)
# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
Explanation: Once you're happy with the convergence of your chain, plot the results as a corner plot (compared to the values that I used to generate the dataset; $\alpha = 500$ and $\beta = -2$) and plot the posterior predictive distribution.
End of explanation |
15,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 5
Step2: Leave One Out Cross Validation (LOOCV)
Instead of R's glm, we use Scikit-Learn's LinearRegression to arrive at very similar results.
Step3: K-Fold Cross Validation
Step5: Bootstrap | Python Code:
from __future__ import division
import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut
from sklearn.cross_validation import KFold
from sklearn.cross_validation import Bootstrap
from sklearn.metrics import mean_squared_error
%matplotlib inline
auto_df = pd.read_csv("../data/Auto.csv", na_values="?")
auto_df.dropna(inplace=True)
auto_df.head()
ax = auto_df.plot(x="horsepower", y="mpg", style="o")
ax.set_ylabel("mpg")
Explanation: Chapter 5: Cross-Validation and Bootstrap
End of explanation
clf = LinearRegression()
loo = LeaveOneOut(len(auto_df))
X = auto_df[["horsepower"]].values
y = auto_df["mpg"].values
n = np.shape(X)[0]
mses = []
for train, test in loo:
Xtrain, ytrain, Xtest, ytest = X[train], y[train], X[test], y[test]
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ytest, ypred))
np.mean(mses)
def loo_shortcut(X, y):
"""Implement the one-pass LOOCV calculation for linear models from ISLR page 180 (Eqn 5.2)."""
clf = LinearRegression()
clf.fit(X, y)
ypred = clf.predict(X)
xbar = np.mean(X, axis=0)
xsum = np.sum(np.power(X - xbar, 2))
nrows = np.shape(X)[0]
mses = []
for row in range(0, nrows):
hi = (1 / nrows) + (np.sum(X[row] - xbar) ** 2 / xsum)
mse = ((y[row] - ypred[row]) / (1 - hi)) ** 2  # Eqn 5.2 squares the ratio of residual to (1 - leverage)
mses.append(mse)
return np.mean(mses)
loo_shortcut(auto_df[["horsepower"]].values, auto_df["mpg"].values)
# LOOCV against models of different degrees
auto_df["horsepower^2"] = auto_df["horsepower"] * auto_df["horsepower"]
auto_df["horsepower^3"] = auto_df["horsepower^2"] * auto_df["horsepower"]
auto_df["horsepower^4"] = auto_df["horsepower^3"] * auto_df["horsepower"]
auto_df["horsepower^5"] = auto_df["horsepower^4"] * auto_df["horsepower"]
auto_df["unit"] = 1
colnames = ["unit", "horsepower", "horsepower^2", "horsepower^3", "horsepower^4", "horsepower^5"]
cv_errors = []
for ncols in range(2, 6):
X = auto_df[colnames[0:ncols]]
y = auto_df["mpg"]
clf = LinearRegression()
clf.fit(X, y)
cv_errors.append(loo_shortcut(X.values, y.values))
plt.plot(range(1,5), cv_errors)
plt.xlabel("degree")
plt.ylabel("cv.error")
Explanation: Leave One Out Cross Validation (LOOCV)
Instead of R's glm, we use Scikit-Learn's LinearRegression to arrive at very similar results.
End of explanation
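The shortcut implemented in loo_shortcut above follows ISLR equation (5.2): for a least-squares fit, the LOOCV estimate can be computed from a single fit as
$$CV_{(n)} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{1 - h_i}\right)^2,$$
where $h_i$ is the leverage of observation $i$.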
cv_errors = []
for ncols in range(2, 6):
# each ncol corresponds to a polynomial model
X = auto_df[colnames[0:ncols]].values
y = auto_df["mpg"].values
kfold = KFold(len(auto_df), n_folds=10)
mses = []
for train, test in kfold:
# each model is cross validated 10 times
Xtrain, ytrain, Xtest, ytest = X[train], y[train], X[test], y[test]
clf = LinearRegression()
clf.fit(Xtrain, ytrain)  # fit on the training folds only
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ypred, ytest))
cv_errors.append(np.mean(mses))
plt.plot(range(1,5), cv_errors)
plt.xlabel("degree")
plt.ylabel("cv.error")
Explanation: K-Fold Cross Validation
End of explanation
cv_errors = []
for ncols in range(2, 6):
# each ncol corresponds to a polynomial model
X = auto_df[colnames[0:ncols]].values
y = auto_df["mpg"].values
n = len(auto_df)
bs = Bootstrap(n, train_size=int(0.9*n), test_size=int(0.1*n), n_iter=10, random_state=0)
mses = []
for train, test in bs:
# each model is resampled 10 times
Xtrain, ytrain, Xtest, ytest = X[train], y[train], X[test], y[test]
clf = LinearRegression()
clf.fit(Xtrain, ytrain)  # fit on the bootstrap training sample only
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ypred, ytest))
cv_errors.append(np.mean(mses))
plt.plot(range(1,5), cv_errors)
plt.xlabel("degree")
plt.ylabel("cv.error")
def alpha(x, y):
"""Allocate alpha of your assets to x and (1 - alpha) to y for the optimum (minimum-variance) split."""
vx = np.var(x)
vy = np.var(y)
cxy = np.cov(x, y)
return ((vy - cxy) / (vx + vy - 2 * cxy))[0, 1]
# From ISLR package, retrieved with write.csv(Portfolio, "portfolio.csv", row.names=FALSE)
portfolio_df = pd.read_csv("../data/Portfolio.csv")
portfolio_df.head()
alpha(portfolio_df["X"].values, portfolio_df["Y"].values)
# Find the variance of alpha - shows that bootstrapping results in a near-normal distribution
X = portfolio_df["X"].values
Y = portfolio_df["Y"].values
bs = Bootstrap(len(portfolio_df), n_iter=1000, train_size=99, random_state=0)
alphas = []
for train, test in bs:
xtrain, ytrain = X[train], Y[train]
alphas.append(alpha(xtrain, ytrain))
plt.hist(alphas)
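# The spread of the bootstrap replicates estimates the standard error of alpha;
# printing it alongside the mean summarizes the histogram above.
print("alpha = {:.4f} +/- {:.4f}".format(np.mean(alphas), np.std(alphas)))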
Explanation: Bootstrap
End of explanation |
15,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comment mettre en oeuvre une régression linéaire avec python ?
Germain Salvato-Vallverdu [email protected]
L'objectif de ce TP est de mettre en pratique le langage python pour réaliser une regression linéaire. L'idée est, dans un premier temps, de reprendre les éléments de base du langage (condition, boucles ...) pour créer un outil qui calcule les paramètres par la méthode des moindres carrés. Dans une deuxième partie les modules spécifiques tels que numpy ou matplotlib seront mis à profit pour réaliser la même opération avec des méthodes existantes.
Note
Step1: Lecture du fichier ligne par ligne
Step2: On va maintenant enregistrer les valeurs xi et yi dans des listes. Par ailleurs, on peut remarquer que les valeurs sont lues comme des chaînes de caractères. Il faut les convertir en nombre flottants avec la fonction float().
Step3: Nous avons maintenant nos listes de valeurs de x et y
Step4: Nous verrons par la suite comment lire ce type de fichier de façon efficace avec la méthode loadtxt() de numpy.
Étape 1
Step5: On fait de même pour la somme des valeurs de $y$ et des valeurs de $x$ au carré.
Step6: Il reste à calculer la somme des produits $x\times y$. Pour parcourir à la fois la liste des valeurs de $x$ et de $y$ on utilise la fonction zip qui joint les deux listes
Step7: Mettons cela à profit pour calculer la somme
Step8: Maintenant que nous disposons de toutes les valeurs nécessaires, il ne reste plus qu'à calculer $a$ et $b$. Nous avons encore besoin du nombre de points. La fonction len donne le nombre d'éléments d'une liste.
Step9: D'après les équations présentées en introduction
Step11: Étape 2
Step12: Utilisons maintenant cette nouvelle fonction.
Step13: Pour afficher les nombres flotant, il est possible d'utiliser un format. Deux syntaxes existent suivant la version de python
Step14: Étape 3
Step15: La somme des valeurs d'un tableau peut être obtenue par la méthode sum().
Step16: La somme des carrés ou des produits peut également être obtenue aisément.
Step17: Pour calculer les produits entre deux arrays numpy il suffit de les multiplier
Step18: La somme se calcule alors de la même manière que précédemment pour le carré
Step20: Nous pouvons maintenant simplifier la fonction regLin en utilisant les fonctions de numpy
Step21: La nouvelle fonction renvoie évidemment les mêmes résultats
Step22: Étape 4
Step23: Les valeurs de $x$ correspondent à la première colonne.
Step24: Les valeurs de $y$ correspondent à la deuxième colonne.
Step25: Le tout peut être fait en une seul ligne
Step26: Utilisation de la méthode polyfit
La méthode polytfit du module numpy prend comme argument les valeurs de $x$, de $y$ et le degré du polynome (1 ici puisqu'il s'agit d'une droite).
Step27: Les paramètres sont bien les mêmes que ceux que nous avions déterminés à la main.
Step28: Utilisation de la méthode linregress
La méthode linregress est contenue dans le module stats du module scipy. Elle prend comme argument les valeurs de $x$, de $y$ et retourne en plus des paramètres $a$ et $b$, le coefficiant de corrélation.
Commençons par importer la méthode linregress.
Step29: Utilisons maintenant la méthode linregress.
Step30: Représentation graphique
Python offre la possibilité de réaliser une réprésentation graphique en utilisant le module matplotlib. Voici un exemple d'utilisation de ce module pour représenter les points $(x, y)$ et la droite de régression précédemment déterminée.
Pour une utilisation dans ipython il faut d'abord préparer l'environnement pour intégrer matplotlib.
Step31: Chargeons le module matplotlib.
Step32: Préparons maintenant le graphique.
Step33: Pour aller plus loin
Step34: Utilisons maintenant la fonction linregress pour trouver les paramètres de la régression linéaire
Step35: Nous allons maintenant visualiser les points et la droite de régression dans un premier graphique, puis, les résidus dans un second graphique. Pour ce faire, nous utiliserons la méthode subplot de matplotlib qui place des graphiques sur une grille. Les trois arguments de subplot sont le nombre de lignes, le nombre de colonnes et le numéro du graphique.
Step36: Les résidus sont un bon indicateur de la qualité du modèle. Ici, on dit que les résidus sont 'structurés'. Ils ne sont pas aléatoirement répartis autour de zéro, ils présentent une variation parabolique. Cette structure des résidus indique qu'une fonction affine n'est pas adaptée pour représenter les données.
Utilisons la méthode polyfit pour ajuster une parabolle
Step37: Reprennons les lignes précédentes pour représenter les données graphiquement.
Step38: Cette fois, les résidus sont bien répartis aléatoirement autour de l'origine. Calculons le coefficient de détermination selon
\begin{equation}
R^2 = \frac{\sum_k (y^{calc}_k - \overline{y})^2}{\sum_k (y_k - \overline{y})^2}
\end{equation}
où les $y_k$ sont les données, les $y^{calc}$ sont ceux calculés avec le polynômes et $\overline{y}$ est la moyenne des valeurs $y_k$. | Python Code:
cat data/donnees.dat
Explanation: Comment mettre en oeuvre une régression linéaire avec python ?
Germain Salvato-Vallverdu [email protected]
L'objectif de ce TP est de mettre en pratique le langage python pour réaliser une regression linéaire. L'idée est, dans un premier temps, de reprendre les éléments de base du langage (condition, boucles ...) pour créer un outil qui calcule les paramètres par la méthode des moindres carrés. Dans une deuxième partie les modules spécifiques tels que numpy ou matplotlib seront mis à profit pour réaliser la même opération avec des méthodes existantes.
Note : Ce notebook est compatible python2 et python3. Il est recommandé d'utiliser python3.
Sommaire
Introduction
Cahier des charges
Rappels mathématiques
Progression
Programation
Étape 0: Lecture du fichier de données
Étape 1: À la main
Étape 2: Créer une fonction
Étape 3: Utilisation du module numpy
Étape 4: Utilisation des méthodes prédéfinies dans numpy et scipy
Représentation graphique
Pour aller plus loin ; les résidus
Conlusion
Introduction
Cahier des charges
Le programme que nous allons écrire devra réaliser les opérations suivantes :
Lire les valeurs des valeurs de $x$ et $y$ sur le fichier donnees.dat.
Calculer les paramètres $a$ et $b$ de la régression linéaire par la méthode des moindres carrés et les afficher.
(bonus) Représenter les points $(x,y)$ et tracer la droite de régression.
Rappels mathématiques
La régression linéaire consiste à chercher les paramètres $a$ et $b$ définissant la droite $y=ax+b$ qui passe au plus près d'un ensemble de points $(x_k,y_k)$. Les paramètres $a$ et $b$ sont déterminés par la méthodes des moindres carrés qui consiste, dans le cas d'une régression linéaire, à minimiser la quantité :
\begin{equation}
Q(a, b) = \sum_{k=1}^N (y_k - a x_k - b)^2
\end{equation}
Le minimum de $Q(a,b)$ est obtenu lorsque ses dérivées par rapport à $a$ et $b$ sont nulles. Il faut donc résoudre le système à deux équations deux inconnues suivant :
\begin{align}
&
\begin{cases}
\displaystyle\frac{\partial Q(a,b)}{\partial a} = 0 \
\displaystyle\frac{\partial Q(a,b)}{\partial b} = 0
\end{cases}
&
\Leftrightarrow &
&
&
\begin{cases}
\displaystyle -2 \sum_{k=1}^N x_k \left(y_k - a x_k - b\right) = 0 \
\displaystyle -2 \sum_{k=1}^N \left(y_k - a x_k - b\right) = 0
\end{cases}
\end{align}
Les solutions de ce système sont :
\begin{align}
a & = \frac{\displaystyle N \sum_{k=1}^N x_k y_k - \sum_{k=1}^N x_k\sum_{k=1}^N y_k}{\displaystyle N\sum_{k=1}^N x_k^2 - \left(\sum_{k=1}^N x_k\right)^2} &
b & = \frac{\displaystyle \sum_{k=1}^N x_k^2 \sum_{k=1}^N y_k - \sum_{k=1}^N x_k\sum_{k=1}^N x_k y_k}{\displaystyle N\sum_{k=1}^N x_k^2 - \left(\sum_{k=1}^N x_k\right)^2}
\end{align}
Progression
Le programme sera écrit de plusieurs façon différentes.
Tous les calculs seront réalisés à la main.
Création d'une fonction qui réalise la régression linéaire
Utilisation du module numpy pour simplifier les calculs
Utilisation des méthodes des modules numpy/scipy pour réaliser la régression linéaire
(bonus) Utilisation du module matplotlib pour représenter les points et la droite de régression.
Programmation
Étape 0: Lecture du fichier donnees.dat
Contenu du fichier donnees.dat
End of explanation
with open("data/donnees.dat", "r") as inp:
for line in inp:
xi, yi = line.split()
print(xi, yi)
print("type de xi : ", type(xi))
Explanation: Reading the file line by line:
End of explanation
# création des listes
x = list()
y = list()
# lecture du fichier
with open("data/donnees.dat", "r") as inp:
for line in inp:
xi, yi = line.split()
x.append(float(xi))
y.append(float(yi))
Explanation: We will now store the xi and yi values in lists. Note also that the values are read as character strings; they must be converted to floating-point numbers with the float() function.
End of explanation
print(x)
print(y)
Explanation: We now have our lists of x and y values:
End of explanation
# initialisation
x_sum = 0
# calcul de la somme
for xi in x:
x_sum += xi
# affichage
print("somme des valeurs de x = ", x_sum)
Explanation: We will see later how to read this kind of file efficiently with numpy's loadtxt() method.
Stage 1: By hand
In this stage we directly use the formulas presented in the introduction to compute the values of the parameters $a$ and $b$. Let's start by computing the sum of the $x$ values. The important point is not to forget to initialize the value of the sum.
End of explanation
y_sum = 0.
for yi in y:
y_sum += yi
print("somme des valeurs de y = ", y_sum)
x2_sum = 0.
for xi in x:
x2_sum += xi**2
print("somme des valeurs de x^2 = ", x2_sum)
Explanation: We do the same for the sum of the $y$ values and of the squared $x$ values.
End of explanation
for xi, yi in zip(x, y):
print("xi = ", xi, "\tyi = ", yi)
Explanation: It remains to compute the sum of the products $x\times y$. To loop over the $x$ and $y$ lists at the same time, we use the zip function, which joins the two lists:
End of explanation
xy_sum = 0.
for xi, yi in zip(x, y):
xy_sum += xi * yi
print("somme des valeurs de x*y = ", xy_sum)
Explanation: Let's use this to compute the sum:
End of explanation
npoints = len(x)
print("Nombre de points = ", npoints)
Explanation: Now that we have all the required values, all that is left is to compute $a$ and $b$. We still need the number of points. The len function gives the number of elements of a list.
End of explanation
a = (npoints * xy_sum - x_sum * y_sum) / (npoints * x2_sum - x_sum**2)
print("a = ", a)
b = (x2_sum * y_sum - x_sum * xy_sum) / (npoints * x2_sum - x_sum**2)
print("b = ", b)
Explanation: According to the equations presented in the introduction:
End of explanation
def regLin(x, y):
"""Fit a straight line of equation a*x + b to the points (x, y) by the
least-squares method.

Args:
    x (list): x values
    y (list): y values

Returns:
    a (float): slope of the line
    b (float): y-intercept
"""
# initialisation des sommes
x_sum = 0.
x2_sum = 0.
y_sum = 0.
xy_sum = 0.
# calcul des sommes
for xi, yi in zip(x, y):
x_sum += xi
x2_sum += xi**2
y_sum += yi
xy_sum += xi * yi
# nombre de points
npoints = len(x)
# calcul des parametres
a = (npoints * xy_sum - x_sum * y_sum) / (npoints * x2_sum - x_sum**2)
b = (x2_sum * y_sum - x_sum * xy_sum) / (npoints * x2_sum - x_sum**2)
# renvoie des parametres
return a, b
Explanation: Stage 2: Writing a function
In this function we gather the different steps needed to carry out the linear regression. The function takes as arguments the lists of $x$ and $y$ values and returns the values of the parameters $a$ and $b$. The inputs and outputs of the function should be documented in the docstring placed just below the definition.
End of explanation
a, b = regLin(x, y)
print("a = ", a)
print("b = ", b)
Explanation: Let's now use this new function.
End of explanation
# python 2.7 et superieur
print("a = {:8.3f}".format(a))
print("b = {:8.3f}".format(b))
# python 2.X
print("a = %8.3f" % a)
print("b = %8.3f" % b)
Explanation: To display floating-point numbers, a format can be used. Two syntaxes exist depending on the Python version:
End of explanation
import numpy as np
Explanation: Stage 3: Using the numpy module
We will now use the numpy module to simplify the computation of the sums. First, the numpy module must be imported. It is customary to give it np as a shortcut.
End of explanation
a = np.array([1, 2, 3])
a.sum()
Explanation: The sum of the values of an array can be obtained with the sum() method.
End of explanation
(a**2).sum()
Explanation: The sum of the squares or of the products can be obtained just as easily.
End of explanation
b = np.array([3, 2, 1])
print("a : ", a)
print("b : ", b)
print("a * b :", a * b)
Explanation: To compute the products of two numpy arrays, it is enough to multiply them:
End of explanation
(a * b).sum()
Explanation: The sum is then computed in the same way as for the square above:
End of explanation
def regLin_np(x, y):
"""Fit a straight line of equation a*x + b to the points (x, y) by the
least-squares method.

Args:
    x (list): x values
    y (list): y values

Returns:
    a (float): slope of the line
    b (float): y-intercept
"""
# conversion en array numpy
x = np.array(x)
y = np.array(y)
# nombre de points
npoints = len(x)
# calculs des parametres a et b
a = (npoints * (x*y).sum() - x.sum()*y.sum()) / (npoints*(x**2).sum() - (x.sum())**2)
b = ((x**2).sum()*y.sum() - x.sum() * (x*y).sum()) / (npoints * (x**2).sum() - (x.sum())**2)
# renvoie des parametres
return a, b
Explanation: We can now simplify the regLin function by using the numpy functions:
End of explanation
a, b = regLin_np(x, y)
print("a = {:8.3f}\nb = {:8.3f}".format(a, b)) # \n est le caractere de fin de ligne
Explanation: The new function obviously returns the same results:
End of explanation
data = np.loadtxt("data/donnees.dat")
print(data)
Explanation: Stage 4: Using the predefined methods of numpy and scipy
Numpy and Scipy are two scientific Python modules that gather a large number of functions. We will use the loadtxt method to read the text file and the polyfit and linregress methods to carry out the linear regression. The numpy module is entirely included in scipy. Given the large number of existing Python modules and libraries, it is important to know how to read documentation in order to use the available methods. Moreover, using an existing method speeds up development and avoids reinventing the wheel. To get the documentation inside ipython, add a ? after the name of the method.
Let's start by reading the donnees.dat file with the loadtxt method.
End of explanation
x = data[:,0]
print(x)
Explanation: The $x$ values correspond to the first column.
End of explanation
y = data[:,1]
print(y)
Explanation: The $y$ values correspond to the second column.
End of explanation
x, y = np.loadtxt("data/donnees.dat", unpack=True)
print(x)
print(y)
Explanation: The whole thing can be done in a single line:
End of explanation
parametres = np.polyfit(x, y, 1)
print(parametres)
Explanation: Using the polyfit method
The polyfit method of the numpy module takes as arguments the $x$ values, the $y$ values and the degree of the polynomial (1 here, since we are fitting a straight line).
End of explanation
a, b = parametres
print("a = {:8.3f}\nb = {:8.3f}".format(a, b))
Explanation: The parameters are indeed the same as the ones we determined by hand.
End of explanation
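As an aside, the coefficients returned by polyfit can be wrapped in a polynomial object, which makes it easy to evaluate the fitted line; this is just a convenience and is not required for the rest of the tutorial:
p = np.poly1d(parametres)
print(p(5.0))  # value of the fitted line at x = 5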
from scipy.stats import linregress
Explanation: Using the linregress method
The linregress method lives in the stats submodule of scipy. It takes as arguments the $x$ and $y$ values and returns, in addition to the parameters $a$ and $b$, the correlation coefficient.
Let's start by importing the linregress method.
End of explanation
a, b, r, p_value, std_err = linregress(x, y)
print("a ={:8.3f}\nb ={:8.3f}\nr^2 ={:8.5f}".format(a, b, r**2))
Explanation: Let's now use the linregress method.
End of explanation
%matplotlib inline
Explanation: Graphical representation
Python makes it possible to produce a graphical representation using the matplotlib module. Here is an example of using this module to plot the points $(x, y)$ and the regression line determined above.
For use inside ipython, the environment must first be set up to integrate matplotlib.
End of explanation
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 8, 8 # ajuste la taille des figures
Explanation: Let's load the matplotlib module.
End of explanation
plt.plot(x, y, "bo", label="donnees") # les points (x, y) representes par des points
plt.plot( # droite de regression
[x[0], x[-1]], # valeurs de x
[a * x[0] + b, a * x[-1] + b], # valeurs de y
"r-", # couleur rouge avec un trait continu
label="regression") # legende
plt.xlabel("x") # nom de l'axe x
plt.ylabel("y") # nom de l'axe y
plt.xlim(0, 11) # échelle axe x
plt.legend() # la legende
plt.title("Regression Lineaire") # titre de graphique
Explanation: Let's now prepare the plot.
End of explanation
x, y = np.loadtxt("data/regLinData.dat", unpack=True)
Explanation: Going further: the residuals
We will now use the methods seen above on a larger sample. In addition, we will plot the residuals, which correspond to the difference between the data points and the line obtained by the linear regression.
Let's start by reading the data from the regLinData.dat file
End of explanation
a, b, r, p_vall, std_err = linregress(x, y)
print("a = ", a, " b = ", b, " r^2 = ", r**2)
Explanation: Let's now use the linregress function to find the parameters of the linear regression:
End of explanation
# graphique 1: les données et la droite de régression
plt.subplot(2, 1, 1)
plt.plot(x, y, "bo", label="data")
plt.plot(x, a * x + b, "r-", label="regression")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Regression linéaire avec résidus")
plt.legend(loc="lower right")
# graphique 2: les résidus
plt.subplot(2, 1, 2)
plt.plot(x, y - (a * x + b), "g-", label="residus")
plt.xlabel("x")
plt.ylabel("Résidus")
plt.legend(loc="upper center")
Explanation: We will now display the points and the regression line in a first plot, then the residuals in a second plot. To do so, we use matplotlib's subplot method, which places plots on a grid. The three arguments of subplot are the number of rows, the number of columns and the plot number.
End of explanation
p0, p1, p2 = np.polyfit(x, y, 2)
print("p0 = ", p0, " p1 = ", p1, " p2 = ", p2)
Explanation: The residuals are a good indicator of the quality of the model. Here, the residuals are said to be 'structured': they are not randomly distributed around zero, they show a parabolic trend. This structure of the residuals indicates that an affine (straight-line) function is not suited to represent the data.
Let's use the polyfit method to fit a parabola:
End of explanation
# graphique 1: les données et la droite de régression
plt.subplot(2, 1, 1)
plt.plot(x, y, "bo", label="data")
plt.plot(x, p0*x**2 + p1*x + p2, "r-", label="parabolle")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Regression linéaire avec résidus")
plt.legend(loc="lower right")
# graphique 2: les résidus
plt.subplot(2, 1, 2)
plt.plot(x, y - (p0*x**2 + p1*x + p2), "g-", label="residus")
plt.xlabel("x")
plt.ylabel("Résidus")
plt.legend(loc="upper center")
Explanation: Let's reuse the previous lines to plot the data.
End of explanation
R2 = ((p0*x**2 + p1*x + p2 - y.mean())**2).sum() / ((y - y.mean())**2).sum()
print(R2)
Explanation: This time, the residuals are indeed randomly distributed around zero. Let's compute the coefficient of determination as
\begin{equation}
R^2 = \frac{\sum_k (y^{calc}_k - \overline{y})^2}{\sum_k (y_k - \overline{y})^2}
\end{equation}
where the $y_k$ are the data, the $y^{calc}$ are the values computed with the polynomial and $\overline{y}$ is the mean of the $y_k$ values.
End of explanation |
15,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Label and feature engineering
This lab is optional. It demonstrates advanced SQL queries for time-series engineering. For real-world problems, this type of feature engineering code is essential. If you are pursuing a time-series project for open project week, feel free to use this code as a template.
Learning objectives
Step1: Create time-series features and determine label based on market movement
Summary of base tables
TODO
Step2: Label engineering
Ultimately, we need to end up with a single label for each day. The label takes on 3 values
Step3: TODO
Step5: Add time series features
<h3><font color="#4885ed">Compute price features using analytics functions</font> </h3>
In addition, we will also build time-series features using the min, max, mean, and std (can you think of any over functions to use?). To do this, let's use analytic functions in BigQuery (also known as window functions).
An analytic function is a function that computes aggregate values over a group of rows. Unlike aggregate functions, which return a single aggregate value for a group of rows, analytic functions return a single value for each row by computing the function over a group of input rows.
Using the AVG analytic function, we can compute the average close price of a given symbol over the past week (5 business days)
Step6: Compute percentage change, then self join with prices from S&P index.
We will also compute price change of S&P index, GSPC. We do this so we can compute the normalized percentage change.
<h3><font color="#4885ed">Compute normalized price change (%)</font> </h3>
Before we can create our labels we need to normalize the price change using the S&P 500 index. The normalization using the S&P index fund helps ensure that the future price of a stock is not due to larger market effects. Normalization helps us isolate the factors contributing to the performance of a stock_market.
Let's use the normalization scheme from by subtracting the scaled difference in the S&P 500 index during the same time period.
In Python
Step7: Compute normalized price change (shown above).
Let's join scaled price change (tomorrow_close / close) with the gspc symbol (symbol for the S&P index). Then we can normalize using the scheme described above.
Learning objective 3
TODO
Step8: Verify results
Step9: <h3><font color="#4885ed">Join with S&P 500 table and Create labels
Step10: TODO
Step11: The dataset is still quite large and the majority of the days the market STAYs. Let's focus our analysis on dates where earnings per share (EPS) information is released by the companies. The EPS data has 3 key columns surprise, reported_EPS, and consensus_EPS
Step12: The surprise column indicates the difference between the expected (consensus expected eps by analysts) and the reported eps. We can join this table with our derived table to focus our analysis during earnings periods
Step14: Feature exploration
Now that we have created our recent movements of the company’s stock price, let's visualize our features. This will help us understand the data better and possibly spot errors we may have made during our calculations.
As a reminder, we calculated the scaled prices 1 week, 1 month, and 1 year before the date that we are predicting at.
Let's write a re-usable function for aggregating our features.
Learning objective 2
Step15: TODO Use the get_aggregate_stats from above to visualize the normalized_change column.
Step16: Let's look at results by day-of-week, month, etc. | Python Code:
PROJECT = 'your-gcp-project' # Replace with your project ID.
import pandas as pd
from google.cloud import bigquery
from IPython.core.magic import register_cell_magic
from IPython import get_ipython
bq = bigquery.Client(project = PROJECT)
# Allow you to easily have Python variables in SQL query.
@register_cell_magic('with_globals')
def with_globals(line, cell):
contents = cell.format(**globals())
if 'print' in line:
print(contents)
get_ipython().run_cell(contents)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # Will fail if dataset already exists.
print("Dataset created")
except:
print("Dataset already exists")
create_dataset()
Explanation: Label and feature engineering
This lab is optional. It demonstrates advanced SQL queries for time-series engineering. For real-world problems, this type of feature engineering code is essential. If you are pursuing a time-series project for open project week, feel free to use this code as a template.
Learning objectives:
Learn how to use BigQuery to build time-series features and labels for forecasting
Learn how to visualize and explore features.
Learn effective scaling and normalizing techniques to improve our modeling results
Now that we have explored the data, let's start building our features, so we can build a model.
<h3><font color="#4885ed">Feature Engineering</font> </h3>
Use the price_history table, we can look at past performance of a given stock, to try to predict it's future stock price. In this notebook we will be focused on cleaning and creating features from this table.
There are typically two different approaches to creating features with time-series data.
One approach is aggregate the time-series into "static" features, such as "min_price_over_past_month" or "exp_moving_avg_past_30_days". Using this approach, we can use a deep neural network or a more "traditional" ML model to train. Notice we have essentially removed all sequention information after aggregating. This assumption can work well in practice.
A second approach is to preserve the ordered nature of the data and use a sequential model, such as a recurrent neural network. This approach has a nice benefit that is typically requires less feature engineering. Although, training sequentially models typically takes longer.
In this notebook, we will build features and also create rolling windows of the ordered time-series data.
<h3><font color="#4885ed">Label Engineering</font> </h3>
We are trying to predict if the stock will go up or down. In order to do this we will need to "engineer" our label by looking into the future and using that as the label. We will be using the LAG function in BigQuery to do this. Visually this looks like:
Import libraries; setup
End of explanation
%%with_globals
%%bigquery --project {PROJECT}
--# TODO
%%with_globals
%%bigquery --project {PROJECT}
--# TODO
Explanation: Create time-series features and determine label based on market movement
Summary of base tables
TODO: How many rows are in our base tables price_history and snp500?
End of explanation
%%with_globals print
%%bigquery --project {PROJECT} df
CREATE OR REPLACE TABLE `{PROJECT}.stock_market.price_history_delta`
AS
(
WITH shifted_price AS
(
SELECT *,
(LAG(close, 1) OVER (PARTITION BY symbol order by Date DESC)) AS tomorrow_close
FROM `asl-ml-immersion.stock_src.price_history`
WHERE Close > 0
)
SELECT a.*,
(tomorrow_close - Close) AS tomo_close_m_close
FROM shifted_price a
)
%%with_globals
%%bigquery --project {PROJECT}
SELECT *
FROM stock_market.price_history_delta
ORDER by Date
LIMIT 100
Explanation: Label engineering
Ultimately, we need to end up with a single label for each day. The label takes on 3 values: {down, stay, up}, where down and up indicates the normalized price (more on this below) went down 1% or more and up 1% or more, respectively. stay indicates the stock remained within 1%.
The steps are:
Compare close price and open price
Compute price features using analytics functions
Compute normalized price change (%)
Join with S&P 500 table
Create labels (up, down, stay)
<h3><font color="#4885ed">Compare close price and open price</font> </h3>
For each row, get the close price of yesterday and the open price of tomorrow using the LAG function. We will determine tomorrow's close - today's close.
Shift to get tomorrow's close price.
Learning objective 1
End of explanation
%%with_globals print
%%bigquery --project {PROJECT}
SELECT
--# TODO: verify the stock market is going up -- on average.
FROM
stock_market.price_history_delta
Explanation: TODO: Historically, we know that the stock market has been going up. Can you think of a way to verify this using our newly created table price_history_delta?
Learning objective 2
End of explanation
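A minimal sketch of one way to complete the TODO above: since tomo_close_m_close is tomorrow's close minus today's close, a positive average (or an average close ratio above 1) is consistent with the market drifting up on average.
%%with_globals
%%bigquery --project {PROJECT}
SELECT AVG(tomo_close_m_close) AS avg_daily_change,
AVG(tomorrow_close / Close) AS avg_close_ratio
FROM stock_market.price_history_delta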
def get_window_fxn(agg_fxn, n_days):
"""Generate a time-series feature.

E.g., compute the average of the price over the past 5 days.
"""
SCALE_VALUE = 'close'
sql = '''
({agg_fxn}(close) OVER (PARTITION BY (# TODO)
ORDER BY (# TODO)
ROWS BETWEEN {n_days} (# TODO)))/{scale}
AS close_{agg_fxn}_prior_{n_days}_days'''.format(
agg_fxn=agg_fxn, n_days=n_days, scale=SCALE_VALUE)
return sql
WEEK = 5
MONTH = 20
YEAR = 52*5
agg_funcs = ('MIN', 'MAX', 'AVG', 'STDDEV')
lookbacks = (WEEK, MONTH, YEAR)
sqls = []
for fxn in agg_funcs:
for lookback in lookbacks:
sqls.append(get_window_fxn(fxn, lookback))
time_series_features_sql = ','.join(sqls) # SQL string.
def preview_query():
print(time_series_features_sql[0:1000])
preview_query()
%%with_globals print
%%bigquery --project {PROJECT}
CREATE OR REPLACE TABLE stock_market.price_features_delta
AS
SELECT *
FROM
(SELECT *,
{time_series_features_sql},
-- Also get the raw time-series values; will be useful for the RNN model.
(ARRAY_AGG(close) OVER (PARTITION BY symbol
ORDER BY Date
ROWS BETWEEN 260 PRECEDING AND 1 PRECEDING))
AS close_values_prior_260,
ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY Date) AS days_on_market
FROM stock_market.price_history_delta)
WHERE days_on_market > {YEAR}
%%bigquery --project {PROJECT}
SELECT *
FROM stock_market.price_features_delta
ORDER BY symbol, Date
LIMIT 10
Explanation: Add time series features
<h3><font color="#4885ed">Compute price features using analytics functions</font> </h3>
In addition, we will also build time-series features using the min, max, mean, and std (can you think of any over functions to use?). To do this, let's use analytic functions in BigQuery (also known as window functions).
An analytic function is a function that computes aggregate values over a group of rows. Unlike aggregate functions, which return a single aggregate value for a group of rows, analytic functions return a single value for each row by computing the function over a group of input rows.
Using the AVG analytic function, we can compute the average close price of a given symbol over the past week (5 business days):
python
(AVG(close) OVER (PARTITION BY symbol
ORDER BY Date
ROWS BETWEEN 5 PRECEDING AND 1 PRECEDING)) / close
AS close_avg_prior_5_days
Learning objective 1
TODO: Please fill in the # TODOs in the below query
End of explanation
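For reference, here is a sketch of how the # TODO placeholders inside get_window_fxn could be completed, following the AVG example above (partition by symbol, order by date, and look back n_days business days while excluding the current row):
sql = '''
    ({agg_fxn}(close) OVER (PARTITION BY symbol
                            ORDER BY Date
                            ROWS BETWEEN {n_days} PRECEDING AND 1 PRECEDING))/{scale}
                            AS close_{agg_fxn}_prior_{n_days}_days'''.format(
    agg_fxn=agg_fxn, n_days=n_days, scale=SCALE_VALUE)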
scaled_change = (50.59 - 50.69) / 50.69
scaled_s_p = (939.38 - 930.09) / 930.09
normalized_change = scaled_change - scaled_s_p
print('''
scaled change: {:2.3f}
scaled_s_p: {:2.3f}
normalized_change: {:2.3f}
'''.format(scaled_change, scaled_s_p, normalized_change))
Explanation: Compute percentage change, then self join with prices from S&P index.
We will also compute price change of S&P index, GSPC. We do this so we can compute the normalized percentage change.
<h3><font color="#4885ed">Compute normalized price change (%)</font> </h3>
Before we can create our labels we need to normalize the price change using the S&P 500 index. Normalizing against the S&P index fund helps ensure that the future price move of a stock is not simply due to broader market effects, and lets us isolate the factors contributing to the performance of the individual stock.
Let's use a normalization scheme that subtracts the scaled change in the S&P 500 index over the same time period.
In Python:
```python
# Example calculation.
scaled_change = (50.59 - 50.69) / 50.69
scaled_s_p = (939.38 - 930.09) / 930.09
normalized_change = scaled_change - scaled_s_p
assert round(normalized_change, 3) == -0.012  # about a -1.2% normalized change
```
End of explanation
snp500_index = 'gspc'
%%with_globals print
%%bigquery --project {PROJECT}
CREATE OR REPLACE TABLE stock_market.price_features_norm_per_change
AS
WITH
all_percent_changes AS
(
SELECT *, (tomo_close_m_close / Close) AS scaled_change
FROM `{PROJECT}.stock_market.price_features_delta`
),
s_p_changes AS
(SELECT
scaled_change AS s_p_scaled_change,
date
FROM all_percent_changes
WHERE symbol="{snp500_index}")
SELECT all_percent_changes.*,
s_p_scaled_change,
(# TODO) AS normalized_change
FROM
all_percent_changes LEFT JOIN s_p_changes
--# Add S&P change to all rows
ON all_percent_changes.date = s_p_changes.date
Explanation: Compute normalized price change (shown above).
Let's join the scaled price change ((tomorrow_close - close) / close) with the gspc symbol (the symbol for the S&P index). Then we can normalize using the scheme described above.
Learning objective 3
TODO: Please fill in the # TODO in the code below.
End of explanation
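A sketch of the missing expression in the query above, applying the normalization scheme from the Python example (stock change minus the S&P change over the same period):
(scaled_change - s_p_scaled_change) AS normalized_change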
%%with_globals print
%%bigquery --project {PROJECT} df
SELECT *
FROM stock_market.price_features_norm_per_change
LIMIT 10
df.head()
Explanation: Verify results
End of explanation
down_thresh = -0.01
up_thresh = 0.01
Explanation: <h3><font color="#4885ed">Join with S&P 500 table and Create labels: {`up`, `down`, `stay`}</font> </h3>
Join the table with the list of S&P 500. This will allow us to limit our analysis to S&P 500 companies only.
Finally we can create labels. The following SQL statement should do:
sql
CASE WHEN normalized_change < -0.01 THEN 'DOWN'
WHEN normalized_change > 0.01 THEN 'UP'
ELSE 'STAY'
END
Learning objective 1
End of explanation
%%with_globals print
%%bigquery --project {PROJECT} df
CREATE OR REPLACE TABLE stock_market.percent_change_sp500
AS
SELECT *,
CASE
--# TODO
END AS direction
FROM stock_market.price_features_norm_per_change features
INNER JOIN `asl-ml-immersion.stock_src.snp500`
USING (symbol)
%%with_globals print
%%bigquery --project {PROJECT}
SELECT direction, COUNT(*) as cnt
FROM stock_market.percent_change_sp500
GROUP BY direction
%%with_globals print
%%bigquery --project {PROJECT} df
SELECT *
FROM stock_market.percent_change_sp500
LIMIT 20
df.columns
Explanation: TODO: Please fill in the CASE function below.
End of explanation
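One way to complete the CASE in the query above is to splice in the thresholds defined earlier through the with_globals magic:
CASE
WHEN normalized_change < {down_thresh} THEN 'DOWN'
WHEN normalized_change > {up_thresh} THEN 'UP'
ELSE 'STAY'
END AS direction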
%%with_globals print
%%bigquery --project {PROJECT}
SELECT *
FROM `asl-ml-immersion.stock_src.eps`
LIMIT 10
Explanation: The dataset is still quite large and the majority of the days the market STAYs. Let's focus our analysis on dates where earnings per share (EPS) information is released by the companies. The EPS data has 3 key columns surprise, reported_EPS, and consensus_EPS:
End of explanation
%%with_globals print
%%bigquery --project {PROJECT}
CREATE OR REPLACE TABLE stock_market.eps_percent_change_sp500
AS
SELECT a.*, b.consensus_EPS, b.reported_EPS, b.surprise
FROM stock_market.percent_change_sp500 a
INNER JOIN `asl-ml-immersion.stock_src.eps` b
ON a.Date = b.date
AND a.symbol = b.symbol
%%with_globals print
%%bigquery --project {PROJECT}
SELECT *
FROM stock_market.eps_percent_change_sp500
LIMIT 20
%%with_globals print
%%bigquery --project {PROJECT}
SELECT direction, COUNT(*) as cnt
FROM stock_market.eps_percent_change_sp500
GROUP BY direction
Explanation: The surprise column indicates the difference between the expected (consensus expected eps by analysts) and the reported eps. We can join this table with our derived table to focus our analysis during earnings periods:
End of explanation
def get_aggregate_stats(field, round_digit=2):
"""Run SELECT ... GROUP BY field, rounding field to round_digit digits."""
df = bq.query('''
SELECT {field}, COUNT(*) as cnt
FROM
(SELECT ROUND({field}, {round_digit}) AS {field}
FROM stock_market.eps_percent_change_sp500) rounded_field
GROUP BY {field}
ORDER BY {field}'''.format(field=field,
round_digit=round_digit,
PROJECT=PROJECT)).to_dataframe()
return df.dropna()
field = 'close_AVG_prior_260_days'
CLIP_MIN, CLIP_MAX = 0.1, 4.
df = get_aggregate_stats(field)
values = df[field].clip(CLIP_MIN, CLIP_MAX)
counts = 100 * df['cnt'] / df['cnt'].sum() # Percentage.
ax = values.hist(weights=counts, bins=30, figsize=(10, 5))
ax.set(xlabel=field, ylabel="%");
Explanation: Feature exploration
Now that we have created features describing the recent movements of the company's stock price, let's visualize them. This will help us understand the data better and possibly spot errors we may have made during our calculations.
As a reminder, we calculated the scaled prices 1 week, 1 month, and 1 year before the date that we are predicting at.
Let's write a re-usable function for aggregating our features.
Learning objective 2
End of explanation
field = 'normalized_change'
# TODO
Explanation: TODO Use the get_aggregate_stats from above to visualize the normalized_change column.
End of explanation
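A possible completion of the TODO above, mirroring the close_AVG_prior_260_days example; the clip bounds are arbitrary choices to keep extreme daily moves from stretching the x-axis:
df = get_aggregate_stats(field)
values = df[field].clip(-0.1, 0.1)
counts = 100 * df['cnt'] / df['cnt'].sum()  # Percentage.
ax = values.hist(weights=counts, bins=30, figsize=(10, 5))
ax.set(xlabel=field, ylabel="%");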
VALID_GROUPBY_KEYS = ('DAYOFWEEK', 'DAY', 'DAYOFYEAR',
'WEEK', 'MONTH', 'QUARTER', 'YEAR')
DOW_MAPPING = {1: 'Sun', 2: 'Mon', 3: 'Tues', 4: 'Wed',
5: 'Thur', 6: 'Fri', 7: 'Sat'}  # BigQuery DAYOFWEEK: 1 = Sunday ... 7 = Saturday
def groupby_datetime(groupby_key, field):
if groupby_key not in VALID_GROUPBY_KEYS:
raise Exception('Please use a valid groupby_key.')
sql = '''
SELECT {groupby_key}, AVG({field}) as avg_{field}
FROM
(SELECT {field},
EXTRACT({groupby_key} FROM date) AS {groupby_key}
FROM stock_market.eps_percent_change_sp500) foo
GROUP BY {groupby_key}
ORDER BY {groupby_key} DESC'''.format(groupby_key=groupby_key,
field=field,
PROJECT=PROJECT)
print(sql)
df = bq.query(sql).to_dataframe()
if groupby_key == 'DAYOFWEEK':
df.DAYOFWEEK = df.DAYOFWEEK.map(DOW_MAPPING)
return df.set_index(groupby_key).dropna()
field = 'normalized_change'
df = groupby_datetime('DAYOFWEEK', field)
ax = df.plot(kind='barh', color='orange', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'close'
df = groupby_datetime('DAYOFWEEK', field)
ax = df.plot(kind='barh', color='orange', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'normalized_change'
df = groupby_datetime('MONTH', field)
ax = df.plot(kind='barh', color='blue', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'normalized_change'
df = groupby_datetime('QUARTER', field)
ax = df.plot(kind='barh', color='green', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'close'
df = groupby_datetime('YEAR', field)
ax = df.plot(kind='line', color='purple', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
field = 'normalized_change'
df = groupby_datetime('YEAR', field)
ax = df.plot(kind='line', color='purple', alpha=0.7)
ax.grid(which='major', axis='y', linewidth=0)
Explanation: Let's look at results by day-of-week, month, etc.
End of explanation |
15,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Random Forest
Contest entry by
Step1: Exploring the dataset
First, we will examine the data set we will use to train the classifier.
Step2: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector. Validation (test) data (830 examples from two wells) have the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are
Step3: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
Step4: We build a function to create new variables representing the absolute distance of a deposit to an upper marine context (younger) and one for the distance to a deeper marine context (older)
Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, current strength, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to another environment. The distance to a marine context therefore carries information on the depositional environment of the samples neighbouring the predicted sample, and thus improves the classification.
This new variable illustrates our approach, which favours combining machine learning algorithms with feature engineering adapted to the target variables.
Step5: Encoding Formation labels
The nature of the 'Formation' variable is not entirely clear, and we do not know how it has been obtained. In this Notebook, we include it in the predictor variables. However, if 'Formation' appears to be in part derived from 'Facies', it should be removed from the predictor variables.
Formation labels are encoded as integer values.
Step6: Let's see how well NM_M variable discriminates facies
Step7: Now, since the NM_M variable discriminates the facies fairly well, we assume that facies 1-3 (SS to FSiS) are non-marine and facies 4-9 (SiSh to BS) are marine.
Step8: Instead of dropping samples with a NaN value for the PE variable, we decided to replace NaN with an 'out of range' value (i.e. -99999). This allows the classifier to take all the samples into account despite missing values for the PE variable.
Step9: We drop unused variables for marine and non-marine dataset (we prepare the list here)
Non-marine facies are dominated by clastic sedimentary rocks, and marine facies by carbonate sedimentary rocks. The best predictive variables for each group differ. Thus, marine and non-marine rocks are classified separately here, and a different set of predictive variables is chosen for each group.
**update
Step10: **Update
Step11: Applying the classification model to new data
Step12: Create new variables for the test dataset
we compute the absolute distance of a deposit to an upper marine context (younger) and one for the distance to a deeper marine context (older)
we encode the formation category
Step13: We train our classifier on the entire available dataset (10 wells), keeping the separation between marine and non-marine sediments
Step14: Train and making prediction for the the well test | Python Code:
%matplotlib inline
# to install watermark magic command: pip install ipyext
%load_ext watermark
%watermark -v -p numpy,scipy,pandas,matplotlib,seaborn,sklearn
Explanation: Facies classification using Random Forest
Contest entry by :geoLEARN
Martin Blouin, Lorenzo Perozzi, Antoine Caté <br>
in collaboration with Erwan Gloaguen
Original contest notebook by Brendon Hall, Enthought
In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a Random Forest model to classify facies types.
Original approach
We focus on feature engineering rather than on algorithm optimization.
The prediction approach has been split into two steps: 1) predict the non-marine facies; 2) predict the marine facies.
Validation scoring has been computed after the prediction results of the two approaches have been stacked together.
N.B. We make extensive use of an interpreted variable (NM_M). We are well aware that this may not be representative of a real machine learning exercise, where you don't have access to interpreted information. But for now, since we have access to it, why not use it. Moreover, we could easily adapt our approach into a two-step prediction where we would first discriminate marine from non-marine sediments and then classify each facies.
Print the versions of the used packages
End of explanation
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from sklearn import preprocessing
from pandas import set_option
set_option("display.max_rows",10)
pd.options.mode.chained_assignment = None
# turn off ipython warnings
import warnings
warnings.filterwarnings('ignore')
filename = '../facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.describe()
Explanation: Exploring the dataset
First, we will examine the data set we will use to train the classifier.
End of explanation
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector. Validation (test) data (830 examples from two wells) have the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
training_data.describe()
Explanation: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
End of explanation
def make_dist_mar_vars(wells_df):
grouped = wells_df.groupby(['Well Name'])
new_df = pd.DataFrame()
for key in grouped.groups.keys():
NM_M = grouped.get_group(key)['NM_M'].values
#We create a temporary dataframe that we reset for every well
temp_df = pd.DataFrame()
temp_df['Depth'] = grouped.get_group(key)['Depth']
temp_df['Well Name'] = [key for _ in range(len(NM_M))]
#We initialize new variables
dist_mar_up = np.zeros(len(NM_M))
dist_mar_down = np.zeros(len(NM_M))
# One variable counts the interval from the marine deposit above, and one from the marine deposit below
# We initialize them to -99999 since we do not know what's above the first log
count = -99999
# We build them in two separate loops
for i in range(len(NM_M)):
if ((NM_M[i] == 1) & (count>-99999)):
count+=0.5
dist_mar_up[i] += count
elif NM_M[i] == 2:
count=0
else:
dist_mar_up[i] = count
#********************************************#
#we reset count
count = -99999
for i in range(len(NM_M)-1,-1,-1):
if ((NM_M[i] == 1) & (count>-99999)):
count+=0.5
dist_mar_down[i] += count
elif NM_M[i] == 2:
count=0
else:
dist_mar_down[i] = count
#********************************************#
temp_df['dist_mar_up'] = dist_mar_up
temp_df['dist_mar_down'] = dist_mar_down
# We append each well variable to a larger dataframe
# We use a dataframe to preserve the index
new_df = new_df.append(temp_df)
new_df = new_df.sort_index()
new_df = new_df.drop(['Well Name','Depth'],axis=1)
# We don't use merge, as it creates duplicates (for reasons we did not investigate) that we would later have to drop
return pd.concat([wells_df,new_df],axis=1)
training_data = make_dist_mar_vars(training_data)
training_data.head()
Explanation: We build a function to create new variables representing the absolute distance of a deposit to an upper marine context (younger) and one for the distance to a deeper marine context (older)
Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, current strength, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to another environment. The distance to a marine context introduces information on the depositional environment of samples neighbouring the predicted sample and thus improves the classification.
This new variable illustrates our approach, which favors the combination of machine learning algorithms with feature engineering adapted to the target variables.
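To make the behaviour concrete, here is a small check on a toy well (illustrative only; the expected values follow directly from the two loops above, with the 0.5 increment reflecting the half-foot sampling):
toy = pd.DataFrame({'Well Name': ['toy'] * 6,
                    'Depth': 2793.0 + 0.5 * np.arange(6),
                    'NM_M': [1, 1, 2, 1, 1, 1]})
make_dist_mar_vars(toy)[['NM_M', 'dist_mar_up', 'dist_mar_down']]
# expected: dist_mar_up   = [-99999, -99999, 0.0, 0.5, 1.0, 1.5]
#           dist_mar_down = [1.0, 0.5, 0.0, -99999, -99999, -99999]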
End of explanation
LE = preprocessing.LabelEncoder()
training_data['Formation_category'] = LE.fit_transform(training_data.Formation)
Explanation: Encoding Formation labels
The nature of the 'Formation' variable is not entirely clear, and we do not know how it has been obtained. In this Notebook, we include it in the predictor variables. However, if 'Formation' appears to be in part derived from 'Facies', it should be removed from the predictor variables.
Formation labels are encoded as integer values.
End of explanation
import seaborn as sns
training_data['NM_M_label'] = training_data['NM_M'].apply(lambda x: 'non_marine' if x == 1 else 'marine')
plt.figure(figsize=(8,6))
sns.countplot(x="FaciesLabels", hue="NM_M_label",
order=['SS', 'CSiS', 'FSiS', 'SiSh','MS','WS','D','PS','BS'],
data=training_data, palette = "RdBu");
Explanation: Let's see how well NM_M variable discriminates facies
End of explanation
training_data.head()
training_data = training_data.drop(['NM_M_label','FaciesLabels'], axis=1)
NM_training_data = training_data[training_data['NM_M'] == 1]
M_training_data = training_data[training_data['NM_M'] == 2]
Explanation: Now, as the NM_M variable discriminates the facies fairly well, we assume that facies 1-3 (SS to FSiS) are non-marine and facies 4-9 (SiSh to BS) are marine.
End of explanation
NM_training_data.replace(to_replace=np.nan,value=-99999,inplace=True)
M_training_data.replace(to_replace=np.nan,value=-99999,inplace=True)
Explanation: Instead of dropping samples with a NaN value for the PE variable, we decided to replace NaN with an 'out of range' value (i.e., -99999). This allows the classifier to take into account all the samples despite missing values for the PE variable.
End of explanation
nm_drop_list = ['Formation', 'Well Name','Facies','NM_M']
m_drop_list = ['Formation', 'Well Name','Facies','dist_mar_down','dist_mar_up','NM_M']
Explanation: We drop unused variables for the marine and non-marine datasets (we prepare the lists here)
Non-marine facies are dominated by clastic sedimentary rocks, and marine facies by carbonate sedimentary rocks. The best predictive variables for each group differ. Thus, marine and non-marine rocks are classified separately here, and a different set of predictive variables is chosen for each group.
Update: even if dropping less important variables increases our cross-validation F1-score with the "leave one well out" strategy, the non-stationarity in the dataset makes it dangerous to ignore them, so we will accept a lower CV score here.
End of explanation
from sklearn import ensemble
from sklearn import metrics
clf = ensemble.RandomForestClassifier(n_estimators=100,n_jobs=-1,min_samples_leaf=10,random_state=42)
names = list(np.unique(training_data['Well Name']))
nm_grouped = NM_training_data.groupby(['Well Name'])
m_grouped = M_training_data.groupby(['Well Name'])
new_df = pd.DataFrame()
scores = []
for name in names:
temp_df = pd.DataFrame()
# We need to isolate Recruit F9 since it has only marine facies
if name == 'Recruit F9':
#Build list of well names and remove blind well (and remove recruit F9 for NM)
m_train_names = names.copy()
m_train_names.remove(name)
# Do it for marine sediments
m_test = m_grouped.get_group(name)
m_test_depth = m_test.Depth
m_X_test = m_test.drop(m_drop_list, axis=1).values
y_test = m_test['Facies'].values
m_train = pd.DataFrame()
for train_name in m_train_names:
m_train = m_train.append(m_grouped.get_group(train_name))
id_train_M = m_train.Facies >= 4
m_train = m_train [id_train_M]
m_X_train = m_train.drop(m_drop_list, axis=1).values
m_y_train = m_train['Facies'].values
#Then we do random forest classification and prediction
clf.fit(m_X_train, m_y_train)
y_pred = clf.predict(m_X_test)
else:
#Build list of well names and remove blind well (and remove recruit F9 for NM)
m_train_names = names.copy()
m_train_names.remove(name)
nm_train_names = m_train_names.copy()
nm_train_names.remove('Recruit F9')
# Do it for non-marine sediments
nm_test = nm_grouped.get_group(name)
nm_test_depth = nm_test.Depth
nm_X_test = nm_test.drop(nm_drop_list, axis=1).values
nm_y_test = nm_test['Facies'].values
nm_train = pd.DataFrame()
for train_name in nm_train_names:
nm_train = nm_train.append(nm_grouped.get_group(train_name))
id_train_NM = nm_train.Facies <= 3
nm_train = nm_train [id_train_NM]
nm_X_train = nm_train.drop(nm_drop_list, axis=1).values
nm_y_train = nm_train['Facies'].values
#Then we do random forest classification and prediction
clf.fit(nm_X_train, nm_y_train)
nm_y_pred = clf.predict(nm_X_test)
print(clf.feature_importances_)
#*********************************************************************#
# Do it for marine sediments
m_test = m_grouped.get_group(name)
m_test_depth = m_test.Depth
m_X_test = m_test.drop(m_drop_list, axis=1).values
m_y_test = m_test['Facies'].values
m_train = pd.DataFrame()
for train_name in m_train_names:
m_train = m_train.append(m_grouped.get_group(train_name))
id_train_M = m_train.Facies >= 4
m_train = m_train [id_train_M]
m_X_train = m_train.drop(m_drop_list, axis=1).values
m_y_train = m_train['Facies'].values
#Then we do random forest classification and prediction
clf.fit(m_X_train, m_y_train)
m_y_pred = clf.predict(m_X_test)
print(clf.feature_importances_)
#================================================================#
# combine results
#================================================================#
y_test = np.hstack((nm_y_test,m_y_test))
y_pred = np.hstack((nm_y_pred,m_y_pred))
#Scoring
conf_mat = metrics.confusion_matrix(y_test,y_pred)
print(conf_mat)
try:
score = metrics.f1_score(y_test, y_pred,average='weighted')
except:
#this exception is for Recruit F9
score = metrics.f1_score(y_test, y_pred)
scores.append(score)
print('********')
print('Blind well is {0}, F1 score : {1:.4%}\n'.format(name,score))
if name == 'Recruit F9':
depth = m_test_depth
else:
depth = np.hstack((nm_test_depth,m_test_depth))
idx = np.argsort(depth)
temp_df['Depth'] = depth[idx]
temp_df['True Facies'] = y_test[idx]
temp_df['Predicted Facies'] = y_pred[idx]
temp_df['Well Name'] = [name for _ in range(len(depth))]
new_df = new_df.append(temp_df)
print("="*30)
print('*********** RESULT ***********')
print("="*30)
print('\nAverage F1-score is {:.4%}'.format(np.mean(scores)))
Explanation: Update: Using smaller trees
Using fully expanded trees in the initial submission helped us achieve a high average F1-score for the "one by one" well prediction of the training set. Here we use smaller trees (min_samples_leaf=10) to reduce variance and avoid overfitting to our training wells.
End of explanation
filename = '../validation_data_nofacies.csv'
test_data = pd.read_csv(filename)
test_data
Explanation: Applying the classification model to new data
End of explanation
# absolute distance
test_data = make_dist_mar_vars(test_data)
# formation category encoding
test_data['Formation_category'] = LE.fit_transform(test_data.Formation)
Explanation: Create new variables for the test dataset
we compute the absolute distance of a deposit to an upper marine context (younger) and one for the distance to a deeper marine context (older)
we encode the formation category
End of explanation
# nm = non-marine and m = marine
nm_drop_list.remove('Facies')
m_drop_list.remove('Facies')
Explanation: We train our classifier on the entire available dataset (10 wells), keeping the separation between marine and non-marine sediments
End of explanation
grouped = test_data.groupby('Well Name')
new_df = pd.DataFrame()
for name in grouped.groups.keys():
temp_df = pd.DataFrame()
temp_df['Depth'] = grouped.get_group(name)['Depth']
nm_test = grouped.get_group(name)[test_data['NM_M'] == 1]
m_test = grouped.get_group(name)[test_data['NM_M'] == 2]
# Do it for non-marine sediments
nm_test_depth = nm_test.Depth
nm_X_test = nm_test.drop(nm_drop_list, axis=1).values
id_train_NM = NM_training_data.Facies <= 3
nm_train = NM_training_data[id_train_NM]
nm_X_train = nm_train.drop(nm_drop_list, axis=1)
nm_X_train = nm_X_train.drop('Facies', axis=1).values
nm_y_train = nm_train['Facies'].values
#Then we do random forest classification
clf.fit(nm_X_train, nm_y_train)
nm_y_pred = clf.predict(nm_X_test)
#*********************************************************************#
# Do it for marine sediments
m_test_depth = m_test.Depth
m_X_test = m_test.drop(m_drop_list, axis=1).values
id_train_M = M_training_data.Facies >= 4
m_train = M_training_data[id_train_M]
m_X_train = m_train.drop('Facies', axis=1)
m_X_train = m_X_train.drop(m_drop_list,axis=1).values
m_y_train = m_train['Facies'].values
#Then we do random forest classification
clf.fit(m_X_train, m_y_train)
m_y_pred = clf.predict(m_X_test)
#================================================================#
# combine results
#================================================================#
y_pred = np.hstack((nm_y_pred,m_y_pred))
depth = np.hstack((nm_test_depth,m_test_depth))
idx = np.argsort(depth)
temp_df['Predicted Facies'] = y_pred[idx]
temp_df['Well Name'] = [name for _ in range(len(depth))]
new_df = new_df.append(temp_df)
new_df = new_df.sort_index()
new_df.to_pickle('Prediction_blind_wells_RF_b.pkl')
new_df
Explanation: Train the classifiers and make predictions for the test wells
End of explanation |
15,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Analysis
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: This chapter presents a problem inspired by the game show The Price is Right.
It is a silly example, but it demonstrates a useful process called Bayesian decision analysis.
As in previous examples, we'll use data and prior distribution to compute a posterior distribution; then we'll use the posterior distribution to choose an optimal strategy in a game that involves bidding.
As part of the solution, we will use kernel density estimation (KDE) to estimate the prior distribution, and a normal distribution to compute the likelihood of the data.
And at the end of the chapter, I pose a related problem you can solve as an exercise.
The Price Is Right Problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on The Price is Right, an American television game show. They competed in a game called "The Showcase", where the objective is to guess the price of a collection of prizes. The contestant who comes closest to the actual price, without going over, wins the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid \$26,000.
Letia's showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel's showcase was \$25,347. His bid was too high, so he lost.
The actual price of Letia's showcase was \$21,578.
She was only off by \$78, so she won her showcase and, because her bid was off by less than \$250, she also won Nathaniel's showcase.
For a Bayesian thinker, this scenario suggests several questions
Step3: The following function reads the data and cleans it up a little.
Step4: I'll read both files and concatenate them.
Step5: Here's what the dataset looks like
Step7: The first two columns, Showcase 1 and Showcase 2, are the values of the showcases in dollars.
The next two columns are the bids the contestants made.
The last two columns are the differences between the actual values and the bids.
Kernel Density Estimation
This dataset contains the prices for 313 previous showcases, which we can think of as a sample from the population of possible prices.
We can use this sample to estimate the prior distribution of showcase prices. One way to do that is kernel density estimation (KDE), which uses the sample to estimate a smooth distribution. If you are not familiar with KDE, you can read about it here.
SciPy provides gaussian_kde, which takes a sample and returns an object that represents the estimated distribution.
The following function takes sample, makes a KDE, evaluates it at a given sequence of quantities, qs, and returns the result as a normalized PMF.
Step8: We can use it to estimate the distribution of values for Showcase 1
Step9: Here's what it looks like
Step10: Exercise
Step11: Distribution of Error
To update these priors, we have to answer these questions
Step12: To visualize the distribution of these differences, we can use KDE again.
Step13: Here's what these distributions look like
Step14: It looks like the bids are too low more often than too high, which makes sense. Remember that under the rules of the game, you lose if you overbid, so contestants probably underbid to some degree deliberately.
For example, if they guess that the value of the showcase is \$40,000, they might bid \$36,000 to avoid going over.
It looks like these distributions are well modeled by a normal distribution, so we can summarize them with their mean and standard deviation.
For example, here is the mean and standard deviation of Diff for Player 1.
Step15: Now we can use these differences to model the contestant's distribution of errors.
This step is a little tricky because we don't actually know the contestant's guesses; we only know what they bid.
So we have to make some assumptions
Step16: The result is an object that provides pdf, which evaluates the probability density function of the normal distribution.
For example, here is the probability density of error=-100, based on the distribution of errors for Player 1.
Step17: By itself, this number doesn't mean very much, because probability densities are not probabilities. But they are proportional to probabilities, so we can use them as likelihoods in a Bayesian update, as we'll see in the next section.
Update
Suppose you are Player 1. You see the prizes in your showcase and your guess for the total price is \$23,000.
From your guess I will subtract away each hypothetical price in the prior distribution; the result is your error under each hypothesis.
Step18: Now suppose we know, based on past performance, that your estimation error is well modeled by error_dist1.
Under that assumption we can compute the likelihood of your error under each hypothesis.
Step19: The result is an array of likelihoods, which we can use to update the prior.
Step20: Here's what the posterior distribution looks like
Step21: Because your initial guess is in the lower end of the range, the posterior distribution has shifted to the left. We can compute the posterior mean to see by how much.
Step22: Before you saw the prizes, you expected to see a showcase with a value close to \$30,000.
After making a guess of \$23,000, you updated the prior distribution.
Based on the combination of the prior and your guess, you now expect the actual price to be about \$26,000.
Exercise
Step24: Probability of Winning
Now that we have a posterior distribution for each player, let's think about strategy.
First, from the point of view of Player 1, let's compute the probability that Player 2 overbids. To keep it simple, I'll use only the performance of past players, ignoring the value of the showcase.
The following function takes a sequence of past bids and returns the fraction that overbid.
Step25: Here's an estimate for the probability that Player 2 overbids.
Step27: Now suppose Player 1 underbids by \$5000.
What is the probability that Player 2 underbids by more?
The following function uses past performance to estimate the probability that a player underbids by more than a given amount, diff
Step28: Here's the probability that Player 2 underbids by more than \$5000.
Step29: And here's the probability they underbid by more than \$10,000.
Step31: We can combine these functions to compute the probability that Player 1 wins, given the difference between their bid and the actual price
Step32: Here's the probability that you win, given that you underbid by \$5000.
Step33: Now let's look at the probability of winning for a range of possible differences.
Step34: Here's what it looks like
Step35: If you underbid by \$30,000, the chance of winning is about 30%, which is mostly the chance your opponent overbids.
As your bids gets closer to the actual price, your chance of winning approaches 1.
And, of course, if you overbid, you lose (even if your opponent also overbids).
Exercise
Step37: Decision Analysis
In the previous section we computed the probability of winning, given that we have underbid by a particular amount.
In reality the contestants don't know how much they have underbid by, because they don't know the actual price.
But they do have a posterior distribution that represents their beliefs about the actual price, and they can use that to estimate their probability of winning with a given bid.
The following function takes a possible bid, a posterior distribution of actual prices, and a sample of differences for the opponent.
It loops through the hypothetical prices in the posterior distribution and, for each price,
Computes the difference between the bid and the hypothetical price,
Computes the probability that the player wins, given that difference, and
Adds up the weighted sum of the probabilities, where the weights are the probabilities in the posterior distribution.
Step38: This loop implements the law of total probability
Step39: Now we can loop through a series of possible bids and compute the probability of winning for each one.
Step40: Here are the results.
Step41: And here's the bid that maximizes Player 1's chance of winning.
Step42: Recall that your guess was \$23,000.
Using your guess to compute the posterior distribution, the posterior mean is about \$26,000.
But the bid that maximizes your chance of winning is \$21,000.
Exercise
Step44: Maximizing Expected Gain
In the previous section we computed the bid that maximizes your chance of winning.
And if that's your goal, the bid we computed is optimal.
But winning isn't everything.
Remember that if your bid is off by \$250 or less, you win both showcases.
So it might be a good idea to increase your bid a little
Step45: For example, if the actual price is \$35000
and you bid \$30000,
you will win about \$23,600 worth of prizes on average, taking into account your probability of losing, winning one showcase, or winning both.
Step47: In reality we don't know the actual price, but we have a posterior distribution that represents what we know about it.
By averaging over the prices and probabilities in the posterior distribution, we can compute the expected gain for a particular bid.
In this context, "expected" means the average over the possible showcase values, weighted by their probabilities.
Step48: For the posterior we computed earlier, based on a guess of \$23,000, the expected gain for a bid of \$21,000 is about \$16,900.
Step49: But can we do any better?
To find out, we can loop through a range of bids and find the one that maximizes expected gain.
Step50: Here are the results.
Step51: Here is the optimal bid.
Step52: With that bid, the expected gain is about \$17,400.
Step53: Recall that your initial guess was \$23,000.
The bid that maximizes the chance of winning is \$21,000.
And the bid that maximizes your expected gain is \$22,000.
Exercise
Step59: Summary
There's a lot going on this this chapter, so let's review the steps
Step60: To test these functions, suppose we get exactly 10 orders per week for eight weeks
Step61: If you print 60 books, your net profit is \$200, as in the example.
Step62: If you print 100 books, your net profit is \$310.
Step63: Of course, in the context of the problem you don't know how many books will be ordered in any given week. You don't even know the average rate of orders. However, given the data and some assumptions about the prior, you can compute the distribution of the rate of orders.
You'll have a chance to do that, but to demonstrate the decision analysis part of the problem, I'll start with the arbitrary assumption that order rates come from a gamma distribution with mean 9.
Here's a Pmf that represents this distribution.
Step64: And here's what it looks like
Step65: Now, we could generate a predictive distribution for the number of books ordered in a given week, but in this example we have to deal with a complicated cost function. In particular, out_of_stock_cost depends on the sequence of orders.
So, rather than generate a predictive distribution, I suggest we run simulations. I'll demonstrate the steps.
First, from our hypothetical distribution of rates, we can draw a random sample of 1000 values.
Step66: For each possible rate, we can generate a sequence of 8 orders.
Step68: Each row of this array is a hypothetical sequence of orders based on a different hypothetical order rate.
Now, if you tell me how many books you printed, I can compute your expected profits, averaged over these 1000 possible sequences.
Step69: For example, here are the expected profits if you order 70, 80, or 90 books.
Step70: Now, let's sweep through a range of values and compute expected profits as a function of the number of books you print.
Step71: Here is the optimal order and the expected profit.
Step72: Now it's your turn. Choose a prior that you think is reasonable, update it with the data you are given, and then use the posterior distribution to do the analysis I just demonstrated. | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Decision Analysis
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
# Load the data files
download('https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/data/showcases.2011.csv')
download('https://raw.githubusercontent.com/AllenDowney/ThinkBayes2/master/data/showcases.2012.csv')
Explanation: This chapter presents a problem inspired by the game show The Price is Right.
It is a silly example, but it demonstrates a useful process called Bayesian decision analysis.
As in previous examples, we'll use data and prior distribution to compute a posterior distribution; then we'll use the posterior distribution to choose an optimal strategy in a game that involves bidding.
As part of the solution, we will use kernel density estimation (KDE) to estimate the prior distribution, and a normal distribution to compute the likelihood of the data.
And at the end of the chapter, I pose a related problem you can solve as an exercise.
The Price Is Right Problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on The Price is Right, an American television game show. They competed in a game called "The Showcase", where the objective is to guess the price of a collection of prizes. The contestant who comes closest to the actual price, without going over, wins the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine cabinet, a laptop computer, and a car. He bid \$26,000.
Letia's showcase included a pinball machine, a video arcade game, a pool table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel's showcase was \$25,347. His bid was too high, so he lost.
The actual price of Letia's showcase was \$21,578.
She was only off by \$78, so she won her showcase and, because her bid was off by less than 250, she also won Nathaniel's showcase.
For a Bayesian thinker, this scenario suggests several questions:
Before seeing the prizes, what prior beliefs should the contestants have about the price of the showcase?
After seeing the prizes, how should the contestants update those beliefs?
Based on the posterior distribution, what should the contestants bid?
The third question demonstrates a common use of Bayesian methods: decision analysis.
This problem is inspired by an example in Cameron Davidson-Pilon's book, Probablistic Programming and Bayesian Methods for Hackers.
The Prior
To choose a prior distribution of prices, we can take advantage of data from previous episodes. Fortunately, fans of the show keep detailed records.
For this example, I downloaded files containing the price of each showcase from the 2011 and 2012 seasons and the bids offered by the contestants.
The following cells load the data files.
End of explanation
import pandas as pd
def read_data(filename):
Read the showcase price data.
df = pd.read_csv(filename, index_col=0, skiprows=[1])
return df.dropna().transpose()
Explanation: The following function reads the data and cleans it up a little.
End of explanation
df2011 = read_data('showcases.2011.csv')
df2012 = read_data('showcases.2012.csv')
df = pd.concat([df2011, df2012], ignore_index=True)
print(df2011.shape, df2012.shape, df.shape)
Explanation: I'll read both files and concatenate them.
End of explanation
df.head(3)
Explanation: Here's what the dataset looks like:
End of explanation
from scipy.stats import gaussian_kde
from empiricaldist import Pmf
def kde_from_sample(sample, qs):
Make a kernel density estimate from a sample.
kde = gaussian_kde(sample)
ps = kde(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
Explanation: The first two columns, Showcase 1 and Showcase 2, are the values of the showcases in dollars.
The next two columns are the bids the contestants made.
The last two columns are the differences between the actual values and the bids.
Kernel Density Estimation
This dataset contains the prices for 313 previous showcases, which we can think of as a sample from the population of possible prices.
We can use this sample to estimate the prior distribution of showcase prices. One way to do that is kernel density estimation (KDE), which uses the sample to estimate a smooth distribution. If you are not familiar with KDE, you can read about it here.
SciPy provides gaussian_kde, which takes a sample and returns an object that represents the estimated distribution.
The following function takes sample, makes a KDE, evaluates it at a given sequence of quantities, qs, and returns the result as a normalized PMF.
End of explanation
import numpy as np
qs = np.linspace(0, 80000, 81)
prior1 = kde_from_sample(df['Showcase 1'], qs)
Explanation: We can use it to estimate the distribution of values for Showcase 1:
End of explanation
from utils import decorate
def decorate_value(title=''):
decorate(xlabel='Showcase value ($)',
ylabel='PMF',
title=title)
prior1.plot(label='Prior 1')
decorate_value('Prior distribution of showcase value')
Explanation: Here's what it looks like:
End of explanation
# Solution goes here
# Solution goes here
Explanation: Exercise: Use this function to make a Pmf that represents the prior distribution for Showcase 2, and plot it.
End of explanation
sample_diff1 = df['Bid 1'] - df['Showcase 1']
sample_diff2 = df['Bid 2'] - df['Showcase 2']
Explanation: Distribution of Error
To update these priors, we have to answer these questions:
What data should we consider and how should we quantify it?
Can we compute a likelihood function; that is, for each hypothetical price, can we compute the conditional likelihood of the data?
To answer these questions, I will model each contestant as a price-guessing instrument with known error characteristics.
In this model, when the contestant sees the prizes, they guess the price of each prize and add up the prices.
Let's call this total guess.
Now the question we have to answer is, "If the actual price is price, what is the likelihood that the contestant's guess would be guess?"
Equivalently, if we define error = guess - price, we can ask, "What is the likelihood that the contestant's guess is off by error?"
To answer this question, I'll use the historical data again.
For each showcase in the dataset, let's look at the difference between the contestant's bid and the actual price:
End of explanation
qs = np.linspace(-40000, 20000, 61)
kde_diff1 = kde_from_sample(sample_diff1, qs)
kde_diff2 = kde_from_sample(sample_diff2, qs)
Explanation: To visualize the distribution of these differences, we can use KDE again.
End of explanation
kde_diff1.plot(label='Diff 1', color='C8')
kde_diff2.plot(label='Diff 2', color='C4')
decorate(xlabel='Difference in value ($)',
ylabel='PMF',
title='Difference between bid and actual value')
Explanation: Here's what these distributions look like:
End of explanation
mean_diff1 = sample_diff1.mean()
std_diff1 = sample_diff1.std()
print(mean_diff1, std_diff1)
Explanation: It looks like the bids are too low more often than too high, which makes sense. Remember that under the rules of the game, you lose if you overbid, so contestants probably underbid to some degree deliberately.
For example, if they guess that the value of the showcase is \$40,000, they might bid \$36,000 to avoid going over.
It looks like these distributions are well modeled by a normal distribution, so we can summarize them with their mean and standard deviation.
For example, here is the mean and standard deviation of Diff for Player 1.
End of explanation
from scipy.stats import norm
error_dist1 = norm(0, std_diff1)
Explanation: Now we can use these differences to model the contestant's distribution of errors.
This step is a little tricky because we don't actually know the contestant's guesses; we only know what they bid.
So we have to make some assumptions:
I'll assume that contestants underbid because they are being strategic, and that on average their guesses are accurate. In other words, the mean of their errors is 0.
But I'll assume that the spread of the differences reflects the actual spread of their errors. So, I'll use the standard deviation of the differences as the standard deviation of their errors.
Based on these assumptions, I'll make a normal distribution with parameters 0 and std_diff1.
SciPy provides an object called norm that represents a normal distribution with the given mean and standard deviation.
End of explanation
error = -100
error_dist1.pdf(error)
Explanation: The result is an object that provides pdf, which evaluates the probability density function of the normal distribution.
For example, here is the probability density of error=-100, based on the distribution of errors for Player 1.
End of explanation
guess1 = 23000
error1 = guess1 - prior1.qs
Explanation: By itself, this number doesn't mean very much, because probability densities are not probabilities. But they are proportional to probabilities, so we can use them as likelihoods in a Bayesian update, as we'll see in the next section.
Update
Suppose you are Player 1. You see the prizes in your showcase and your guess for the total price is \$23,000.
From your guess I will subtract away each hypothetical price in the prior distribution; the result is your error under each hypothesis.
End of explanation
likelihood1 = error_dist1.pdf(error1)
Explanation: Now suppose we know, based on past performance, that your estimation error is well modeled by error_dist1.
Under that assumption we can compute the likelihood of your error under each hypothesis.
End of explanation
posterior1 = prior1 * likelihood1
posterior1.normalize()
Explanation: The result is an array of likelihoods, which we can use to update the prior.
End of explanation
prior1.plot(color='C5', label='Prior 1')
posterior1.plot(color='C4', label='Posterior 1')
decorate_value('Prior and posterior distribution of showcase value')
Explanation: Here's what the posterior distribution looks like:
End of explanation
prior1.mean(), posterior1.mean()
Explanation: Because your initial guess is in the lower end of the range, the posterior distribution has shifted to the left. We can compute the posterior mean to see by how much.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Before you saw the prizes, you expected to see a showcase with a value close to \$30,000.
After making a guess of \$23,000, you updated the prior distribution.
Based on the combination of the prior and your guess, you now expect the actual price to be about \$26,000.
Exercise: Now suppose you are Player 2. When you see your showcase, you guess that the total price is \$38,000.
Use diff2 to construct a normal distribution that represents the distribution of your estimation errors.
Compute the likelihood of your guess for each actual price and use it to update prior2.
Plot the posterior distribution and compute the posterior mean. Based on the prior and your guess, what do you expect the actual price of the showcase to be?
End of explanation
def prob_overbid(sample_diff):
Compute the probability of an overbid.
return np.mean(sample_diff > 0)
Explanation: Probability of Winning
Now that we have a posterior distribution for each player, let's think about strategy.
First, from the point of view of Player 1, let's compute the probability that Player 2 overbids. To keep it simple, I'll use only the performance of past players, ignoring the value of the showcase.
The following function takes a sequence of past bids and returns the fraction that overbid.
End of explanation
prob_overbid(sample_diff2)
Explanation: Here's an estimate for the probability that Player 2 overbids.
End of explanation
def prob_worse_than(diff, sample_diff):
Probability opponent diff is worse than given diff.
return np.mean(sample_diff < diff)
Explanation: Now suppose Player 1 underbids by \$5000.
What is the probability that Player 2 underbids by more?
The following function uses past performance to estimate the probability that a player underbids by more than a given amount, diff:
End of explanation
prob_worse_than(-5000, sample_diff2)
Explanation: Here's the probability that Player 2 underbids by more than \$5000.
End of explanation
prob_worse_than(-10000, sample_diff2)
Explanation: And here's the probability they underbid by more than \$10,000.
End of explanation
def compute_prob_win(diff, sample_diff):
Probability of winning for a given diff.
# if you overbid you lose
if diff > 0:
return 0
# if the opponent overbids, you win
p1 = prob_overbid(sample_diff)
# or of their bid is worse than yours, you win
p2 = prob_worse_than(diff, sample_diff)
# p1 and p2 are mutually exclusive, so we can add them
return p1 + p2
Explanation: We can combine these functions to compute the probability that Player 1 wins, given the difference between their bid and the actual price:
End of explanation
compute_prob_win(-5000, sample_diff2)
Explanation: Here's the probability that you win, given that you underbid by \$5000.
End of explanation
xs = np.linspace(-30000, 5000, 121)
ys = [compute_prob_win(x, sample_diff2)
for x in xs]
Explanation: Now let's look at the probability of winning for a range of possible differences.
End of explanation
import matplotlib.pyplot as plt
plt.plot(xs, ys)
decorate(xlabel='Difference between bid and actual price ($)',
ylabel='Probability of winning',
title='Player 1')
Explanation: Here's what it looks like:
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: If you underbid by \$30,000, the chance of winning is about 30%, which is mostly the chance your opponent overbids.
As your bids gets closer to the actual price, your chance of winning approaches 1.
And, of course, if you overbid, you lose (even if your opponent also overbids).
Exercise: Run the same analysis from the point of view of Player 2. Using the sample of differences from Player 1, compute:
The probability that Player 1 overbids.
The probability that Player 1 underbids by more than \$5000.
The probability that Player 2 wins, given that they underbid by \$5000.
Then plot the probability that Player 2 wins for a range of possible differences between their bid and the actual price.
End of explanation
def total_prob_win(bid, posterior, sample_diff):
Computes the total probability of winning with a given bid.
bid: your bid
posterior: Pmf of showcase value
sample_diff: sequence of differences for the opponent
returns: probability of winning
total = 0
for price, prob in posterior.items():
diff = bid - price
total += prob * compute_prob_win(diff, sample_diff)
return total
Explanation: Decision Analysis
In the previous section we computed the probability of winning, given that we have underbid by a particular amount.
In reality the contestants don't know how much they have underbid by, because they don't know the actual price.
But they do have a posterior distribution that represents their beliefs about the actual price, and they can use that to estimate their probability of winning with a given bid.
The following function takes a possible bid, a posterior distribution of actual prices, and a sample of differences for the opponent.
It loops through the hypothetical prices in the posterior distribution and, for each price,
Computes the difference between the bid and the hypothetical price,
Computes the probability that the player wins, given that difference, and
Adds up the weighted sum of the probabilities, where the weights are the probabilities in the posterior distribution.
End of explanation
total_prob_win(25000, posterior1, sample_diff2)
Explanation: This loop implements the law of total probability:
$$P(win) = \sum_{price} P(price) ~ P(win ~|~ price)$$
Here's the probability that Player 1 wins, based on a bid of \$25,000 and the posterior distribution posterior1.
End of explanation
bids = posterior1.qs
probs = [total_prob_win(bid, posterior1, sample_diff2)
for bid in bids]
prob_win_series = pd.Series(probs, index=bids)
Explanation: Now we can loop through a series of possible bids and compute the probability of winning for each one.
End of explanation
prob_win_series.plot(label='Player 1', color='C1')
decorate(xlabel='Bid ($)',
ylabel='Probability of winning',
title='Optimal bid: probability of winning')
Explanation: Here are the results.
End of explanation
prob_win_series.idxmax()
prob_win_series.max()
Explanation: And here's the bid that maximizes Player 1's chance of winning.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Recall that your guess was \$23,000.
Using your guess to compute the posterior distribution, the posterior mean is about \$26,000.
But the bid that maximizes your chance of winning is \$21,000.
Exercise: Do the same analysis for Player 2.
End of explanation
def compute_gain(bid, price, sample_diff):
Compute expected gain given a bid and actual price.
diff = bid - price
prob = compute_prob_win(diff, sample_diff)
# if you are within 250 dollars, you win both showcases
if -250 <= diff <= 0:
return 2 * price * prob
else:
return price * prob
Explanation: Maximizing Expected Gain
In the previous section we computed the bid that maximizes your chance of winning.
And if that's your goal, the bid we computed is optimal.
But winning isn't everything.
Remember that if your bid is off by \$250 or less, you win both showcases.
So it might be a good idea to increase your bid a little: it increases the chance you overbid and lose, but it also increases the chance of winning both showcases.
Let's see how that works out.
The following function computes how much you will win, on average, given your bid, the actual price, and a sample of errors for your opponent.
End of explanation
compute_gain(30000, 35000, sample_diff2)
Explanation: For example, if the actual price is \$35000
and you bid \$30000,
you will win about \$23,600 worth of prizes on average, taking into account your probability of losing, winning one showcase, or winning both.
End of explanation
def expected_gain(bid, posterior, sample_diff):
Compute the expected gain of a given bid.
total = 0
for price, prob in posterior.items():
total += prob * compute_gain(bid, price, sample_diff)
return total
Explanation: In reality we don't know the actual price, but we have a posterior distribution that represents what we know about it.
By averaging over the prices and probabilities in the posterior distribution, we can compute the expected gain for a particular bid.
In this context, "expected" means the average over the possible showcase values, weighted by their probabilities.
End of explanation
expected_gain(21000, posterior1, sample_diff2)
Explanation: For the posterior we computed earlier, based on a guess of \$23,000, the expected gain for a bid of \$21,000 is about \$16,900.
End of explanation
bids = posterior1.qs
gains = [expected_gain(bid, posterior1, sample_diff2) for bid in bids]
expected_gain_series = pd.Series(gains, index=bids)
Explanation: But can we do any better?
To find out, we can loop through a range of bids and find the one that maximizes expected gain.
End of explanation
expected_gain_series.plot(label='Player 1', color='C2')
decorate(xlabel='Bid ($)',
ylabel='Expected gain ($)',
title='Optimal bid: expected gain')
Explanation: Here are the results.
End of explanation
expected_gain_series.idxmax()
Explanation: Here is the optimal bid.
End of explanation
expected_gain_series.max()
Explanation: With that bid, the expected gain is about \$17,400.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Recall that your initial guess was \$23,000.
The bid that maximizes the chance of winning is \$21,000.
And the bid that maximizes your expected gain is \$22,000.
Exercise: Do the same analysis for Player 2.
End of explanation
def print_cost(printed):
Compute print costs.
printed: integer number printed
if printed < 100:
return printed * 5
else:
return printed * 4.5
def total_income(printed, orders):
Compute income.
printed: integer number printed
orders: sequence of integer number of books ordered
sold = min(printed, np.sum(orders))
return sold * 10
def inventory_cost(printed, orders):
Compute inventory costs.
printed: integer number printed
orders: sequence of integer number of books ordered
excess = printed - np.sum(orders)
if excess > 0:
return excess * 2
else:
return 0
def out_of_stock_cost(printed, orders):
Compute out of stock costs.
printed: integer number printed
orders: sequence of integer number of books ordered
weeks = len(orders)
total_orders = np.cumsum(orders)
for i, total in enumerate(total_orders):
if total > printed:
return (weeks-i) * 50
return 0
def compute_profit(printed, orders):
Compute profit.
printed: integer number printed
orders: sequence of integer number of books ordered
return (total_income(printed, orders) -
print_cost(printed)-
out_of_stock_cost(printed, orders) -
inventory_cost(printed, orders))
Explanation: Summary
There's a lot going on in this chapter, so let's review the steps:
First we used KDE and data from past shows to estimate prior distributions for the values of the showcases.
Then we used bids from past shows to model the distribution of errors as a normal distribution.
We did a Bayesian update using the distribution of errors to compute the likelihood of the data.
We used the posterior distribution for the value of the showcase to compute the probability of winning for each possible bid, and identified the bid that maximizes the chance of winning.
Finally, we used probability of winning to compute the expected gain for each possible bid, and identified the bid that maximizes expected gain.
Incidentally, this example demonstrates the hazard of using the word "optimal" without specifying what you are optimizing.
The bid that maximizes the chance of winning is not generally the same as the bid that maximizes expected gain.
Discussion
When people discuss the pros and cons of Bayesian estimation, as contrasted with classical methods sometimes called "frequentist", they often claim that in many cases Bayesian methods and frequentist methods produce the same results.
In my opinion, this claim is mistaken because Bayesian and frequentist methods produce different kinds of results:
The result of frequentist methods is usually a single value that is considered to be the best estimate (by one of several criteria) or an interval that quantifies the precision of the estimate.
The result of Bayesian methods is a posterior distribution that represents all possible outcomes and their probabilities.
Granted, you can use the posterior distribution to choose a "best" estimate or compute an interval.
And in that case the result might be the same as the frequentist estimate.
But doing so discards useful information and, in my opinion, eliminates the primary benefit of Bayesian methods: the posterior distribution is more useful than a single estimate, or even an interval.
The example in this chapter demonstrates the point.
Using the entire posterior distribution, we can compute the bid that maximizes the probability of winning, or the bid that maximizes expected gain, even if the rules for computing the gain are complicated (and nonlinear).
With a single estimate or an interval, we can't do that, even if they are "optimal" in some sense.
In general, frequentist estimation provides little guidance for decision-making.
If you hear someone say that Bayesian and frequentist methods produce the same results, you can be confident that they don't understand Bayesian methods.
Exercises
Exercise: When I worked in Cambridge, Massachusetts, I usually took the subway to South Station and then a commuter train home to Needham. Because the subway was unpredictable, I left the office early enough that I could wait up to 15 minutes and still catch the commuter train.
When I got to the subway stop, there were usually about 10 people waiting on the platform. If there were fewer than that, I figured I just missed a train, so I expected to wait a little longer than usual. And if there were more than that, I expected another train soon.
But if there were a lot more than 10 passengers waiting, I inferred that something was wrong, and I expected a long wait. In that case, I might leave and take a taxi.
We can use Bayesian decision analysis to quantify the analysis I did intuitively. Given the number of passengers on the platform, how long should we expect to wait? And when should we give up and take a taxi?
My analysis of this problem is in redline.ipynb, which is in the repository for this book. Click here to run this notebook on Colab.
Exercise: This exercise is inspired by a true story. In 2001 I created Green Tea Press to publish my books, starting with Think Python. I ordered 100 copies from a short run printer and made the book available for sale through a distributor.
After the first week, the distributor reported that 12 copies were sold. Based on that report, I thought I would run out of copies in about 8 weeks, so I got ready to order more. My printer offered me a discount if I ordered more than 1000 copies, so I went a little crazy and ordered 2000.
A few days later, my mother called to tell me that her copies of the book had arrived. Surprised, I asked how many. She said ten.
It turned out I had sold only two books to non-relatives. And it took a lot longer than I expected to sell 2000 copies.
The details of this story are unique, but the general problem is something almost every retailer has to figure out. Based on past sales, how do you predict future sales? And based on those predictions, how do you decide how much to order and when?
Often the cost of a bad decision is complicated. If you place a lot of small orders rather than one big one, your costs are likely to be higher. If you run out of inventory, you might lose customers. And if you order too much, you have to pay the various costs of holding inventory.
So, let's solve a version of the problem I faced. It will take some work to set up the problem; the details are in the notebook for this chapter.
Suppose you start selling books online. During the first week you sell 10 copies (and let's assume that none of the customers are your mother). During the second week you sell 9 copies.
Assuming that the arrival of orders is a Poisson process, we can think of the weekly orders as samples from a Poisson distribution with an unknown rate.
We can use orders from past weeks to estimate the parameter of this distribution, generate a predictive distribution for future weeks, and compute the order size that maximized expected profit.
Suppose the cost of printing the book is \$5 per copy,
But if you order 100 or more, it's \$4.50 per copy.
For every book you sell, you get \$10.
But if you run out of books before the end of 8 weeks, you lose \$50 in future sales for every week you are out of stock.
If you have books left over at the end of 8 weeks, you lose \$2 in inventory costs per extra book.
For example, suppose you get orders for 10 books per week, every week. If you order 60 books,
The total cost is \$300.
You sell all 60 books, so you make \$600.
But the book is out of stock for two weeks, so you lose \$100 in future sales.
In total, your profit is \$200.
If you order 100 books,
The total cost is \$450.
You sell 80 books, so you make \$800.
But you have 20 books left over at the end, so you lose \$40.
In total, your profit is \$310.
Combining these costs with your predictive distribution, how many books should you order to maximize your expected profit?
To get you started, the following functions compute profits and costs according to the specification of the problem:
End of explanation
always_10 = [10] * 8
always_10
Explanation: To test these functions, suppose we get exactly 10 orders per week for eight weeks:
End of explanation
compute_profit(60, always_10)
Explanation: If you print 60 books, your net profit is \$200, as in the example.
End of explanation
compute_profit(100, always_10)
Explanation: If you print 100 books, your net profit is \$310.
End of explanation
from scipy.stats import gamma
alpha = 9
qs = np.linspace(0, 25, 101)
ps = gamma.pdf(qs, alpha)
pmf = Pmf(ps, qs)
pmf.normalize()
pmf.mean()
Explanation: Of course, in the context of the problem you don't know how many books will be ordered in any given week. You don't even know the average rate of orders. However, given the data and some assumptions about the prior, you can compute the distribution of the rate of orders.
You'll have a chance to do that, but to demonstrate the decision analysis part of the problem, I'll start with the arbitrary assumption that order rates come from a gamma distribution with mean 9.
Here's a Pmf that represents this distribution.
End of explanation
pmf.plot(color='C1')
decorate(xlabel=r'Book ordering rate ($\lambda$)',
ylabel='PMF')
Explanation: And here's what it looks like:
End of explanation
rates = pmf.choice(1000)
np.mean(rates)
Explanation: Now, we could generate a predictive distribution for the number of books ordered in a given week, but in this example we have to deal with a complicated cost function. In particular, out_of_stock_cost depends on the sequence of orders.
So, rather than generate a predictive distribution, I suggest we run simulations. I'll demonstrate the steps.
First, from our hypothetical distribution of rates, we can draw a random sample of 1000 values.
End of explanation
np.random.seed(17)
order_array = np.random.poisson(rates, size=(8, 1000)).transpose()
order_array[:5, :]
Explanation: For each possible rate, we can generate a sequence of 8 orders.
End of explanation
def compute_expected_profits(printed, order_array):
Compute profits averaged over a sample of orders.
printed: number printed
order_array: one row per sample, one column per week
profits = [compute_profit(printed, orders)
for orders in order_array]
return np.mean(profits)
Explanation: Each row of this array is a hypothetical sequence of orders based on a different hypothetical order rate.
Now, if you tell me how many books you printed, I can compute your expected profits, averaged over these 1000 possible sequences.
End of explanation
compute_expected_profits(70, order_array)
compute_expected_profits(80, order_array)
compute_expected_profits(90, order_array)
Explanation: For example, here are the expected profits if you order 70, 80, or 90 books.
End of explanation
printed_array = np.arange(70, 110)
t = [compute_expected_profits(printed, order_array)
for printed in printed_array]
expected_profits = pd.Series(t, printed_array)
expected_profits.plot(label='')
decorate(xlabel='Number of books printed',
ylabel='Expected profit ($)')
Explanation: Now, let's sweep through a range of values and compute expected profits as a function of the number of books you print.
End of explanation
expected_profits.idxmax(), expected_profits.max()
Explanation: Here is the optimal order and the expected profit.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Now it's your turn. Choose a prior that you think is reasonable, update it with the data you are given, and then use the posterior distribution to do the analysis I just demonstrated.
End of explanation |
15,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collating XML with embedded markup
This notebook illustrates one strategy for collating XML with embedded markup. It flattens the markup into milestones and retains the milestones in the "t" values and the output, while ignoring them in the "n" values for collation purposes. This ignores markup for collation purposes, but retains it (transformed) in the output, which may or may not be what you want. It is not possible to retain the original element structure completely unchanged because elements may begin in the middle of a word token, which invites an overlap conflict. Flattening the original markup is one strategy for negotiating the conflict.
The challenge
The principal challenge involves tokenizing the input into words for collation purposes while avoiding overlap. For example, if the input includes the following
Step1: Imports
We use the os module to read the filenames from the directory that contains the input files. We use pulldom to parse the input XML and minidom to pretty print the XML output. We use re because we strip markup (to create the "n" values) using a regex. We use collatex for collation.
Step2: Constants
We create a regex that matches all XML tags, which we’ll use to strip them from the "n" values. To avoid having to interpret it repeatedly, we use re.compile() to assign it to a constant. CollateX will output the <milestone> elements having escaped the angle brackets as &lt; and &gt;, and we use a regex as part of a process to restore the original angle brackets in the TEI(-like) output.
Step4: Functions
We define several functions that we’ll use to process our data.
tokenize()
The tokenize() function uses pulldom to read the XML input and create a list of word tokens for use in collation. We care only about <w> elements inside <p> elements, so we keep track of where we are with the in_p variable. There are <p> elements that we don’t care about (e.g., in the <teiHeader>), but our XSLT didn’t introduce <w> elements into them, so we don’t have to worry about them.
Step5: create_token()
The create_token() function builds a dictionary with "t" and "n" values for CollateX.
Step6: normalize()
The normalize() function creates an "n" value from a "t" value by stripping out markup.
Step7: Process the witnesses
We create a list (collatex_witness_input) to hold the data for the four witnesses (CollateX will require this), and we process each witness in a loop, appending the results of the processing to the list. We extract a siglum value from the witness filename.
Step8: Collate the witness information and output the results
We include the witness information in the JSON structure that CollateX expects as JSON input, and then generate several types of output.
Step9: Plain text table
The plain text table outputs the "t" values in the cells.
Step10: JSON output
The JSON output lets us confirm that our normalization is working correctly.
Step11: html2
html2 output is color coded, to make it easy to see agreements and disagreements. html2 output is always vertical.
Step12: Variant graph
The first version of the variant graph groups the tokens according to blocks of agreement and disagreement.
Step13: We can turn off segmentation (grouping) to see each token in its own node.
Step14: We can generate generic XML markup, suitable for an interlinear collation, and pretty-print it with | Python Code:
#!/usr/bin/env python
# Filename: xml_and_python.py
# Developer: David J. Birnbaum ([email protected])
# First version: 2017-07-23
# Last revised: 2017-07-27
#
# Syntax: python xml_and_python.py
# Input: Representative input is embedded in the Python script
# Output: stdout
#
# Synopsis: Collate XML around embedded markup
#
# Original TEI documents are processed with XSLT to tag words as <w> elements. Markup
# intermingled with text is flattened into <milestone> elements. The python script
# retains the flattened markup in the "t" values and strips it from the "n" value.
Explanation: Collating XML with embedded markup
This notebook illustrates one strategy for collating XML with embedded markup. It flattens the markup into milestones and retains the milestones in the "t" values and the output, while ignoring them in the "n" values for collation purposes. This ignores markup for collation purposes, but retains it (transformed) in the output, which may or may not be what you want. It is not possible to retain the original element structure completely unchanged because elements may begin in the middle of a word token, which invites an overlap conflict. Flattening the original markup is one strategy for negotiating the conflict.
The challenge
The principal challenge involves tokenizing the input into words for collation purposes while avoiding overlap. For example, if the input includes the following:
```xml
<p>Cur<hi rend="underline">ly Lar</hi>ry Moe</p>
```
there are three word tokens: “Curly”, “Larry”, “Moe”. But because the <hi> start tag is in the middle of the first word and the matching end tag is in the middle of the second, it isn’t possible simply to wrap the words in <w> tags (never mind the challenges of tokenizing mixed content in the first place). The problem is that the XML model doesn’t match the task because in the XML model the three names are not content objects, that is, element or text() nodes. The only reason we don’t see the overlap until we try to convert the pseudo-markup into real markup is because its implicit nature disguises the reality.
The data
The input in this example consists of the four sample files (in the witnesses subdirectory), which are valid (if also vapid) TEI. Here’s one example:
witnessA.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?>
<?xml-model href="http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng" type="application/xml"
schematypens="http://purl.oclc.org/dsdl/schematron"?>
<TEI xmlns="http://www.tei-c.org/ns/1.0">
<teiHeader>
<fileDesc>
<titleStmt>
<title>Title</title>
</titleStmt>
<publicationStmt>
<p>Publication Information</p>
</publicationStmt>
<sourceDesc>
<p>Information about the source</p>
</sourceDesc>
</fileDesc>
</teiHeader>
<text>
<body>
<p>This is the first paragraph.</p>
<p>This is <subst><add>the</add><del>a</del></subst> sec<add>ond</add> paragraph.</p>
</body>
</text>
</TEI>```
In human terms, the second paragraph reads something like:
This is the second paragraph.
except that “the” has been added as a replacement for “a” and the last three letters of “second” have been inserted. that is, the original version read something like:
This is a sec paragraph.
We pass these four files through three XSLT transformation before letting python perform the collation. We use XSLT because python support for complex XML processing is limited. The only XML-idiomatic of the major python libraries, lxml, supports only XSLT 1.0, and pulldom doesn’t even know how to distinguish significant and insignificant white space. It is possible to do all of the processing within python using either of these libraries, but awareness of white-space significance plus features available in XSLT 2.0, such as tokenize(), <xsl:analyze-string>, and <xsl:for-each-group>, make XSLT 2.0 processing more idiomatic than the python alternatives.
In Real Life we would probably combine the three XSLT transformations into one to avoid the overhead of launching new java processes for each step in the pipeline. We have separated the steps here to make them easier to examine independently of one another.
XSLT transformations
The three XSLT transformation work as follows.
flatten.xsl
```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xpath-default-namespace="http://www.tei-c.org/ns/1.0"
xmlns:math="http://www.w3.org/2005/xpath-functions/math" exclude-result-prefixes="xs math"
version="3.0" xmlns="http://www.tei-c.org/ns/1.0">
<xsl:strip-space elements="*"/>
<!--
remove <del> elements paired with <add> inside <subst>
flatten all other markup in text//p
-->
<xsl:template match="node()">
<xsl:copy>
<xsl:apply-templates select="@* | node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="subst" priority="10">
<xsl:apply-templates select="add"/>
</xsl:template>
<xsl:template match="//text//p//*">
<milestone unit="{name()}" type="start"/>
<xsl:apply-templates/>
<milestone unit="{name()}" type="end"/>
</xsl:template>
</xsl:stylesheet>
```
This removes all <del> elements inside <subst> (but retains them elsewhere) and flattens all other decendants of <p> elements within the <text> into <milestone> elements. These decisions are governed by the desired collation outcome, and we could have made alternative decisions. The interim output is difficult to read because all insignificant white space has been removed (which makes the subsequent word tokenization more natural), but it looks like the following:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<TEI xmlns="http://www.tei-c.org/ns/1.0"><teiHeader><fileDesc><titleStmt><title>Title</title></titleStmt><publicationStmt>
<?xml-model href="http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?><?xml-model href="http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng" type="application/xml"
schematypens="http://purl.oclc.org/dsdl/schematron"?><p>Publication Information</p>
</publicationStmt><sourceDesc>
<p>Information about the source</p>
</sourceDesc></fileDesc></teiHeader><text>
<body><p>This is the first paragraph.</p><p>This is <milestone unit="add" type="start"/>the<milestone unit="add" type="end"/> sec<milestone unit="add" type="start"/>ond<milestone unit="add" type="end"/> paragraph.</p></body>
</text></TEI>
```
Note that, for example, an original <add> element has been replaced by empty <milestone unit="add" type="start"/> and <milestone unit="add" type="end"/> elements. If what were originally start and end tags wind up inside different word tokens, this flattening ensures that no syntactic overlap will occur. The output of this and the other two XSLT transformations is TEI conformant, although these are a transient file within which TEI conformity doesn’t really matter.
add-wb.xsl
add-wb.xsl inserts empty <milestone unit="wb"/> elements in place of significant white space within paragraphs.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xpath-default-namespace="http://www.tei-c.org/ns/1.0"
xmlns:math="http://www.w3.org/2005/xpath-functions/math" exclude-result-prefixes="xs math"
version="3.0" xmlns="http://www.tei-c.org/ns/1.0">
<xsl:template match="*">
<xsl:copy>
<xsl:apply-templates select="node() | @*"/>
</xsl:copy>
</xsl:template>
<xsl:template match="milestone">
<xsl:sequence select="."/>
</xsl:template>
<xsl:template match="text()">
<xsl:analyze-string select="." regex="\s+">
<xsl:matching-substring>
<milestone unit="wb"/>
</xsl:matching-substring>
<xsl:non-matching-substring>
<xsl:sequence select="."/>
</xsl:non-matching-substring>
</xsl:analyze-string>
</xsl:template>
</xsl:stylesheet>
```
The interim output looks like the following, and it is also difficult to read because insignificant white space has been removed. The key features are the new <milestone unit="wb"> elements where there used to be white space between words inside text() nodes.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<TEI xmlns="http://www.tei-c.org/ns/1.0"><teiHeader><fileDesc><titleStmt><title>Title</title></titleStmt><publicationStmt>
<p>Publication<milestone unit="wb"/>Information</p>
</publicationStmt><sourceDesc>
<p>Information<milestone unit="wb"/>about<milestone unit="wb"/>the<milestone unit="wb"/>source</p>
</sourceDesc></fileDesc></teiHeader><text>
<body><p>This<milestone unit="wb"/>is<milestone unit="wb"/>the<milestone unit="wb"/>first<milestone unit="wb"/>paragraph.</p><p>This<milestone unit="wb"/>is<milestone unit="wb"/><milestone unit="add" type="start"/>the<milestone unit="add" type="end"/><milestone unit="wb"/>sec<milestone unit="add" type="start"/>ond<milestone unit="add" type="end"/><milestone unit="wb"/>paragraph.</p></body>
</text></TEI>
```
wrap_words.xsl
wrap_words.xsl converts the <milestone unit="wb"> milestone tags to <w> container tags around each word token, including adjacent flattened <milestone> elements.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xpath-default-namespace="http://www.tei-c.org/ns/1.0" xmlns="http://www.tei-c.org/ns/1.0"
xmlns:math="http://www.w3.org/2005/xpath-functions/math" exclude-result-prefixes="xs math"
version="3.0">
<xsl:output method="xml" indent="yes"/>
<xsl:template match="node() | @*">
<xsl:copy>
<xsl:apply-templates select="node() | @*"/>
</xsl:copy>
</xsl:template>
<xsl:template match="//text//p">
<p>
<xsl:for-each-group select="node()" group-starting-with="milestone[@unit eq 'wb']">
<w>
<xsl:copy-of select="current-group()[not(self::milestone[@unit eq 'wb'])]"/>
</w>
</xsl:for-each-group>
</p>
</xsl:template>
</xsl:stylesheet>```
The output after this third XSLT transformation, which will be parsed by CollateX, looks like the following:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<TEI xmlns="http://www.tei-c.org/ns/1.0">
<teiHeader>
<fileDesc>
<titleStmt>
<title>Title</title>
</titleStmt>
<publicationStmt>
<p>Publication<milestone unit="wb"/>Information</p>
</publicationStmt>
<sourceDesc>
<p>Information<milestone unit="wb"/>about<milestone unit="wb"/>the<milestone unit="wb"/>source</p>
</sourceDesc>
</fileDesc>
</teiHeader>
<text>
<body>
<p>
<w>This</w>
<w>is</w>
<w>the</w>
<w>first</w>
<w>paragraph.</w>
</p>
<p>
<w>This</w>
<w>is</w>
<w>
<milestone unit="add" type="start"/>the<milestone unit="add" type="end"/>
</w>
<w>sec<milestone unit="add" type="start"/>ond<milestone unit="add" type="end"/>
</w>
<w>paragraph.</w>
</p>
</body>
</text>
</TEI>
```
Python
Python processing takes over from here.
Documentation
We begin the python file with documentation. For information about the first line, look up “shebang” in a search engine.
End of explanation
import os
from xml.dom import pulldom
from xml.dom.minidom import parseString
import re
from collatex import *
Explanation: Imports
We use the os module to read the filenames from the directory that contains the input files. We use pulldom to parse the input XML and minidom to pretty print the XML output. We use re because we strip markup (to create the "n" values) using a regex. We use collatex for collation.
End of explanation
RE_MARKUP = re.compile(r'<.+?>')
RE_MILESTONE = re.compile(r'<milestone(.*?)>')
Explanation: Constants
We create a regex that matches all XML tags, which we’ll use to strip them from the "n" values. To avoid having to interpret it repeatedly, we use re.compile() to assign it to a constant. CollateX will output the <milestone> elements having escaped the angle brackets as &lt; and &gt;, and we use a regex as part of a process to restore the original angle brackets in the TEI(-like) output.
End of explanation
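# Illustrative sanity check (not part of the original notebook): RE_MARKUP strips every tag,
# so a flattened word token keeps only its text content.
sample = 'sec<milestone unit="add" type="start"/>ond<milestone unit="add" type="end"/> '
print(RE_MARKUP.sub('', sample))  # -> 'second '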
def tokenize(input_xml):
"""Return list of word tokens, with internal milestone markup, as strings
We did our tokenization with XSLT, and the input into CollateX has word
tokens tagged as <w> elements. Tokenization at this next stage involves
selecting the <w> elements and ignoring the rest.
"""
in_p = False # the only elements inside <body> are <p>, and, inside <p>, <milestone> and <w>
words = []
doc = pulldom.parseString(input_xml)
for event, node in doc:
# <p>
if event == pulldom.START_ELEMENT and node.localName == 'p':
in_p = True
elif event == pulldom.END_ELEMENT and node.localName == 'p':
in_p = False
# descendants of <p>
elif event == pulldom.START_ELEMENT and in_p:
if node.localName == 'w':
doc.expandNode(node)
words.append(re.sub(r'\n|<w>|</w>', r' ', node.toxml()).strip() + " ")
return words
Explanation: Functions
We define several functions that we’ll use to process our data.
tokenize()
The tokenize() function uses pulldom to read the XML input and create a list of word tokens for use in collation. We care only about <w> elements inside <p> elements, so we keep track of where we are with the in_p variable. There are <p> elements that we don’t care about (e.g., in the <teiHeader>), but our XSLT didn’t introduce <w> elements into them, so we don’t have to worry about them.
End of explanation
def create_token(word):
return {"t": word, "n": normalize(word)}
Explanation: create_token()
The create_token() function builds a dictionary with "t" and "n" values for CollateX.
End of explanation
def normalize(word_token):
return RE_MARKUP.sub('', word_token)
Explanation: normalize()
The normalize() function creates an "n" value from a "t" value by stripping out markup.
End of explanation
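# Illustrative check (not in the original notebook): "t" keeps the flattened markup,
# while "n" is the markup-free form used for collation.
demo_word = '<milestone unit="add" type="start"/>the<milestone unit="add" type="end"/> '
print(create_token(demo_word))  # the "n" value is simply 'the '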
collatex_witness_input = []
witnesses = os.scandir('word_tagged')
for witness_file in witnesses:
witness_siglum = witness_file.name.split('.xml')[0]
with open(witness_file, 'r', encoding='utf-8') as f:
witness_xml = f.read()
word_tokens = tokenize(witness_xml)
token_list = [create_token(token) for token in word_tokens]
witness_data = {"id": witness_siglum, "tokens": token_list}
collatex_witness_input.append(witness_data)
Explanation: Process the witnesses
We create a list (collatex_witness_input) to hold the data for the four witnesses (CollateX will require this), and we process each witness in a loop, appending the results of the processing to the list. We extract a siglum value from the witness filename.
End of explanation
collatex_json_input = {"witnesses": collatex_witness_input}
Explanation: Collate the witness information and output the results
We include the witness information in the JSON structure that CollateX expects as JSON input, and then generate several types of output.
End of explanation
table = collate(collatex_json_input, layout="vertical", segmentation=False)
print(table)
Explanation: Plain text table
The plain text table outputs the "t" values in the cells.
End of explanation
json_output = collate(collatex_json_input, output="json")
print(json_output)
Explanation: JSON output
The JSON output lets us confirm that our normalization is working correctly.
End of explanation
collate(collatex_json_input, output="html2")
Explanation: html2
html2 output is color coded, to make it easy to see agreements and disagreements. html2 output is always vertical.
End of explanation
collate(collatex_json_input, output="svg")
Explanation: Variant graph
The first version of the variant graph groups the tokens according to blocks of agreement and disagreement.
End of explanation
collate(collatex_json_input, segmentation=False, output="svg")
Explanation: We can turn off segmentation (grouping) to see each token in its own node.
End of explanation
xml_output = parseString(collate(collatex_json_input, output="xml"))
print(xml_output.toprettyxml())
Explanation: We can generate generic XML markup, suitable for an interlinear collation, and pretty-print it with:
End of explanation |
15,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
"""
x=np.linspace(-1.0,1.0,size)
if sigma==0.0: #worked with Jack Porter to find N(o,sigma) and to work out sigma 0.0 case also explained to him list comprehension
y=np.array([i*m+b for i in x]) #creates an array of y values
else:
# N=1/(sigma*np.pi**.5)*np.exp(-(x**2)/(2*sigma**2)) #incorrectly thought this would need to be the N(0,sigma)
y=np.array([i*m+b+np.random.normal(0,sigma**2) for i in x]) #creates an array of y values for each value of x so that y has gaussian noise
return x,y
# plt.plot(x,y,'b' )
# plt.box(False)
# plt.axvline(x=0,linewidth=.2,color='k')
# plt.axhline(y=0,linewidth=.2,color='k')
# ax=plt.gca()
# ax.get_xaxis().tick_bottom()
# ax.get_yaxis().tick_left()
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
x,y=random_line(m,b,sigma,size) #worked with Jack Porter, before neither of us reassigned x,y
plt.plot(x,y,color )
plt.box(False)
plt.axvline(x=0,linewidth=.2,color='k')
plt.axhline(y=0,linewidth=.2,color='k')
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
ax=plt.gca()
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.xlabel('x')
plt.ylabel('y')
plt.title('Line w/ Gaussian Noise')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line, m=(-10.0,10.0,0.1),b=(-5.0,5.0,0.1),sigma=(0.0,5.0,0.1),size=(10,100,10), color={'red':'r','green':'g','blue':'b'})
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
15,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expressions
A regular expression (RegEx) is a sequence of chatacters that expresses a pattern to be searched withing a longer piece of text. re is a Python library for regular expressions, which has several nice methods for working with strings. The list of Frequenctly used characters is available on Moodle and the course page on GitHub.
Sometimes people use RegEx for scraping the web, yet one is not encouraged to do so, as better/safer alternatives exist. Yet, once text data is scraped, RegEx is an important tool for cleaningsand tidying up the dataset.
Let's now go to Project Gutenberg page and find some book to download. Below, I used as an example the Financier by Theodore Dreiser. The latter can be downloaded and read from local stroage or directly from the URL. We will go for the "download and read" option.
Step1: Let's see how many times Mr. Dreiser uses the $ sign in his book. For that purpose, the findall() function from the re library will be used. The function receives the expression to search for asa first argument (always inside quotes) and the string to conduct the search on as a second argument. Please, note
Step2: Let's see at what occasions he used it. More precicely, let's read the amount of money cited in the book. Amount usually comes after the sign, so we will look for all non-whitespace characters after the dollar sign that are followed by a whitespace (that's where the amoun ends). The brackets indicate the component we want to receive as an output.
Step3: Let's use the | operator (i.e. or) to understand how many $ or @ signs were used by Mr. Dreiser.
Step4: Let's see how many times the word euro is used. Yet, we do not know whether the author typed Euro with a capital letter or not. So we will have to search both. If we simply put () Python will think that's the text we need to receive. So we must explicitly mention (using ?
Step5: Of course, there is an easier approac using flags
Step6: Now about substitution. If you want to find some text in the file and substitute it with something else, then re.sub command may come in handy. Let's promote me to Harvard
Step7: When brackets are used in RegEx, they for an enumerated group, that can be further called based on its order (e.g. first part of the string inside brackets fill be enumerated as the group 1). | Python Code:
import re
with open("financier.txt","r") as f:
financier = f.readlines()
print financier[2:4]
type(financier)
Explanation: Regular Expressions
A regular expression (RegEx) is a sequence of characters that expresses a pattern to be searched within a longer piece of text. re is a Python library for regular expressions, which has several nice methods for working with strings. The list of frequently used characters is available on Moodle and the course page on GitHub.
Sometimes people use RegEx for scraping the web, yet one is not encouraged to do so, as better/safer alternatives exist. Yet, once text data is scraped, RegEx is an important tool for cleaning and tidying up the dataset.
Let's now go to the Project Gutenberg page and find some book to download. Below, I used as an example the Financier by Theodore Dreiser. The latter can be downloaded and read from local storage or directly from the URL. We will go for the "download and read" option.
End of explanation
output = re.findall("\$",str(financier))
print output
Explanation: Let's see how many times Mr. Dreiser uses the $ sign in his book. For that purpose, the findall() function from the re library will be used. The function receives the expression to search for asa first argument (always inside quotes) and the string to conduct the search on as a second argument. Please, note:
as financier is a list, we convert it to string to be able to pass as an argument to our fucntion,
as dollar sign is a special character for RegEx, we use the forward slach before to indicate that in this case we do not use "$" as a special character, instead it is just a text.
End of explanation
output = re.findall("(\$\S*)\s",str(financier))
print output
Explanation: Let's see at what occasions he used it. More precisely, let's read the amounts of money cited in the book. The amount usually comes after the sign, so we will look for all non-whitespace characters after the dollar sign that are followed by a whitespace (that's where the amount ends). The brackets indicate the component we want to receive as an output.
End of explanation
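# Illustrative refinement (not in the original notebook): capture only the numeric part
# of each amount, dropping trailing punctuation.
amounts = re.findall(r"\$([0-9][0-9,.]*)", str(financier))
print(amounts[:10])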
output = re.findall("(@|\$)",str(financier))
print output
Explanation: Let's use the | operator (i.e. or) to understand how many $ or @ signs were used by Mr. Dreiser.
End of explanation
output = re.findall("(?:E|e)uro",str(financier))
print output
Explanation: Let's see how many times the word euro is used. Yet, we do not know whether the author typed Euro with a capital letter or not. So we will have to search both. If we simply put () Python will think that's the text we need to receive. So we must explicitly mention (using ?:) that the text inside brackets is only for OR function, still not meaning that it is the only part of text we want to receive.
End of explanation
output = re.findall("euro",str(financier),re.IGNORECASE)
print output
Explanation: Of course, there is an easier approach using flags
End of explanation
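# Equivalent inline-flag form (illustrative): (?i) turns on case-insensitive matching
# without passing re.IGNORECASE separately.
output = re.findall("(?i)euro", str(financier))
print(output)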
sample_text = "My email is [email protected]"
# Let's match e-mail first
output = re.findall('\S+@.+',sample_text)
print output
Explanation: Now about substitution. If you want to find some text in the file and substitute it with something else, then re.sub command may come in handy. Let's promote me to Harvard:
End of explanation
# Let's now promote me to Harvard
print re.sub(r'(\S+@)(.+)', r'\1harvard.edu', sample_text)
Explanation: When brackets are used in RegEx, they form an enumerated group that can be further referenced based on its order (e.g. the first part of the string inside brackets will be enumerated as group 1).
End of explanation |
15,040 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential or logarithmic). | Problem:
import numpy as np
import scipy.optimize
y = np.array([1, 7, 20, 50, 79])
x = np.array([10, 19, 30, 35, 51])
p0 = (4, 0.1, 1)
result = scipy.optimize.curve_fit(lambda t,a,b, c: a*np.exp(b*t) + c, x, y, p0=p0)[0] |
15,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Responding to the Raspberry Pi GPIO pins
With this IPython Notebook we connect hardware to software. We will connect a button to the General Purpose Input/Output (GPIO) pins* on the Raspberry Pi and have the Pi execute a function whenever the button is pressed.
GPIO pins are the 40 (numbered) pins opposite the HDMI/display connector, to which electronic components can be connected.
But first we create the function we want to call. There are already a few sounds in the form of .wav files on the Pi (i.e. something like MP3 files) and there is a library on the Pi that can play these sounds (the pygame library, which is used to build games in Python)
So we import the mixer module from the pygame library and initialize it
Step1: We ask the pygame library to load a .wav file into memory and turn it into a sound
(and because we cannot resist, we play it once right away; don't forget to connect speakers or headphones)
note
Step2: We can put the play() method of the drum sound into a function of our own, so that we can reuse it later.
Step3: note
Step4: High time to connect a button. We use the BCM numbering, as engraved on the case and printed on the GPIO pinout document.
Use the illustration below as a guide
Step5: And with all of this behind us, we can ask Python to register an event.
Every time a "FALLING" event is detected on pin 17, Python must call the previously created play function. Moreover, even though we do not use it yet, Python will pass the pin number of the pin that triggered the event to the play function.
Because the add_event_detect function must be able to tell the play function which pin detected the signal, we first add a pin_number parameter to the play function
Step6: Try it out by pressing the button.
If you are happy with the result, you can remove the event again with the following function
Step7: Ok, this was so much fun that we might as well do another one
Step8: Only this time we take a slightly different approach. We will store the sounds in a dictionary, in which we link the pin number to the sound we want to play
Step9: Shall we give it a try?
Step10: This means that, instead of writing two functions and registering them separately, we can get by with a single function, by using the fact that the pin number is passed to the play function together with the dictionary that tells us which sound belongs to it.
Step11: See below for the circuit diagram that can be built
Step12: Music time!
try it out...
And then we can remove the event registrations again in the same way. | Python Code:
import pygame.mixer
pygame.mixer.init()
Explanation: Responding to the Raspberry Pi GPIO pins
With this IPython Notebook we connect hardware to software. We will connect a button to the General Purpose Input/Output (GPIO) pins* on the Raspberry Pi and have the Pi execute a function whenever the button is pressed.
GPIO pins are the 40 (numbered) pins opposite the HDMI/display connector, to which electronic components can be connected.
But first we create the function we want to call. There are already a few sounds in the form of .wav files on the Pi (i.e. something like MP3 files) and there is a library on the Pi that can play these sounds (the pygame library, which is used to build games in Python)
So we import the mixer module from the pygame library and initialize it:
Instructions:
Place your cursor in the cell below and press Shift+Enter, or click the Play button in the menu bar at the top, to run the code in the cell.
Shift + Enter: run the cell and jump to the next cell
Ctrl + Enter: run the cell, but stay on the current cell
Alt + Enter: run the cell and create a new cell
As long as a [*] is shown to the left of the cell, the code is still running. As soon as the code has finished, a sequence number appears and any output is printed below the cell
End of explanation
drum = pygame.mixer.Sound("/opt/sonic-pi/etc/samples/drum_tom_mid_hard.wav")
drum.play()
Explanation: We ask the pygame library to load a .wav file into memory and turn it into a sound
(and because we cannot resist, we play it once right away; don't forget to connect speakers or headphones)
note
End of explanation
def play():
print("Drumroffel !")
drum.play()
Explanation: De play() methode van het drum geluid kunnen we in een zelfgemaakte functie stoppen, zodat we ze later kunnen hergebruiken.
End of explanation
play()
Explanation: nota: de "pin" parameter gebruiken we (nog) niet, maar verderop hebben we ze wel nodig
Laat ons de play functie al eens aanroepen met een willekeurige pin waarde.
End of explanation
#GPIO bibliotheek inladen
import RPi.GPIO as GPIO
#Methode van pin nummering (BCM) instellen
GPIO.setmode(GPIO.BCM)
#pin 17 activeren als input en de ingebouwde pull-up/pull-down instellen als pull-up
GPIO.setup(17, GPIO.IN, GPIO.PUD_UP)
Explanation: Hoog tijd om een knop aan te sluiten. We gebruiken de BCM nummering, zoals gegraveerd op de case en geprint op het GPIO pinout document.
Gebruik de onderstaande illustratie als leidraad:
<img src="MusicBox01.png" height="300"/>
End of explanation
def play(pin_number):
print("Drumroffel !")
drum.play()
GPIO.add_event_detect(17, GPIO.FALLING, play, 200)
Explanation: En met dit alles achter de rug kunnen we Python vragen een event te registreren.
Telkens er een "FALLING" event gedetecteerd wordt op pin 17, moet Python de eerder gecreëerde functie play aanroepen. Bovendien zal Python, ook al gebruiken we het nog niet, het pin nummer van de pin die het event veroorzaakte, meegeven aan de play functie.
Omdat de add_event_detect functie aan de play functie moet kunnen vertellen welke pin het signaal heeft gedetecteerd, voegen we wel eerst nog een pin_number parameter toe aan de play functie:
End of explanation
GPIO.remove_event_detect(17)
Explanation: Test maar uit door op de knop te drukken.
Als je tevreden bent van het resultaat, kan je het event weer verwijderen met de volgende functie:
End of explanation
cymbal = pygame.mixer.Sound("/opt/sonic-pi/etc/samples/drum_cymbal_open.wav")
cymbal.play()
Explanation: Ok, dit was zo leuk dat we er nog wel eentje kunnen doen:
End of explanation
sound_pins = {
17: drum,
27: cymbal,
}
Explanation: Alleen pakken we het ditmaal lichtjes anders aan. We zullen de geluiden opslaan in een dictionary, waarbij we het pin nummer koppelen aan het geluid dat we willen laten horen:
End of explanation
sound_pins[17].play()
Explanation: Eens proberen?
End of explanation
def play(pin):
sound = sound_pins[pin]
print("Geluid spelen voor pin %s" % pin)
sound.play()
Explanation: Dat maakt dat we, in plaats van twee functies te maken die we apart registreren, we met één functie kunnen volstaan door gebruik te maken van het feit dat het pin nummer wordt doorgegeven aan de play functie en de dictionary die ons vertelt welk geluid erbij hoort.
End of explanation
for pin in sound_pins:
GPIO.setup(pin, GPIO.IN, GPIO.PUD_UP)
GPIO.add_event_detect(pin, GPIO.FALLING, play, 200)
Explanation: Zie hieronder het schema dat nagebouwd kan worden:
<img src="MusicBox02.png" height="300" />
Ook het opzetten van de pins en de registratie van een event kunnen we zo "aan de lopende band" doen door ze in een loop te stoppen die één keer uitgevoerd wordt voor elke pin in de dictionary (twee dus, in dit geval, maar het zou een hele piano kunnen zijn)
End of explanation
for pin in sound_pins:
GPIO.remove_event_detect(pin)
#Alle GPIO instellingen weer opkuisen
GPIO.cleanup()
Explanation: Music time!
probeer maar uit...
En dan kunnen we op dezelfde manier de event registraties weer verwijderen.
End of explanation |
15,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Environment Preparation
Install Java 8
Run the cell on the Google Colab to install jdk 1.8.
Note
Step2: Install BigDL Orca
Conda is needed to prepare the Python environment for running this example.
Note
Step3: You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca[automl].
Step4: Automated hyper-parameter search for PyTorch using Orca APIs
In this guide we will describe how to enable automated hyper-parameter search for PyTorch using Orca AutoEstimator in 5 simple steps.
Step5: Step 1
Step6: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note
Step7: After defining your model, you need to define a Model Creator Function that returns an instance of your model, and a Optimizer Creator Function that returns a PyTorch optimizer. Note that both the Model Creator Function and the Optimizer Creator Function should take config as input and get the hyper-parameter values from config.
Step8: Step 3
Step9: Step 4
Step10: Step 5
Step11: Next, use the auto estimator to fit and search for the best hyper-parameter set.
Step12: Finally, you can get the best learned model and the best hyper-parameters.
Step13: You can use the best learned model and the best hyper-parameters as you want. Here, we demonstrate how to evaluate on the test dataset.
Step14: You can find the accuracy of the best model has reached 98%. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
Explanation: <a href="https://colab.research.google.com/github/intel-analytics/BigDL/blob/branch-2.0/python/orca/colab-notebook/quickstart/autoestimator_pytorch_lenet_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2016 The BigDL Authors.
End of explanation
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
Explanation: Environment Preparation
Install Java 8
Run the cell on the Google Colab to install jdk 1.8.
Note: if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer).
End of explanation
import sys
# Set current python version
python_version = "3.7.10"
# Install Miniconda
!wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
# Update Conda
!conda install --channel defaults conda python=$python_version --yes
!conda update --channel defaults --all --yes
# Append to the sys.path
_ = (sys.path
.append(f"/usr/local/lib/python3.7/site-packages"))
os.environ['PYTHONHOME']="/usr/local"
Explanation: Install BigDL Orca
Conda is needed to prepare the Python environment for running this example.
Note: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the install guide for more details.
End of explanation
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca[automl]
# Install python dependencies
!pip install torch==1.8.1 torchvision==0.9.1
Explanation: You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca[automl].
End of explanation
# import necesary libraries and modules
from __future__ import print_function
import os
import argparse
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca import OrcaContext
Explanation: Automated hyper-parameter search for PyTorch using Orca APIs
In this guide we will describe how to enable automated hyper-parameter search for PyTorch using Orca AutoEstimator in 5 simple steps.
End of explanation
# recommended to set it to True when running BigDL in Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=4, memory="2g", init_ray_on_spark=True) # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4, init_ray_on_spark=True) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", init_ray_on_spark=True,
driver_memory="10g", driver_cores=1) # run on Hadoop YARN cluster
Explanation: Step 1: Init Orca Context
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self, fc1_hidden_size=500):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, fc1_hidden_size)
self.fc2 = nn.Linear(fc1_hidden_size, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
criterion = nn.NLLLoss()
Explanation: This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster.
Step 2: Define the Model
You may define your model, loss and optimizer in the same way as in any standard PyTorch program.
End of explanation
def model_creator(config):
model = LeNet(fc1_hidden_size=config["fc1_hidden_size"])
return model
def optim_creator(model, config):
return torch.optim.Adam(model.parameters(), lr=config["lr"])
Explanation: After defining your model, you need to define a Model Creator Function that returns an instance of your model, and a Optimizer Creator Function that returns a PyTorch optimizer. Note that both the Model Creator Function and the Optimizer Creator Function should take config as input and get the hyper-parameter values from config.
End of explanation
import torch
from torchvision import datasets, transforms
torch.manual_seed(0)
dir = '/tmp/dataset'
test_batch_size = 640
def train_loader_creator(config):
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=config["batch_size"], shuffle=True)
return train_loader
def test_loader_creator(config):
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=test_batch_size, shuffle=False)
return test_loader
Explanation: Step 3: Define Dataset
You can define the train and validation datasets using Data Creator Function that has one parameter of config and returns a PyTorch DataLoader.
End of explanation
from bigdl.orca.automl import hp
search_space = {
"fc1_hidden_size": hp.choice([500, 600]),
"lr": hp.choice([0.001, 0.003]),
"batch_size": hp.choice([160, 320, 640]),
}
Explanation: Step 4: Define search space
You should define a dictionary as your hyper-parameter search space. The keys are hyper-parameter names which should be the same with those in your creators, and you can specify how you want to sample each hyper-parameter in the values of the search space. See automl.hp for more details.
End of explanation
from bigdl.orca.automl.auto_estimator import AutoEstimator
auto_est = AutoEstimator.from_torch(model_creator=model_creator,
optimizer=optim_creator,
loss=criterion,
logs_dir="/tmp/orca_automl_logs",
resources_per_trial={"cpu": 2},
name="lenet_mnist")
Explanation: Step 5: Automatically fit and search with Orca AutoEstimator
First, create an AutoEstimator. You can refer to AutoEstimator API doc for more details.
End of explanation
auto_est.fit(data=train_loader_creator,
validation_data=test_loader_creator,
search_space=search_space,
n_sampling=2,
epochs=1,
metric="accuracy")
Explanation: Next, use the auto estimator to fit and search for the best hyper-parameter set.
End of explanation
best_model = auto_est.get_best_model()
best_config = auto_est.get_best_config()
print(best_config)
Explanation: Finally, you can get the best learned model and the best hyper-parameters.
End of explanation
test_loader = test_loader_creator(best_config)
best_model.eval()
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = best_model(data)
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).sum().numpy()
accuracy = 100. * correct / len(test_loader.dataset)
print(f"accuracy is {accuracy}%")
Explanation: You can use the best learned model and the best hyper-parameters as you want. Here, we demonstrate how to evaluate on the test dataset.
End of explanation
# stop orca context when program finishes
stop_orca_context()
Explanation: You can find that the accuracy of the best model has reached 98%.
End of explanation |
15,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A note about sigmas
We are regularly asked about the "sigma" levels in the 2D histograms. These are not the 68%, etc. values that we're used to for 1D distributions. In two dimensions, a Gaussian density is given by
Step1: First, plot this using the correct (default) 1-sigma level
Step2: Compare this to the 68% mass level and specifically compare to how the contour compares to the marginalized 68% quantile | Python Code:
import corner
import numpy as np
import matplotlib.pyplot as pl
# Generate some fake data from a Gaussian
np.random.seed(42)
x = np.random.randn(50000, 2)
Explanation: A note about sigmas
We are regularly asked about the "sigma" levels in the 2D histograms. These are not the 68%, etc. values that we're used to for 1D distributions. In two dimensions, a Gaussian density is given by:
pdf(r) = exp(-(r/s)^2/2) / (2*pi*s^2)
The integral under this density (using polar coordinates and implicitly integrating out the angle) is:
cdf(x) = Integral(r * exp(-(r/s)^2/2) / s^2, {r, 0, x})
= 1 - exp(-(x/s)^2/2)
This means that within "1-sigma", the Gaussian contains 1-exp(-0.5) ~ 0.393 or 39.3% of the volume. Therefore the relevant 1-sigma levels for a 2D histogram of samples is 39% not 68%. If you must use 68% of the mass, use the levels keyword argument when you call corner.corner.
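# Quick numeric check of the claim above (illustrative): the 2D "1-sigma" level encloses
# 1 - exp(-0.5) of the probability mass.
print(1 - np.exp(-0.5))  # ~0.393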
We can visualize the difference between sigma definitions:
End of explanation
fig = corner.corner(x, quantiles=(0.16, 0.84), levels=(1-np.exp(-0.5),))
fig.suptitle("correct `one-sigma' level");
Explanation: First, plot this using the correct (default) 1-sigma level:
End of explanation
fig = corner.corner(x, quantiles=(0.16, 0.84), levels=(0.68,))
fig.suptitle("incorrect `one-sigma' level");
Explanation: Compare this to the 68% mass level and specifically compare to how the contour compares to the marginalized 68% quantile:
End of explanation |
15,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conditional Execution
Boolean Expressions
We introduce a new value type, the boolean. A boolean can have one of two values
Step1: Comparison Operators
You can compare values together and get a boolean result
| Operator | Meaning|
|------|------|
| x == y | x equal to y |
| x != y | x not equal to y |
| x > y | x greater than y |
| x < y | x less than y |
| x >= y | x greater than or equal to y |
| x <= y | x less than or equal to y |
| x is y | x is the same as y |
| x is not y | x is not the same as y |
By using the operators in an expression the result evaluates to a boolean. x and y can be any type of value
Step2: TRY IT
See if 5.0000001 is greater than 5
Conditional Execution
We can write programs that change their behavior depending on the conditions.
We use an if statement to run a block of code if a condition is true. It won't run if the condition is false.
if (condition)
Step3: You can include more than one statement in the block of code in the if statement. You can tell python that this code should be part of the if statement by indenting it. This is called a 'block' of code
if (condition)
Step4: In alternative execution, there are two possibilities. One that happens if the condition is true, and one that happens if it is false. It is not possible to have both execute.
You use if/else syntax
if (condition)
Step5: Chained conditionals allow you to check several conditions. Only one block of code will ever run, though.
To run a chained conditional, you use if/elif/else syntax. You can use as many elifs as you want.
if (condition1)
Step6: TRY IT
Check if did_homework is true, if so, print out "You can play a video game", otherwise print out "Go get your backpack"
Logical Operators
Logical operators allow you to combine two or more booleans. They are and, or, not
and Truth table (only true if both values are true)
| val1 | val2 | val1 and val2 |
|-- |------|------|
| true | true | true |
| true | false | false |
| false | true | false |
| false | false | false |
or Truth table (true if at least 1 value is true)
| val1 | val2 | val1 or val2 |
|------|------|------|
| true | true | true |
| true | false | true |
| false | true | true |
| false | false | false |
not Truth table (the opposite of the value)
| val1 | not val1 |
|------|------|
| true | false |
| false | true |
Step7: You can use the logical operators in if statements
Step8: TRY IT
Check if the room is clean or the trash is taken out and if so print "Here is your allowance"
Nested Conditionals
You can nest conditional branches inside another. You just indent each level more.
if (condition)
Step9: Catching exceptions using try and except
You can put code into a try/except block. If the code has an error in the try block, it will stop running and go to the except block. If there is no error, the try block completes and the except block never runs.
try
Step10: A finally statment can be added to the try/except block and it executes at the end whether or not there is an error. This can be useful for cleaning up resources.
try
Step11: This can be useful when evaluating a user's input, to make sure it is what you expected.
Step12: TRY IT
Try converting the string 'candy' into an integer. If there is an error, print "What did you think would happen?"
Short-circuit evaluation of logical expressions
Python (and most other languages) are very lazy about logical expressions. As soon as it knows the value of the whole expression, it stops evaluating the expression.
if (condition1 and condition2) | Python Code:
cleaned_room = True
took_out_trash = False
print(cleaned_room)
print(type(took_out_trash))
Explanation: Conditional Execution
Boolean Expressions
We introduce a new value type, the boolean. A boolean can have one of two values: True or False
End of explanation
print(5 == 6)
print("Coke" != "Pepsi")
# You can compare to variables too
allowance = 5
print(5 >= allowance)
print(allowance is True)
Explanation: Comparison Operators
You can compare values together and get a boolean result
| Operator | Meaning|
|------|------|
| x == y | x equal to y |
| x != y | x not equal to y |
| x > y | x greater than y |
| x < y | x less than y |
| x >= y | x greater than or equal to y |
| x <= y | x less than or equal to y |
| x is y | x is the same as y |
| x is not y | x is not the same as y |
By using the operators in an expression the result evaluates to a boolean. x and y can be any type of value
End of explanation
# cleaned_room is true
if cleaned_room:
print("Good kid! You can watch TV.")
# took_out_trash is false
if took_out_trash:
print("Thank you!")
print(took_out_trash)
Explanation: TRY IT
See if 5.0000001 is greater than 5
Conditional Execution
We can write programs that change their behavior depending on the conditions.
We use an if statement to run a block of code if a condition is true. It won't run if the condition is false.
if (condition):
code_to_execute # if condition is true
In python indentation matters. The code to execute must be indented (4 spaces is best, though I like tabs) more than the if condition.
End of explanation
# cleaned_room is true
if cleaned_room:
print("Good job! You can watch TV.")
print("Or play outside")
# took_out_trash is false
if took_out_trash:
print("Thank you!")
print("You are a good helper")
print("It is time for lunch")
Explanation: You can include more than one statement in the block of code in the if statement. You can tell python that this code should be part of the if statement by indenting it. This is called a 'block' of code
if (condition):
statement1
statement2
statement3
You can tell python that the statement is not part of the if block by dedenting it to the original level
if (condition):
statement1
statement2
statement3 # statement3 will run even if condition is false
End of explanation
candies_taken = 4
if candies_taken < 3:
print('Enjoy!')
else:
print('Put some back')
Explanation: In alternative execution, there are two possibilities. One that happens if the condition is true, and one that happens if it is false. It is not possible to have both execute.
You use if/else syntax
if (condition):
code_runs_if_true
else:
code_runs_if_false
Again, note the colons and spacing. These are necessary in python.
End of explanation
did_homework = True
took_out_trash = True
cleaned_room = False
allowance = 0
if cleaned_room:
allowance = 10
elif took_out_trash:
allowance = 5
elif did_homework:
allowance = 4
else:
allowance = 2
print(allowance)
Explanation: Chained conditionals allow you to check several conditions. Only one block of code will ever run, though.
To run a chained conditional, you use if/elif/else syntax. You can use as many elifs as you want.
if (condition1):
run_this_code1
elif (condition2):
run_this_code2
elif (condition3):
run_this_code3
else:
run_this_code4
You are not required to have an else block.
if (condition1):
run_this_code1
elif (condition2):
run_this_code2
Each condition is checked in order. If the first is false, the next is checked, and so on. If one of them is true, the corresponding branch executes, and the statement ends. Even if more than one condition is true, only the first true branch executes.
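A small sketch of that behavior (score is just a made-up example value):
score = 95
if score > 50:
    print("You passed")       # this branch runs
elif score > 90:
    print("You got an A")     # never runs, even though score > 90 is also true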
End of explanation
print(True and True)
print(False or True)
print(not False)
Explanation: TRY IT
Check if did_homework is true, if so, print out "You can play a video game", otherwise print out "Go get your backpack"
Logical Operators
Logical operators allow you to combine two or more booleans. They are and, or, not
and Truth table (only true if both values are true)
| val1 | val2 | val1 and val2 |
|------|------|------|
| true | true | true |
| true | false | false |
| false | true | false |
| false | false | false |
or Truth table (true if at least 1 value is true)
| val1 | val2 | val1 or val2 |
|------|------|------|
| true | true | true |
| true | false | true |
| false | true | true |
| false | false | false |
not Truth table (the opposite of the value)
| val1 | not val1 |
|------|------|
| true | false |
| false | true |
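Combining the operators follows the same tables, for example (a tiny made-up example):
is_weekend = True
have_homework = False
print(is_weekend and not have_homework)   # True
print(not is_weekend or have_homework)    # False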
End of explanation
cleaned_room = True
took_out_trash = False
if cleaned_room and took_out_trash:
print("Let's go to Chuck-E-Cheese's.")
else:
print("Get to work!")
if not did_homework:
print("You're going to get a bad grade.")
Explanation: You can use the logical operators in if statements
End of explanation
allowance = 5
if allowance > 2:
if allowance >= 8:
print("Buy toys!")
else:
print("Buy candy!")
else:
print("Save it until I have enough to buy something good.")
Explanation: TRY IT
Check if the room is clean or the trash is taken out and if so print "Here is your allowance"
Nested Conditionals
You can nest conditional branches inside another. You just indent each level more.
if (condition):
run_this
else:
if (condition2):
run_this2
else:
run_this3
Avoid nesting too deep, it becomes difficult to read.
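For example, the allowance check in this section could be flattened with elif and a combined condition (a sketch with the same logic):
if allowance > 2 and allowance >= 8:
    print("Buy toys!")
elif allowance > 2:
    print("Buy candy!")
else:
    print("Save it until I have enough to buy something good.")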
End of explanation
try:
print("Before")
y = 5/0
print("After")
except:
print("I'm sorry, the universe doesn't work that way...")
Explanation: Catching exceptions using try and except
You can put code into a try/except block. If the code has an error in the try block, it will stop running and go to the except block. If there is no error, the try block completes and the except block never runs.
try:
code
except:
code_runs_if_error
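A bare except catches every kind of error; it is usually better to name the exception you expect (a small sketch):
try:
    y = 5/0
except ZeroDivisionError:
    print("You can't divide by zero")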
End of explanation
try:
print("Before")
y = 5/0
print("After")
except:
print("I'm sorry, the universe doesn't work that way...")
finally:
print("Even if you break python, you still have to do your chores")
Explanation: A finally statement can be added to the try/except block and it executes at the end whether or not there is an error. This can be useful for cleaning up resources.
try:
code
except:
code_runs_if_error
finally:
code runs at end with or without error
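For example, finally is a natural place to close a file no matter what happened (a sketch; 'scores.txt' is just a made-up file name):
f = open('scores.txt')
try:
    total = sum(int(line) for line in f)
    print(total)
except ValueError:
    print("The file has something that isn't a number")
finally:
    f.close()   # runs whether or not there was an error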
End of explanation
inp = input('Enter Fahrenheit Temperature:')
try:
fahr = float(inp)
cel = (fahr - 32.0) * 5.0 / 9.0
print(cel)
except:
print('Please enter a number')
Explanation: This can be useful when evaluating a user's input, to make sure it is what you expected.
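You can also keep asking until the input is valid (a small sketch that reuses the Fahrenheit conversion):
while True:
    inp = input('Enter Fahrenheit Temperature:')
    try:
        fahr = float(inp)
        break
    except:
        print('Please enter a number')
print((fahr - 32.0) * 5.0 / 9.0)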
End of explanation
if (1 < 2) or (5/0):
print("How did we do that?")
Explanation: TRY IT
Try converting the string 'candy' into an integer. If there is an error, print "What did you think would happen?"
Short-circuit evaluation of logical expressions
Python (and most other languages) are very lazy about logical expressions. As soon as it knows the value of the whole expression, it stops evaluating the expression.
if (condition1 and condition2):
run_code
In the above example, if condition1 is false then condition2 is never evaluated.
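This laziness is often used on purpose to guard a risky expression (a small sketch):
x = 0
if x != 0 and 10/x > 1:
    print("big enough")
else:
    print("x is zero or too small")   # 10/x is never evaluated when x == 0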
End of explanation |
15,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A sample of plots
The idea of this notebook is to show off a number of plot types, and act as a simple check of the plotting output. It requires matplotlib and does not attempt to describe the plots (see the help for the plot constructor for this!).
Step1: One dimensional data plots
Step2: We can have some fun with the plot options (these are a mixture of generic options, such as xlog, and ones specific to the plotting backend - which here is matplotlib - such as marker).
Step3: The plot object contains the preferences - here we look at the default plot settings. Note that some plot types have different - and even multiple - preference settings.
Step4: Error bars - here on the dependent axis - can be displayed too
Step5: Histogram-style data (with low and high edges) are handled similarly
Step6: If we want to see the data drawn "like a histogram" then we need to set the linestyle attribute
Step7: The histogram-style plots are an example of a plot using a different name for the preference settings, in this case histo_prefs
Step8: Previously we explicitly set the error values, but we can also use one of the chi-square statistics to come up with error values. In this case it's just the square-root of the data value (so, for $x \sim 1$ bin, we have an error of 0)
Step9: PHA-related plots
We start with an ARF from a rather simple instrument. This time we also use the SplitPlot class to create multiple plots (although this could be done just as easily with matplotlib functions)
Step10: The preferences for the split plot have a different "flavor" to the other types
Step11: It does allow us to tweak the plot layout
Step12: A PHA, which matches the ARF, can be created (with a sinusoidal pattern just to show something different)
Step13: Adding the ARF to the data allows us to change to energy units
Step14: Grouping the data - in this case in 20-channel groups - allows us to check the "x errorbar" handling (the 'errors' here just indicate the bin width, and so match the overplotted orange line)
Step15: We can see how a model looks for this dataset - in this case a simple sinusoidal model which is multiplied by the ARF (shown earlier), and so is not going to match the data.
Step16: Note that the ModelHistogram class does not use the grouping of the PHA dataset, so it shows the model evaluated per channel
Step17: The discontinuity at 4 keV is because of the step function in the ARF (200 cm$^2$ below this energy and 100 cm$^2$ above it).
The ModelPHAHistogram class does group the model to match the data
Step18: Object-less plots
There are a number of plot classes that don't need a data object, such as scatter plots
Step19: and cumulative plots | Python Code:
import numpy as np
%matplotlib inline
from sherpa import data
from sherpa.astro import data as astrodata
from sherpa import plot
from sherpa.astro import plot as astroplot
Explanation: A sample of plots
The idea of this notebook is to show off a number of plot types, and act as a simple check of the plotting output. It requires matplotlib and does not attempt to describe the plots (see the help for the plot constructor for this!).
End of explanation
x1 = np.asarray([100, 200, 600, 1200])
y1 = np.asarray([2000, 2100, 1400, 3050])
d1 = data.Data1D('oned', x1, y1)
plot1 = plot.DataPlot()
plot1.prepare(d1)
plot1.plot()
Explanation: One dimensional data plots
End of explanation
plot1.plot(xlog=True, linestyle='dotted', marker='*', markerfacecolor='orange', markersize=20, color='black')
Explanation: We can have some fun with the plot options (these are a mixture of generic options, such as xlog, and ones specific to the plotting backend - which here is matplotlib - such as marker).
End of explanation
plot.DataPlot.plot_prefs
Explanation: The plot object contains the preferences - here we look at the default plot settings. Note that some plot types have different - and even multiple - preference settings.
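These preference dictionaries can be edited to change the defaults picked up by later plot calls (a minimal sketch, assuming the plot1 object created above and the key names shown in the output; the preferences may be shared by plot objects of the same class):
plot1.plot_prefs['xlog'] = True
plot1.plot_prefs['marker'] = 'o'
plot1.plot()   # now uses the updated defaults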
End of explanation
dy1 = np.asarray([100, 50, 200, 300])
d2 = data.Data1D('errors', x1, y1, dy1)
plot2 = plot.DataPlot()
plot2.prepare(d2)
plot2.plot()
plot2.plot(capsize=4)
Explanation: Error bars - here on the dependent axis - can be displayed too:
End of explanation
xlo2 = np.asarray([0.1, 0.2, 0.4, 0.8, 1.5])
xhi2 = np.asarray([0.2, 0.4, 0.6, 1.1, 2.0])
y2 = np.asarray([10, 12, 3, 0, 4])
data3 = data.Data1DInt('int1', xlo2, xhi2, y2)
plot3 = plot.DataHistogramPlot()
plot3.prepare(data3)
plot3.plot(xlog=True)
Explanation: Histogram-style data (with low and high edges) are handled similarly:
End of explanation
plot3.plot(xlog=True, linestyle='solid')
Explanation: If we want to see the data drawn "like a histogram" then we need to set the linestyle attribute:
End of explanation
plot.DataHistogramPlot.histo_prefs
Explanation: The histogram-style plots are an example of a plot using a different name for the preference settings, in this case histo_prefs:
End of explanation
from sherpa.stats import Chi2DataVar
plot4 = plot.DataHistogramPlot()
plot4.prepare(data3, stat=Chi2DataVar())
plot4.plot(linestyle='dashed', marker=None, ecolor='orange', capsize=4)
Explanation: Previously we explicitly set the error values, but we can also use one of the chi-square statistics to come up with error values. In this case it's just the square-root of the data value (so, for $x \sim 1$ bin, we have an error of 0):
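A quick way to check that by hand (a small sketch using the y2 array defined earlier):
np.sqrt(y2)   # array([3.162..., 3.464..., 1.732..., 0., 2.])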
End of explanation
energies = np.arange(0.1, 11, 0.01)
elo = energies[:-1]
ehi = energies[1:]
arf = 100 * np.ones_like(elo)
arf[elo < 4] = 200
darf = astrodata.DataARF('arf', elo, ehi, arf)
aplot = astroplot.ARFPlot()
aplot.prepare(darf)
splot = plot.SplitPlot()
splot.addplot(aplot)
splot.addplot(aplot, xlog=True)
Explanation: PHA-related plots
We start with an ARF from a rather simple instrument. This time we also use the SplitPlot class to create multiple plots (although this could be done just as easily with matplotlib functions):
End of explanation
splot.plot_prefs
Explanation: The preferences for the split plot have a different "flavor" to the other types:
End of explanation
splot.reset()
splot.plot_prefs['hspace'] = 0.6
splot.addplot(aplot)
splot.addplot(aplot, xlog=True)
Explanation: It does allow us to tweak the plot layout:
End of explanation
chans = np.arange(1, len(elo) + 1, dtype=np.int16)
counts = 5 + 5 * np.sin(elo * 4)
counts = counts.astype(int)   # use the builtin int; np.int was removed in recent NumPy versions
dpha = astrodata.DataPHA('pha', chans, counts)
pplot = astroplot.DataPHAPlot()
pplot.prepare(dpha)
pplot.plot()
Explanation: A PHA, which matches the ARF, can be created (with a sinusoidal pattern just to show something different):
End of explanation
dpha.set_arf(darf)
dpha.set_analysis('energy')
pplot.prepare(dpha)
pplot.plot(linestyle='solid', marker=None)
Explanation: Adding the ARF to the data allows us to change to energy units:
End of explanation
dpha.group_bins(20)
pplot.prepare(dpha, stat=Chi2DataVar())
pplot.plot(xerrorbars=True, yerrorbars=True)
pplot.overplot(linestyle='solid', alpha=0.5, marker=None)
Explanation: Grouping the data - in this case in 20-channel groups - allows us to check the "x errorbar" handling (the 'errors' here just indicate the bin width, and so match the overplotted orange line):
End of explanation
from sherpa.models.basic import Sin
from sherpa.astro.instrument import Response1D
mdl = Sin()
mdl.period = 4
# Note that the response information - in this case the ARF and channel-to-energy mapping - needs
# to be applied to the model, which is done by the Response1D class in this example.
#
rsp = Response1D(dpha)
full_model = rsp(mdl)
print(full_model)
Explanation: We can see how a model looks for this dataset - in this case a simple sinusoidal model which is multiplied by the ARF (shown earlier), and so is not going to match the data.
End of explanation
mplot = astroplot.ModelHistogram()
mplot.prepare(dpha, full_model)
mplot.plot()
Explanation: Note that the ModelHistogram class does not use the grouping of the PHA dataset, so it shows the model evaluated per channel:
End of explanation
mplot2 = astroplot.ModelPHAHistogram()
mplot2.prepare(dpha, full_model)
mplot.plot()
mplot2.overplot()
Explanation: The discontinuity at 4 keV is because of the step function in the ARF (200 cm$^2$ below this energy and 100 cm$^2$ above it).
The ModelPHAHistogram class does group the model to match the data:
End of explanation
np.random.seed(1273)
# I've never used the Wald distribution before, so let's see how it looks...
#
z1 = np.random.wald(1000, 20, size=1000)
z2 = np.random.wald(1000, 2000, size=1000)
splot = plot.ScatterPlot()
splot.prepare(z1, z2, xlabel='z$_1$', ylabel='z$_2$', name='(z$_1$, z$_2$)')
splot.plot(xlog=True)
Explanation: Object-less plots
There are a number of plot classes that don't need a data object, such as scatter plots:
End of explanation
cplot = plot.CDFPlot()
cplot.prepare(z1, xlabel='z', name='z')
cplot.plot(xlog=True)
cplot.prepare(z2)
cplot.overplot()
Explanation: and cumulative plots:
End of explanation |
15,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supply curve figures of coal and petroleum resources
Sources I used for parts of this, and other things that might be helpful
Step1: Load coal data
Data is from
Step2: Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
Step3: Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
Step4: Define a function that returns the integer color choice based on the region name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
Step5: color has rgb values for each resource
Step6: Define the edges of the patch objects that will be drawn on the plot
Step7: Make the figure (coal)
Step8: Load oil data
Data is from
Step9: Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
Step10: Create arrays of values with easy to type names
Step11: Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
Step12: Define a function that returns the integer color choice based on the region name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
Step13: Define the edges of the patch objects that will be drawn on the plot
Step14: Make the figure | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
from palettable.colorbrewer.qualitative import Paired_11
Explanation: Supply curve figures of coal and petroleum resources
Sources I used for parts of this, and other things that might be helpful:
- Patches to make a histogram
- Text properties and layout
- Add text with arrow line pointing somewhere (annotate) (you can probably get rid of the arrow head for just a line)
I had trouble finding a default color set with 11 colors for the petroleum data. Eventually discovered Palettable, and am using Paired_11 here. It is not my first choice of color palette, but it is one of the few defaults with that many colors. I generally recommend using one of the default seaborn color palettes such as muted or deep. Check out the seaborn tutorial for a basic primer on colors.
End of explanation
fn = 'nature14016-f1.xlsx'
sn = 'Coal data'
coal_df = pd.read_excel(fn, sn)
Explanation: Load coal data
Data is from:
McGlade, C & Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190. (2015) doi:10.1038/nature14016
Coal data from Figure 1c.
End of explanation
coal_df.head()
coal_df.tail()
names = coal_df['Resource'].values
amount = coal_df['Quantity (ZJ)'].values
cost = coal_df['Cost (2010$/GJ)'].values
Explanation: Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
End of explanation
name_set = set(names)
name_set
color_dict = {}
for i, area in enumerate(name_set):
color_dict[area] = i #Assigning index position as value to resource name keys
color_dict
sns.color_palette('deep', n_colors=4, desat=.8)
sns.palplot(sns.color_palette('deep', n_colors=4, desat=.8))
Explanation: Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
End of explanation
def color_match(name):
return sns.color_palette('deep', n_colors=4, desat=.8)[color_dict[name]]
Explanation: Define a function that returns the integer color choice based on the region name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
End of explanation
color = coal_df['Resource'].map(color_match)
color.head()
Explanation: color has rgb values for each resource
End of explanation
# get the corners of the rectangles for the histogram
left = np.cumsum(np.insert(amount, 0, 0))
right = np.cumsum(np.append(amount, .01))
bottom = np.zeros(len(left))
top = np.append(cost, 0)
Explanation: Define the edges of the patch objects that will be drawn on the plot
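A quick worked example of how the two cumulative sums turn bar widths into bin edges (with made-up amounts):
example = np.array([1.0, 2.0, 3.0])
np.cumsum(np.insert(example, 0, 0))    # left edges: [0., 1., 3., 6.]
np.cumsum(np.append(example, .01))     # right edges: [1., 3., 6., 6.01]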
End of explanation
sns.set_style('whitegrid')
fig, ax = plt.subplots(figsize=(10,5))
# we need a (numrects x numsides x 2) numpy array for the path helper
# function to build a compound path
for i, name in enumerate(names):
XY = np.array([[left[i:i+1], left[i:i+1], right[i:i+1], right[i:i+1]],
[bottom[i:i+1], top[i:i+1], top[i:i+1], bottom[i:i+1]]]).T
# get the Path object
barpath = path.Path.make_compound_path_from_polys(XY)
# make a patch out of it (a patch is the shape drawn on the plot)
patch = patches.PathPatch(barpath, facecolor=color[i], ec='0.2')
ax.add_patch(patch)
#Create patch elements for a custom legend
#The legend function expects multiple patch elements as a list
patch = [patches.Patch(color=sns.color_palette('deep', 4, 0.8)[color_dict[i]], label=i)
for i in color_dict]
# Axis labels/limits, remove horizontal gridlines, etc
plt.ylabel('Cost (2010$/GJ)', size=14)
plt.xlabel('Quantity (ZJ)', size=14)
ax.set_xlim(left[0], right[-1])
ax.set_ylim(bottom.min(), 12)
ax.yaxis.grid(False)
ax.xaxis.grid(False)
#remove top and right spines (box lines around figure)
sns.despine()
#Add the custom legend
plt.legend(handles=patch, loc=2, fontsize=12)
plt.savefig('Example Supply Curve (coal).png')
Explanation: Make the figure (coal)
End of explanation
fn = 'nature14016-f1.xlsx'
sn = 'Oil data'
df = pd.read_excel(fn, sn)
Explanation: Load oil data
Data is from:
McGlade, C & Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190. (2015) doi:10.1038/nature14016
I'm using data from Figure 1a.
End of explanation
df.head()
df.tail()
Explanation: Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
End of explanation
names = df['Resource'].values
amount = df['Quantity (Gb)'].values
cost = df['Cost (2010$/bbl)'].values
Explanation: Create arrays of values with easy to type names
End of explanation
name_set = set(names)
name_set
color_dict = {}
for i, area in enumerate(name_set):
color_dict[area] = i #Assigning index position as value to resource name keys
color_dict
sns.palplot(Paired_11.mpl_colors)
Explanation: Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
End of explanation
def color_match(name):
return Paired_11.mpl_colors[color_dict[name]]
def color_match(name):
return sns.husl_palette(n_colors=11, h=0.1, s=0.9, l=0.6)[color_dict[name]]
color_match('NGL')
color = df['Resource'].map(color_match)
Explanation: Define a function that returns the integer color choice based on the region name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
End of explanation
# get the corners of the rectangles for the histogram
left = np.cumsum(np.insert(amount, 0, 0))
right = np.cumsum(np.append(amount, .01))
bottom = np.zeros(len(left))
top = np.append(cost, 0)
Explanation: Define the edges of the patch objects that will be drawn on the plot
End of explanation
sns.set_style('whitegrid')
fig, ax = plt.subplots(figsize=(10,5))
# we need a (numrects x numsides x 2) numpy array for the path helper
# function to build a compound path
for i, name in enumerate(names):
XY = np.array([[left[i:i+1], left[i:i+1], right[i:i+1], right[i:i+1]],
[bottom[i:i+1], top[i:i+1], top[i:i+1], bottom[i:i+1]]]).T
# get the Path object
barpath = path.Path.make_compound_path_from_polys(XY)
# make a patch out of it (a patch is the shape drawn on the plot)
patch = patches.PathPatch(barpath, facecolor=color[i], ec='0.8')
ax.add_patch(patch)
#Create patch elements for a custom legend
#The legend function expects multiple patch elements as a list
patch = []
for i in color_dict:
patch.append(patches.Patch(color=Paired_11.mpl_colors[color_dict[i]],
label=i))
# Axis labels/limits, remove horizontal gridlines, etc
plt.ylabel('Cost (2010$/bbl)', size=14)
plt.xlabel('Quantity (Gb)', size=14)
ax.set_xlim(left[0], right[-2])
ax.set_ylim(bottom.min(), 120)
ax.yaxis.grid(False)
ax.xaxis.grid(False)
#remove top and right spines (box lines around figure)
sns.despine()
#Add the custom legend
plt.legend(handles=patch, loc=2, fontsize=12,
ncol=2)
plt.savefig('Example Supply Curve.png')
Explanation: Make the figure
End of explanation |
15,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="https
Step1: %%stata magic
In order to use ipystata you will need to use the %%stata magic. Let's see the help for it.
Step2: First example
Let's run some commands in Stata from this notebook. Let's run the same code as in the Stata Notebook Example. To do so, we will use the %%stata magic.
Step3: Notice that it returned everything except the graph. To be able to get the graph we need to provide the graph option (-gr) to the %%stata magic.
Step4: Looks like there are issues preventing Stata from passing the figure back to Jupyter. Nonetheless, we can save it in Stata and open it here.
Step5: Let's import the figure to our notebook.
Moving data between Stata and Python
As we have seen Python is very powerful for data munging and cleaning. Also, we have seen that figures may look much nicer. But, since we already know Stata for econometric analyses, let's use both languages to get the best of each. We can do this by passing additional options to %%stata. First, let's get the data from auto.dta from Stata as a pandas dataframe.
Step6: Some analyses in Python
Now that we have the data in python we can do some analyses, merge with other datasets, or create some plots.
Step7: Creating some data
Step8: Analyzing the new data in Stata | Python Code:
import numpy as np
import pandas as pd
import ipystata
%pylab --no-import-all
%matplotlib inline
Explanation: <img src="https://www.stata.com/includes/images/stata-fb.jpg" alt="Stata" width="200"/> in a <img src="https://www.python.org/static/community_logos/python-logo-inkscape.svg" alt="Python" width=200/> <img src="https://raw.githubusercontent.com/adebar/awesome-jupyter/master/logo.png" alt="Jupyter" width=250/> Notebook
You can work with Stata in a Python notebook by using the package ipystata. Just like rpy2, which allows us to use R in Python, we can now use both (or if you want all three!) programming languages in one notebook.
Setup
Let's start by importing all the packages we want to use.
End of explanation
%%stata?
Explanation: %%stata magic
In order to use ipystata you will need to use the %%stata magic. Let's see the help for it.
End of explanation
%%stata
sysuse auto.dta
summ
desc
reg price mpg rep78 headroom trunk weight length turn displacement gear_ratio foreign, r
scatter price mpg, mlabel(make)
Explanation: First example
Let's run some commands in Stata from this notebook. Let's run the same code as in the Stata Notebook Example. To do so, we will use the %%stata magic.
End of explanation
%%stata -gr
sysuse auto.dta
summ
desc
reg price mpg rep78 headroom trunk weight length turn displacement gear_ratio foreign, r
scatter price mpg, mlabel(make)
Explanation: Notice that it returned everything except the graph. To be able to get the graph we need to provide the graph option (-gr) to the %%stata magic.
End of explanation
%%stata -gr
sysuse auto.dta
summ
desc
reg price mpg rep78 headroom trunk weight length turn displacement gear_ratio foreign, r
scatter price mpg, mlabel(make)
graph export "./graphs/price-mpg.png", replace
Explanation: Looks like there are issues preventing Stata from passing the figure back to Jupyter. Nonetheless, we can save it in Stata and open it here.
End of explanation
%%stata -o car_df
sysuse auto.dta
car_df
Explanation: Let's import the figure to our notebook.
Moving data between Stata and Python
As we have seen Python is very powerful for data munging and cleaning. Also, we have seen that figures may look much nicer. But, since we already know Stata for econometric analyses, let's use both languages to get the best of each. We can do this by passing additional options to %%stata. First, let's get the data from auto.dta from Stata as a pandas dataframe.
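The two flags used in the rest of this notebook are -o, which copies Stata's dataset in memory back into Python as a DataFrame, and -d, which pushes a DataFrame into Stata before the cell runs; roughly:
%%stata -o car_df    # pull: Stata's current dataset comes back to Python as the DataFrame car_df
%%stata -d car_df    # push: the pandas DataFrame car_df is loaded into Stata before the cell runs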
End of explanation
# Import matplotlib
import matplotlib as mpl
# Import seaborn
import seaborn as sns
sns.set()
# paths
pathgraphs = './graphs/'
# Define our function to plot
def ScatterPlot(dfin, var0='mpg', var1='price', labelvar='make',
dx=0.006125, dy=0.006125,
xlabel='Miles per Gallon',
ylabel='Price',
linelabel='Price',
filename='price-mpg.pdf'):
'''
Plot the association between var0 and var in dataframe using labelvar for labels.
'''
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_context("talk")
df = dfin.copy()
df = df.dropna(subset=[var0, var1]).reset_index(drop=True)
# Plot
k = 0
fig, ax = plt.subplots()
sns.regplot(x=var0, y=var1, data=df, ax=ax, label=linelabel)
movex = df[var0].mean() * dx
movey = df[var1].mean() * dy
for line in range(0,df.shape[0]):
ax.text(df[var0][line]+movex, df[var1][line]+movey, df[labelvar][line], horizontalalignment='left', fontsize=14, color='black')
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
plt.xlim([df[var0].min()-1, df[var0].max()+1])
plt.ylim([0, df[var1].max()+1000])
ax.tick_params(axis = 'both', which = 'major', labelsize=16)
ax.tick_params(axis = 'both', which = 'minor', labelsize=8)
ax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('{x:,.0f}'))
#ax.legend()
plt.savefig(pathgraphs + filename, dpi=300, bbox_inches='tight')
pass
ScatterPlot(car_df)
Explanation: Some analyses in Python
Now that we have the data in python we can do some analyses, merge with other datasets, or create some plots.
End of explanation
car_df['mpg_sq'] = car_df.mpg ** 2
Explanation: Creating some data
End of explanation
%%stata -d car_df
reg price mpg mpg_sq rep78 headroom trunk weight length turn displacement gear_ratio foreign, r
Explanation: Analyzing the new data in Stata
End of explanation |
15,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises for Chapter 1
Training Machine Learning Algorithms for Classification
Question 1. In the file algos/blank/perceptron.py, implement Rosenblatt's perceptron algorithm by fleshing out the class Perceptron. When you're finished, run the code in the block below to test your implementation.
Step1: Question 2. Raschka claims that without an epoch or a threshold of acceptable misclassification, the perceptron may not ever stop updating. Explain why this can happen, and give an example.
Step2: Question 3. The following diagram comes from Raschka's book. Try to answer the questions about it without looking back at the text.
What is being depicted in the diagram on the left? How about the diagram on the right?
Step3: Describe in words what the following symbols represent in the diagram on the left
Step4: Describe in words what the following symbols represent in the diagram on the right
Step5: True or False
Step6: True or False
Step7: Question 4. Plot $X$ and its standardized form $X'$ following the feature scaling algorithm that Raschka uses in the book. How does scaling the feature using the $t$-statistic change the sample distribution?
Step8: Question 5. In the file algos/blank/adaline.py, implement the Adaline rule in the class Adaline. When you're finished, run the code in the block below to test your implementation.
Step9: Question 6. Implement stochastic gradient descent as an option for the Adaline class. Then, run the test code below.
Step10: Question 7. Describe a situation in which you would choose to use batch gradient descent, a situation in which you would choose to use stochastic gradient descent, and a situation in which you would choose to use mini-batch gradient descent.
Step11: Question 8. Implement online learning as an option for the Adaline class. Then, run the test code below.
Step12: Question 9. Raschka claims that stochastic gradient descent could result in "cycles" if the order in which the samples were read (and corresponding weights updated) wasn't randomized, or "shuffled," before every iteration. Explain the intuition behind this idea, and describe what a "cycle" might look like.
Step13: Question 10. Verify that stochastic gradient descent improves the speed of convergence for Adaline in the case of the Iris dataset by plotting the errors against the iteration epoch in both cases. Then, briefly explain why this is the case. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from algos.blank.perceptron import Perceptron
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header=None)
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', 1, -1)
X = df.iloc[0:100, [0, 2]].values
ppn = Perceptron()
ppn.fit(X, y)
if (ppn.errors[-1] == 0):
print('Looks good!')
else:
print("Looks like your classifier didn't converge to 0 :(")
Explanation: Exercises for Chapter 1
Training Machine Learning Algorithms for Classification
Question 1. In the file algos/blank/perceptron.py, implement Rosenblatt's perceptron algorithm by fleshing out the class Perceptron. When you're finished, run the code in the block below to test your implementation.
End of explanation
# Write your answer here
Explanation: Question 2. Raschka claims that without an epoch or a threshold of acceptable misclassification, the perceptron may not ever stop updating. Explain why this can happen, and give an example.
End of explanation
# Write your answer here
Explanation: Question 3. The following diagram comes from Raschka's book. Try to answer the questions about it without looking back at the text.
What is being depicted in the diagram on the left? How about the diagram on the right?
End of explanation
# Write your answer here
Explanation: Describe in words what the following symbols represent in the diagram on the left:
The axes, $w^{T}x$ and $\phi(w^{T}x)$
The thick black line
End of explanation
# Write your answer here
Explanation: Describe in words what the following symbols represent in the diagram on the right:
The red circles
The blue pluses
The axes, $X_{1}$ and $X_{2}$
The vertical dashed line
End of explanation
# Write your answer here
Explanation: True or False: In the diagram on the right, $X_{1} = \phi(w^{T}x) = 0$. Explain your reasoning.
End of explanation
# Write your answer here
Explanation: True or False: in the general relationship depicted by the diagram on the right ($X_{1}$ vs. $X_{2}$), the dashed line must always be vertical. Explain your reasoning.
End of explanation
# Write your code here
# Write your answer here
Explanation: Question 4. Plot $X$ and its standardized form $X'$ following the feature scaling algorithm that Raschka uses in the book. How does scaling the feature using the $t$-statistic change the sample distribution?
End of explanation
from algos.blank.adaline import Adaline
ada = Adaline()
ada.fit(X_std, y)
if (ada.cost[-1] < 5):
print('Looks good!')
else:
print("Looks like your classifier didn't find the minimum :(")
Explanation: Question 5. In the file algos/blank/adaline.py, implement the Adaline rule in the class Adaline. When you're finished, run the code in the block below to test your implementation.
End of explanation
ada_sgd = Adaline(stochastic=True)
ada_sgd.fit(X_std, y)
if (ada_sgd.cost[1] < 1):
print('Looks good!')
else:
print("Looks like your stochastic model isn't performing well enough :(")
Explanation: Question 6. Implement stochastic gradient descent as an option for the Adaline class. Then, run the test code below.
End of explanation
# Write your answer here
Explanation: Question 7. Describe a situation in which you would choose to use batch gradient descent, a situation in which you would choose to use stochastic gradient descent, and a situation in which you would choose to use mini-batch gradient descent.
End of explanation
new_X = df.iloc[100, [0, 2]]
new_X = (new_X - np.mean(X, axis=0)) / np.std(X, axis=0)   # parentheses matter: subtract the mean first, then divide by the std
new_y = df.iloc[100, 4]
new_y = np.where(new_y == 'Iris-setosa', 1, -1)
ada_sgd.partial_fit(new_X, new_y)
Explanation: Question 8. Implement online learning as an option for the Adaline class. Then, run the test code below.
End of explanation
# Write your answer here
Explanation: Question 9. Raschka claims that stochastic gradient descent could result in "cycles" if the order in which the samples were read (and corresponding weights updated) wasn't randomized, or "shuffled," before every iteration. Explain the intuition behind this idea, and describe what a "cycle" might look like.
End of explanation
# Write your code here
# Write your answer here
Explanation: Question 10. Verify that stochastic gradient descent improves the speed of convergence for Adaline in the case of the Iris dataset by plotting the errors against the iteration epoch in both cases. Then, briefly explain why this is the case.
End of explanation |
15,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic statistics
Step1: Applicant-apply-Job matrix
A. Number of times an applicant applies a specific job title (position).
Step2: As the number of active days changes with users, we need to calculate the avg. apply frequency by dividing n_apply by n_active_day.
Step3: E. Build applicant-apply-job matrix
Jobs are considered at job title level.
Each entry $ e_{u,j} $ of the matrix is the number of times (frequency) applicant $u$ applied job title $j$.
This is similar to building the user-item matrix.
To build the matrix, there are two ways.
+ define a function to convert df agg_apps to matrix form
+ use available countvectorizer() in module sklearn.feature_extraction.text, where we consider application history of each user as a document and each job title he applied as a word in the document.
The cons of 2nd way is that to cover all job titles, which have different lengths (# words), countvectorizer() need to repeatedly split each document into n-grams, which is time consuming.
We thus use the 1st way.
Map user ids and item ids into internal user and item indices
The indices will be used to build user-item matrix later.
Step4: Quartiles of the number of times an applicant applies for a specific job
Step5: As expected, for most of the cases (50%), an applicant applies just once for a specific job.
However, we can also see at least 1 extreme case where an applicant applies 582 times for just a job title. Thus, let's look more closely at the distribution of $N_{apply}$.
Step6: From the histogram, we can see that there are cases when a user applies for a job title at least 100 times. Let's look closer at those extreme cases.
Extreme cases
To get a more complete picture on these extreme cases, let's look at
Step7: B. Number of different job titles an applicant applies
Step8: C. Number of company an applicant applies
Step9: D. Number of (job title, company) an applicant applies
Step10: Thus the dimensions of applicant-apply-job matrix should be 68144 $\times$ 5794.
Step11: F. Build applicant-apply-(job, employer) matrix | Python Code:
n_application, n_applicant, n_job, n_job_title = apps.shape[0], apps['uid'].nunique(), apps['job_id'].nunique(), apps['job_title'].nunique()
n_company = apps['reg_no_uen_ep'].nunique()
stats = pd.DataFrame({'n_application': n_application, 'n_applicant': n_applicant,
'n_job': n_job, 'n_job_title': n_job_title,
'n_company': n_company}, index=[0])
stats
stats.to_csv(DATA_DIR + 'stats/stats.csv', index=False)
Explanation: Basic statistics
End of explanation
agg_apps = pd.read_csv(AGG_DIR + 'timed_apps.csv')
print(agg_apps.shape)
agg_apps.sort_values('n_apply', ascending=False, inplace=True)
# top 10 extreme cases
agg_apps.head(10)
Explanation: Applicant-apply-Job matrix
A. Number of times an applicant applies a specific job title (position).
End of explanation
agg_apps['apply_freq'] = agg_apps['n_apply']/agg_apps['n_active_day']
agg_apps['apply_freq'] = np.round(agg_apps['apply_freq'], 2)
agg_apps.sort_values(by='apply_freq', ascending=False, inplace=True)
agg_apps.head()
quantile(agg_apps['apply_freq'])
agg_apps.to_csv(AGG_DIR + 'timed_apps.csv', index=False)
Explanation: As the number of active days changes with users, we need to calculate the avg. apply frequency by dividing n_apply by n_active_day.
End of explanation
user_ids = np.unique(agg_apps['uid'])
index_of_users = { user_ids[i]:i for i in range(len(user_ids)) }
item_ids = np.unique(agg_apps['job_title'])
index_of_items = { item_ids[i]:i for i in range(len(item_ids))}
n_user = len(index_of_users.keys())
n_item = len(index_of_items.keys())
# Given index_of_users and index_of_items,
## build user-item matrix from a df with columns (user_col, item_col, rating_col) containing triples (uid, item_id, rating)
def buildUserItemMat(df, user_col = 'uid', item_col = 'item_id', rating_col = 'rating'):
print('Mapping user ids to internal user indices...')
row_ind = list(df.apply(lambda r: index_of_users[r[user_col]], axis=1))
print('Mapping item ids to internal item indices...')
col_ind = list(df.apply(lambda r: index_of_items[r[item_col]], axis=1))
ratings = list(df[rating_col])
n_user, n_item = len(index_of_users.keys()), len(index_of_items.keys())
user_item_mat = sp.csr_matrix((ratings, (row_ind, col_ind)), shape=(n_user, n_item))
print('User-Item matrix built')
return user_item_mat
user_apply_job = buildUserItemMat(df=agg_apps, user_col='uid', item_col='job_title', rating_col='n_apply')
from scipy.io import *
mmwrite(DATA_DIR + 'user_apply_job.mtx', user_apply_job)
df = pd.DataFrame({'uid': index_of_users.keys(), 'u_index': index_of_users.values()})
df.sort_values('u_index', inplace=True)
df.to_csv(DATA_DIR + 'user_dict.csv', index=False)
# index_of_items.keys()[:3]
df = pd.DataFrame({'job_title': index_of_items.keys(), 'item_index': index_of_items.values()})
df.sort_values('item_index', inplace=True)
df.to_csv(DATA_DIR + 'item_dict.csv', index=False)
by_job_title = agg_apps[['job_title', 'n_apply']].groupby('job_title').sum()
by_job_title = by_job_title.add_prefix('total_').reset_index()
# top-10 popular job titles
by_job_title.sort_values('total_n_apply', ascending=False, inplace=True)
by_job_title.head(10)
by_user = agg_apps[['uid', 'n_apply']].groupby('uid').sum()
by_user = by_user.add_prefix('total_').reset_index()
# top-10 hard working job hunters
by_user.sort_values('total_n_apply', ascending=False, inplace=True)
by_user.head(10)
by_job_title.head(10).to_csv(RES_DIR + 'top10_job_titles.csv', index=False)
by_user.head(10).to_csv(RES_DIR + 'top10_job_hunters.csv', index=False)
Explanation: E. Build applicant-apply-job matrix
Jobs are considered at job title level.
Each entry $ e_{u,j} $ of the matrix is the number of times (frequency) applicant $u$ applied job title $j$.
This is similar to building the user-item matrix.
To build the matrix, there are two ways.
+ define a function to convert df agg_apps to matrix form
+ use the available CountVectorizer() in the module sklearn.feature_extraction.text, where we consider the application history of each user as a document and each job title he applied for as a word in the document.
The con of the 2nd way is that, to cover all job titles, which have different lengths (# words), CountVectorizer() needs to repeatedly split each document into n-grams, which is time consuming.
We thus use the 1st way.
Map user ids and item ids into internal user and item indices
The indices will be used to build user-item matrix later.
End of explanation
quantile(agg_apps['n_apply'])
Explanation: Quartiles of the number of times an applicant applies for a specific job:
End of explanation
plt.hist(agg_apps['n_apply'], bins=np.unique(agg_apps['n_apply']), log=True)
plt.xlabel(r'$N_{apply}$')
plt.ylabel('# applicant-job pairs (log scale)')
# plt.savefig(DATA_DIR + 'apply_freq.pdf')
plt.show()
plt.close()
Explanation: As expected, for most of the cases (50%), an applicant applies just once for a specific job.
However, we can also see at least 1 extreme case where an applicant applies 582 times for just a job title. Thus, let's look more closely at the distribution of $N_{apply}$.
End of explanation
extremes = pd.read_csv(RES_DIR + 'extremes.csv')
print('No. of extreme cases: {}'.format(extremes.shape[0]))
extremes.head(3)
quantile(extremes['n_active_day'])
Explanation: From the histogram, we can see that there are cases when a user applies for a job title at least 100 times. Let's look closer at those extreme cases.
Extreme cases
To get a more complete picture on these extreme cases, let's look at:
No. of active days: already aggregated
companies:
End of explanation
apps_by_job_title = pd.read_csv(AGG_DIR + 'apps_by_job_title.csv')
fig = plt.figure(figsize=(10,6))
plt.subplot(1,2,1)
loglog(apps_by_job_title['n_job_title'], xl='# Job titles applied', yl='# applicants')
plt.subplots_adjust(wspace=.5)
plt.subplot(1,2,2)
loglog(apps_by_job_title['n_job'], xl='# Jobs applied', yl='# applicants')
# plt.savefig(FIG_DIR + 'figs/applied_jobs.pdf')
plt.show()
plt.close()
Explanation: B. Number of different job titles an applicant applies
End of explanation
apps_by_comp = pd.read_csv(AGG_DIR + 'apps_by_comp.csv')
apps_by_comp.shape
loglog(apps_by_comp['n_apply'], xl='# applications', yl='# user-apply-company cases')
# plt.savefig(FIG_DIR + 'user_comp.pdf')
plt.show()
plt.close()
Explanation: C. Number of company an applicant applies
End of explanation
apps_by_job_comp = pd.read_csv(AGG_DIR + 'apps_by_job_comp.csv')
apps_by_job_comp.shape
loglog(apps_by_job_comp['n_apply'], xl='# applications', yl='# user-apply-job-at-company cases')
# plt.savefig(FIG_DIR + 'user_job_comp.pdf')
plt.show()
plt.close()
job_comp = apps[['job_title', 'organisation_name_ep']].drop_duplicates()
print('No. of job-company pairs: {}'.format(job_comp.shape[0]))
def getRecords(uids, df):
return df[ df['uid'].isin(uids)]
print('No. of applicants: {}'.format(n_applicant))
print('No. of job titles: {}'.format(n_job_title))
Explanation: D. Number of (job title, company) an applicant applies
End of explanation
apps_by_job_title = pd.read_csv(AGG_DIR + 'apps_by_job_title.csv')
# sanity check
print(apps_by_job_title.shape[0] == n_applicant)
apps_by_job_title.head()
import sklearn.feature_extraction.text as text_manip
import scipy.sparse as sp
docs = apps_by_job_title['job_titles']
job_titles = apps['job_title'].unique()
max_len = max(map(n_word, job_titles))
print('max no. of words in a job title: {}'.format(max_len))
job_title_len = list(map(n_word, job_titles))   # wrap in list so it can be reused below
quantile(job_title_len)
plt.hist(job_title_len, bins=np.unique(job_title_len))
plt.xlabel('# words in job title')
plt.ylabel('# job titles')
plt.show()
count_vec = text_manip.CountVectorizer(vocabulary=job_titles, ngram_range=(1,6))
t0 = time()
print('Building applicant-apply-job matrix...')
user_apply_job = count_vec.fit_transform(docs)
print('Done after {}s'.format(time()-t0))
# sparsity of applicant-apply-job
float(user_apply_job.nnz)/(n_applicant * n_job_title)
nrow, ncol = user_apply_job.shape[0], user_apply_job.shape[1]
print('Dimension of applicant-apply-job matrix: {} x {}'.format(nrow, ncol))
feats = count_vec.get_feature_names()
# sum([1 for j in first_user_job_titles if j in feats])
from scipy.io import *
mmwrite(DATA_DIR + 'user_apply_job.mtx', user_apply_job)
# first_user_job_titles = docs[0].split(',')
# n_job_in_vocab = sum([1 for j in first_user_job_titles if j in vocab])
# print('Total # jobs: %d' %len(first_user_job_titles))
# print('# jobs in vocab: %d' %n_job_in_vocab)
# all(j in vocab for j in first_user_job_titles)
Explanation: Thus the dimensions of applicant-apply-job matrix should be 68144 $\times$ 5794.
End of explanation
quantile(apps_by_job_comp['n_apply'])
apps_by_job_comp.rename(columns={'organisation_name_ep': 'employer_name', 'reg_no_uen_ep': 'employer_id'}, inplace=True)
apps_by_job_comp.query('n_apply >= 50')
apps.query('uid == 103204').query('job_title == "analyst"').query('reg_no_uen_ep=="196800306E"').to_csv(RES_DIR + 'tmp.csv')
apps_by_job_comp['job_employer'] = apps_by_job_comp['job_title'] + ' at ' + apps_by_job_comp['employer_name']
apps_by_job_comp.head()
uniq_job_employers = np.unique(apps_by_job_comp['job_employer'])
len(uniq_job_employers)
users = np.unique(apps['uid'])
len(users)
job_employer_idx = { uniq_job_employers[i]:i for i in range(len(uniq_job_employers))}
index_of_users = { users[i]:i for i in range(len(users)) }
apps_by_job_comp.apply(putTriple, axis=1)
Explanation: F. Build applicant-apply-(job, employer) matrix
End of explanation |
15,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CARTO Data Observatory
This is a basic template notebook to start exploring your new Dataset from CARTO's Data Observatory via the Python library CARTOframes.
You can find more details about how to use CARTOframes in the Quickstart guide.
Setup
Installation
Make sure that you have the latest version installed. Please, find more information in the Installation guide.
Step1: Credentials
In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Step2: ⚠️ Note about credentials
For security reasons, we recommend storing your credentials in an external file preventing publishing them by accident within your notebook. You can get more information in the section Setting your credentials of the Authentication guide.
Dataset operations
Metadata exploration
In this section we will explore some basic information regarding the Dataset you have licensed. More information on how to explore metadata associated to a Dataset is available in the Data discovery guide.
In order to access the Dataset and its associated metadata, you need to provide the "ID" which is a unique identifier of that Dataset. The IDs of your Datasets are available from Your Subscriptions page in the CARTO Dashboard and via the Discovery methods in CARTOFrames.
Step3: Access the data
Now that we have explored some basic information about the Dataset, we will proceed to download a sample of the Dataset into a dataframe so we can operate it in Python.
Datasets can be downloaded in full or by applying a filter with a SQL query. More info on how to download the Dataset or portions of it is available in the Data discovery guide.
Step4: Note about SQL filters
Our SQL filtering queries allow for any PostgreSQL and PostGIS operation, so you can filter the rows (by a WHERE condition) or the columns (using the SELECT). Some common examples are filtering the Dataset by bounding box or filtering by column value
Step5: Visualization
Now that we have downloaded some data into a dataframe we can leverage the visualization capabilities of CARTOframes to build an interactive map.
More info about building visualizations with CARTOframes is available in the Visualization guide.
Step6: Note about variables
CARTOframes allows you to make data-driven visualizations from your Dataset variables (columns) via the Style helpers. These functions provide out-of-the-box cartographic styles, legends, popups and widgets.
Style helpers are also highly customizable to reach your desired visualization setting simple parameters. The helpers collection contains functions to visualize by color and size, and also by type
Step7: Upload to CARTO account
In order to operate with the data in CARTO Builder or to build a CARTOFrames visualization reading the data from a table in the Cloud instead of having it in your local environment (with its benefits in performance), you can load the dataframe as a table in your CARTO account.
More info in the Data Management guide.
Step8: Enrichment
Enrichment is the process of adding variables to a geometry, which we call the target (point, line, polygon…) from a spatial Dataset, which we call the source. CARTOFrames has a set of methods for you to augment your data with new variables from a Dataset in the Data Observatory.
In this example, you will need to load a dataframe with the geometries that you want to enrich with a variable or a group of variables from the Dataset. You can detail the variables to get from the Dataset by passing the variable's ID. You can get the variables IDs with the metadata methods.
More info in the Data enrichment guide.
Step9: Save to file
Finally, you can also export the data into a CSV file. More info in the Data discovery guide. | Python Code:
!pip install -U cartoframes
# Note: a kernel restart is required after installing the library
import cartoframes
cartoframes.__version__
Explanation: CARTO Data Observatory
This is a basic template notebook to start exploring your new Dataset from CARTO's Data Observatory via the Python library CARTOframes.
You can find more details about how to use CARTOframes in the Quickstart guide.
Setup
Installation
Make sure that you have the latest version installed. Please, find more information in the Installation guide.
End of explanation
from cartoframes.auth import set_default_credentials
username = 'YOUR_USERNAME'
api_key = 'YOUR_API_KEY' # Master API key. Do not make this file public!
set_default_credentials(username, api_key)
Explanation: Credentials
In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
End of explanation
from cartoframes.data.observatory import Dataset
dataset = Dataset.get('YOUR_ID')
# Retrieve some general metadata about the Dataset
dataset.to_dict()
# Explore the first 10 rows of the Dataset
dataset.head()
# Explore the last 10 rows of the Dataset
dataset.tail()
# Get the geographical coverage of the data
dataset.geom_coverage()
# Access the list of variables in the dataset
dataset.variables.to_dataframe()
# Summary of some variable stats
dataset.describe()
Explanation: ⚠️ Note about credentials
For security reasons, we recommend storing your credentials in an external file preventing publishing them by accident within your notebook. You can get more information in the section Setting your credentials of the Authentication guide.
Dataset operations
Metadata exploration
In this section we will explore some basic information regarding the Dataset you have licensed. More information on how to explore metadata associated to a Dataset is available in the Data discovery guide.
In order to access the Dataset and its associated metadata, you need to provide the "ID" which is a unique identifier of that Dataset. The IDs of your Datasets are available from Your Subscriptions page in the CARTO Dashboard and via the Discovery methods in CARTOFrames.
End of explanation
# Filter by SQL query
query = "SELECT * FROM $dataset$ LIMIT 50"
dataset_df = dataset.to_dataframe(sql_query=query)
Explanation: Access the data
Now that we have explored some basic information about the Dataset, we will proceed to download a sample of the Dataset into a dataframe so we can operate it in Python.
Datasets can be downloaded in full or by applying a filter with a SQL query. More info on how to download the Dataset or portions of it is available in the Data discovery guide.
End of explanation
# First 10 rows of the Dataset sample
dataset_df.head()
Explanation: Note about SQL filters
Our SQL filtering queries allow for any PostgreSQL and PostGIS operation, so you can filter the rows (by a WHERE condition) or the columns (using the SELECT). Some common examples are filtering the Dataset by bounding box or filtering by column value:
SELECT * FROM $dataset$ WHERE ST_IntersectsBox(geom, -74.044467,40.706128,-73.891345,40.837690)
SELECT total_pop, geom FROM $dataset$
A good tool to get the bounding box details for a specific area is bboxfinder.com.
End of explanation
from cartoframes.viz import Layer
Layer(dataset_df, geom_col='geom')
Explanation: Visualization
Now that we have downloaded some data into a dataframe we can leverage the visualization capabilities of CARTOframes to build an interactive map.
More info about building visualizations with CARTOframes is available in the Visualization guide.
End of explanation
from cartoframes.viz import color_bins_style
Layer(dataset_df, color_bins_style('YOUR_VARIABLE_ID'), geom_col='geom')
Explanation: Note about variables
CARTOframes allows you to make data-driven visualizations from your Dataset variables (columns) via the Style helpers. These functions provide out-of-the-box cartographic styles, legends, popups and widgets.
Style helpers are also highly customizable to reach your desired visualization setting simple parameters. The helpers collection contains functions to visualize by color and size, and also by type: category, bins and continuous, depending on the type of the variable.
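For instance, a categorical column would use the category flavour of the same helper family (a sketch; color_category_style is assumed to be available alongside color_bins_style, and 'YOUR_CATEGORICAL_VARIABLE' is a placeholder):
from cartoframes.viz import color_category_style
Layer(dataset_df, color_category_style('YOUR_CATEGORICAL_VARIABLE'), geom_col='geom')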
End of explanation
from cartoframes import to_carto
to_carto(dataset_df, 'my_dataset', geom_col='geom')
# Build a visualization reading the data from your CARTO account
Layer('my_dataset')
Explanation: Upload to CARTO account
In order to operate with the data in CARTO Builder or to build a CARTOFrames visualization reading the data from a table in the Cloud instead of having it in your local environment (with its benefits in performance), you can load the dataframe as a table in your CARTO account.
More info in the Data Management guide.
End of explanation
from cartoframes.data.observatory import Enrichment
enriched_df = Enrichment().enrich_polygons(
df, # Insert here the DataFrame to be enriched
variables=['YOUR_VARIABLE_ID']
)
Explanation: Enrichment
Enrichment is the process of adding variables to a geometry, which we call the target (point, line, polygon…) from a spatial Dataset, which we call the source. CARTOFrames has a set of methods for you to augment your data with new variables from a Dataset in the Data Observatory.
In this example, you will need to load a dataframe with the geometries that you want to enrich with a variable or a group of variables from the Dataset. You can detail the variables to get from the Dataset by passing the variable's ID. You can get the variables IDs with the metadata methods.
More info in the Data enrichment guide.
End of explanation
# Filter by SQL query
query = "SELECT * FROM $dataset$ LIMIT 50"
dataset.to_csv('my_dataset.csv', sql_query=query)
Explanation: Save to file
Finally, you can also export the data into a CSV file. More info in the Data discovery guide.
End of explanation |
15,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the <a href="https
Step1: How do we define direction of an earth magnetic field?
Earth magnetic field is a vector. To define a vector we need to choose a coordinate system. We use right-handed system
Step2: Magnetic applet
Based on the prism that you made above, the Magnetic applet below computes the magnetic field at the receiver locations, and provides both a 2D map (left) and a profile line (right).
For the prism, you can alter | Python Code:
import numpy as np
from geoscilabs.mag import Mag, Simulator
%matplotlib inline
Explanation: This is the <a href="https://jupyter.org/">Jupyter Notebook</a>, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it.
To use the notebook:
- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)
- You can alter variables and re-run cells
- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run the following block of code before running any other cells in the notebook)
This notebook uses code adapted from
SimPEG
- Cockett, R., S. Kang, L.J. Heagy, A. Pidlisecky, D.W. Oldenburg (2015, in review), SimPEG: An open source framework for simulation and gradient based parameter estimation in geophysical applications. Computers and Geosciences
End of explanation
#Input parameters
fileName = 'https://github.com/geoscixyz/geosci-labs/raw/master/assets/mag/data/DO27_TMI.dat'
xyzd = np.genfromtxt(fileName, skip_header=3)
B = np.r_[60308, 83.8, 25.4]
survey, dobj = Mag.createMagSurvey(xyzd, B)
# View the data and chose a profile
param = Simulator.ViewMagSurvey2D(survey, dobj)
display(param)
# Define the parametric model interactively
model = Simulator.ViewPrism(param.result)
display(model)
Explanation: How do we define direction of an earth magnetic field?
The earth's magnetic field is a vector. To define a vector we need to choose a coordinate system. We use a right-handed system:
- X (Easting),
- Y (Northing), and
- Z (Up).
Here we consider an earth magnetic field ($\vec{B_0}$) whose intensity is one. To define this unit vector, we use inclination and declination:
- Declination: An angle from geographic North (Ng) (positive clockwise)
- Inclination: Vertical angle from the N-E plane (positive down)
<img src="https://github.com/geoscixyz/geosci-labs/raw/master/images/mag/earthfield.png?raw=true" style="width: 60%; height: 60%"> </img>
What's data: total field anomaly
We consider a typical form of magnetic data. To illustrate this we consider a susceptible object embedded in the earth.
Based upon the earth magnetic field ($\vec{B}_0$), this object will generate an anomalous magnetic field ($\vec{B}_A$). We define a unit vector $\hat{B}_0$ for the earth field as
$$ \hat{B}_0 = \frac{\vec{B}_0}{|\vec{B}_0|}$$
We measure both earth and anomalous magnetic field such that
$$ \vec{B} = \vec{B}_0 + \vec{B}_A$$
Total field anomaly, $\triangle \vec{B}$, can be defined as
$$ |\triangle \vec{B}| = |\vec{B}|-|\vec{B}_0| $$
If $|\vec{B}_A|\ll|\vec{B}_0|$, then the total field anomaly $\triangle \vec{B}$ is approximately the projection of the anomalous field onto the direction of the earth field:
$$ |\triangle \vec{B}| \simeq \vec{B}_A \cdot \hat{B}_0=|\vec{B}_A|\cos\theta$$
<img src="https://github.com/geoscixyz/geosci-labs/raw/master/images/mag/totalfieldanomaly.png?raw=true" style="width: 50%; height: 50%">
Define a 3D prism
Our model is a rectangular prism. Parameters to define this prism are given below:
dx: length in Easting (x) direction (meter)
dy: length in Northing (y) direction (meter)
dz: length in Depth (z) direction (meter) below the receiver
depth: top boundary of the prism (meter)
pinc: inclination of the prism (reference is a unit northing vector; degree)
pdec: declination of the prism (reference is a unit northing vector; degree)
You can also change the height of the survey grid above the ground
- rx_h: height of the grid (meter)
Green dots show a plane where we measure data.
End of explanation
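To make the projection formula above concrete, here is a small added NumPy sketch (not part of the original lab) that builds the unit vector of the inducing field from inclination and declination and approximates the total field anomaly. The inclination/declination values mirror the survey definition above (assuming the B array is ordered intensity, inclination, declination); the anomalous-field vector is a made-up example.
# Illustrative sketch (added): total-field anomaly as a projection.
import numpy as np

def unit_field(inclination_deg, declination_deg):
    # Convention used above: X = Easting, Y = Northing, Z = Up,
    # declination clockwise from north, inclination positive down.
    inc, dec = np.deg2rad(inclination_deg), np.deg2rad(declination_deg)
    return np.array([np.cos(inc) * np.sin(dec),   # easting
                     np.cos(inc) * np.cos(dec),   # northing
                     -np.sin(inc)])               # up (negative because inclination is down)

b0_hat = unit_field(83.8, 25.4)            # assumed inclination/declination from above
b_anomalous = np.array([5.0, -2.0, 10.0])  # made-up anomalous field (nT)
tfa = b_anomalous.dot(b0_hat)              # |dB| ~= B_A . B0_hat
print(tfa)
# End of sketch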
plotwidget = Simulator.PFSimulator(model, param)
display(plotwidget)
Explanation: Magnetic applet
Based on the prism that you made above, the Magnetic applet below computes the magnetic field at the receiver locations, and provides both a 2D map (left) and a profile line (right).
For the prism, you can alter:
- sus: susceptibility of the prism
Parameters for the earth field are:
- Einc: inclination of the earth field (degree)
- Edec: declination of the earth field (degree)
- Bigrf: intensity of the earth field (nT)
For data, you can view:
- tf: total field anomaly,
- bx :x-component,
- by :y-component,
- bz :z-component
You can simulate and view remanent magnetization effect with parameters:
- irt: "induced", "remanent", or "total"
- Q: Koenigsberger ratio ($\frac{M_{rem}}{M_{ind}}$)
- rinc: inclination of the remanent magnetization (degree)
- rdec: declination of the remanent magnetization (degree)
End of explanation |
15,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 受限于沙盒中数据限制,本节示例的相关性分析只限制在abupy内置沙盒数据中,完整示例以及代码请阅读后面章节的全市场回测示例或者《量化交易之路》中相关章节。
首先将内置沙盒中美股,A股,港股, 比特币,莱特币,期货市场中的symbol都列出来
Step2: 1. 切分训练集交易的回测
下面把沙盒数据中的symbol分成两组,一组做为训练symbol
本例训练集symbol使用:沙盒中所有美股 + 沙盒中所有A股 + 沙盒中所有港股 + 比特币
本例测试集symbol使用:沙盒中所有期货 + 莱特币
备注:本例由于symbol数量少所以手动分配训练集,测试集,非沙盒数据环境下使用abupy.env.g_enable_train_test_split等相关设置进行对参数中的symbol或者某个全市场symbol进行自动切割训练集,测试集,详例请阅读《量化交易之路》中相关章节
Step3: 下面的买入因子,卖出因子继续使用之前章节的设置,如下所示:
Step4: 本节示例的裁判系统是建立在机器学习技术基础上的,所以必然会涉及到特征,abu量化系统支持在回测过程中生成特征数据,切分训练测试集,甚至成交买单快照图片,通过下面的一行代码设置即可在生成最终的输出结果数据orders_pd上加上买入时刻的很多信息,比如价格位置、趋势走向、波动情况等等特征:
关于特征的类的具体编写请阅读源代码ABuMLFeature以及ABuKLManager
本节只示例使用内置特征,在之后的章节后示例自定义特征类的实现
Step5: 下面通过abu.run_loop_back进行回测,choice_symbol使用分配好的训练集symbol:
Step6: 回测结束后,看一些orders_pd的columns,可以看到buy_deg_ang252,buy_price_rank90,buy_atr_std,buy_wave_score3等等都是特征列:
Step7: 下面看看生成的特征的具体示例,如下所示:
备注:buy开头的是买入时刻形成的特征,sell开头的是卖出时刻形成的特征
Step8: 下面度量训练集使用returns_cmp模式,即不对比标尺大盘,在无资金限制下所有买入交易都可以成交的模式, 如下所示:
Step9: 如上所示的输出即为无资金限制所有买入交易都可以成交的模式下的度量结果,可以看到:所有交易总盈亏和
Step10: 本节的示例回测的度量都将使用无资金限制下所有买入交易都可以成交的模式。
备注:无资金限制下所有买入交易都可以成交的模式具体实现请阅读AbuMetricsBase
进入本节核心主体量化交易和搜索引擎结果的好坏最相似的地方有两个:
对搜索引擎(量化策略)失败结果的人工分析,注重分析失败的结果以及是否存在改进方案,改进方案是否会引进新的问题
机器学习技术在搜索引擎(量化策略)上的改进,必须赋予宏观上合理的解释
下面将依次展开以上两点:
2. 对交易进行人工分析
对交易进行人工分析最常用的手动即是直接可视化交易的买入卖出点及走势。
下面使用plot_candle_from_order直接将orders_pd(交易单子数据)作为参数传入,save=True将交易当时买入点、卖出点等信息标注在图上并保存在本地,针对保存后的交易快照我们就可以进行人工分析:
Step11: 保存完成后,快照将保存在~/abu/data/save_png下当前日期的文件夹中,可使用如下命令直接打开查看:
Step12: 通过人工分析这些失败的交易,可以观察是否有改进方案,或者不合理的交易,比如下面这笔交易:
从上面趋势图可以看出:之前大幅度下跌后,底部开始向上拉升,可以发现买入点是在前期阻力位的位置,你可以在具体策略中编写代码阻止类似的交易生效,但是这样容易过拟合,并且这种对策略的微调一定也会带来一些负面的影响,很难量化最终的得失。
而且这样会导致你的基础策略太过复杂,基础追求的就应该是简单, 可以一句话说明你的基础策略,针对此类问题的一种解决方法在之前的第九节港股市场的回测中将优化策略的'策略'做为类装饰器进行封装有做过示例讲解,具体效果即是分离基础策略和策略优化监督模块,提高灵活度和适配性,本节示例通过ump来解决此类问题。
abupy中ump模块的设计目标是:
不需要在具体策略中硬编码
不需要人工设定阀值,即且使得代码逻辑清晰
分离基础策略和策略优化监督模块,提高灵活度和适配性
发现策略中隐藏的交易策略问题
可以通过不断的学习新的交易数据
现阶段的量化策略还是通过人来编写代码,未来的发展也许会向着完全由计算机实现整套流程的方向迈进,包括量化策略本身。
abupy的设计目标是:
只需要提供一些基础的简单种子策略代码,计算机在这些简单种子策略基础上不断自我学习、自我完善,创造新的策略,并且紧跟时间序列不断自我调整策略参数。
3. 主裁系统原理
Step13: 上面输出的每一行实际上代表一次交易,result代表这次交易的最终结果,0:亏损,1:盈利,deng_ang21代表买入信号发生时刻向前21天的交易日收盘价格拟合曲线角度特征值,与此相似deg_ang42、deg_ang60、deg_ang252分别代表买入信号发生时刻向前42天、60天、252天收盘价格拟合曲线角度特征值。
下面使用AbuUmpMainBase.fit()函数进行主裁分类簇的筛选,以及可视化分类簇特性,如下所示:
备注:默认使用component值从40至85,本示例由于使用的沙盒数据,训练集数据量太少,所以下面参数p_ncs降低component值从20至40
Step14: 上面可视化结果各个轴分别代表:
lcs
Step15: 对上表格第一行数据详细解释如下:
用gmm将特征进行分类,分20个类,这个分类中的第11簇失败率为0.6667,即index
Step16: 从上面的输出可以看到23_15是失败概率很大的分类簇,接下来我们查看ump_deg.nts表,它是字典结构。
下面获得mfx分类簇下的所有交易的DataFrame数据对象, 寻找deg_ang252中存在非常大的数值的分类簇, 且分类簇中交易数量最多的GMM分类簇。
备注:不同的运行环境下分类的结果序号等是不相同的(GMM中初始随机数, 预热参数导致), 即不同的环境下运行的结果不一定是23-15这个分类簇序号,但是分类的性质基本相似
Step17: 下面分别统计分类簇和训练数据集中特征的平均值,可以看到:
分类簇中deg_ang252非常大,deg_ang42的值相比较训练集平均值也很大
deg_ang21,deg_ang60平均值基本和训练集数据平均值持平
Step18: 更进一步,我们将所有分类簇中的交易快照进行可视化,进行人工分析,如下代码所示:
Step19: 5. 赋予宏观上合理的解释:
上面显示的交易sh600809是本节初失败结果人工分析的那个案例的交易,这里它在主裁deg识别中被捕获。这样我们就不需要在具体策略中编写代码阻止类似的交易生效,它被机器学习gmm识别到一个固定的分类簇中,我们保存这个分类簇,在之后的交易可以运用这个分类簇对新的交易进行裁判。
从上面的走势快照以及特征值分析可以对gmm这次分类进行宏观上合理的解释:
过去一年的股价走势快速拉升(deg_ang252非常大)
过去3三个月走势失去了前期的气势,开始走下坡路(deg_ang60平均值持平与训练集数据平均值)
过去2个月走势有一次回光反照(deg_ang42的值相比较训练集平均值也很大)
最终拦截的交易宏观上的解释为:快速拉升后的震荡下行走势下的小上升走势,且遇到了短期阻力位(由上面交易图可见)
上面的分析即做到了机器学习技术在搜索引擎(量化策略)的改进,必须赋予宏观上合理的解释。
你可以发现如果你想要手工在策略中通过编写代码添加这个规则时,逻辑代码的实现会相当复杂,而且不得不面对阀值问题,使用gmm分类簇可以有效规避此类问题,而且使得代码逻辑清晰,没有过多的硬编码,且在之后的交易中指导策略进行信号拦截
6. 最优分类簇筛选:
上面我们抽取了gmm大于阀值失败率的分类簇后,对ump_deg.cprs进行分析可以发现
Step20: 下面我们使用全局最优技术对分类簇集合进行筛选, 如下所示
Step21: 下面根据上面计算出的最优参数对分类簇集合进行筛选。
分类簇中样本交易获利比例总和小于-0.1
分类簇中样本每笔交易平均获利小于-0.01
分类簇中样本失败率大于0.67
如下代码返回的llps为最终筛选结果, 将筛选后的结果使用dump_clf接口进行保存(最终角度主裁模型)保存在本地,以预备之后对新的交易进行裁决。 | Python Code:
# 基础库导入
from __future__ import print_function
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题
sys.path.insert(0, os.path.abspath('../'))
import abupy
# 使用沙盒数据,目的是和书中一样的数据环境
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak, ABuProgress
from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, AbuFuturesCn, ABuSymbolPd, ABuMarket
from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, AbuFuturesCn, EStoreAbu
Explanation: ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>第15节 量化交易和搜索引擎</b></font>
</center>
作者: 阿布
阿布量化版权所有 未经允许 禁止转载
abu量化系统github地址 (欢迎+star)
本节ipython notebook
骰子? 骰子是什么东西?它应该出现在大富翁游戏里,应该出现在澳门和拉斯维加斯的赌场中,但是,物理学?不,那不是它应该来的地方。骰子代表了投机,代表了不确定,而物理学不是一门最严格最精密,最不能容忍不确定的科学吗?——《量子物理史话》
虽然我们无法对市场做到确定性的预测,但是股票市场也并不是杂乱无章的,预测和混沌之前存在着一种状态,这种状态可以使用使用概率来描述。《量子物理史话》中薛定谔方程说整个宇宙,你和我都是概率,波恩对波动方程的解释为:电子电荷在空间中的实际分布是电子在某处出现的概率,我们只能预言概率!电子有90%的可能出现在这里, 10%的可能出现在那里,我们也同然可以使用统计来预言概率,如某个策略在某种情况下失败概率为90%,成功概率为10%。
本节将介绍abu量化系统中的ump模块,它使用了多种机器学习技术,来实现我上面说的预测概率, 首先导入abupy中本节使用的模块:
End of explanation
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS']
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085', '600036', '600809', '000002', '002594']
hk_choice_symbols = ['hk03333', 'hk00700', 'hk02333', 'hk01359', 'hk00656', 'hk03888', 'hk02318']
tc_choice_symbols = ['btc', 'ltc']
# 期货市场的直接从AbuFuturesCn().symbo中读取
ft_choice_symbols = AbuFuturesCn().symbol.tolist()
Explanation: 受限于沙盒中数据限制,本节示例的相关性分析只限制在abupy内置沙盒数据中,完整示例以及代码请阅读后面章节的全市场回测示例或者《量化交易之路》中相关章节。
首先将内置沙盒中美股,A股,港股, 比特币,莱特币,期货市场中的symbol都列出来:
End of explanation
# 训练集:沙盒中所有美股 + 沙盒中所有A股 + 沙盒中所有港股 + 比特币
train_choice_symbols = us_choice_symbols + cn_choice_symbols + hk_choice_symbols + tc_choice_symbols[:1]
# 测试集:沙盒中所有期货 + 莱特币
test_choice_symbols = ft_choice_symbols + tc_choice_symbols[1:]
Explanation: 1. 切分训练集交易的回测
下面把沙盒数据中的symbol分成两组,一组做为训练symbol
本例训练集symbol使用:沙盒中所有美股 + 沙盒中所有A股 + 沙盒中所有港股 + 比特币
本例测试集symbol使用:沙盒中所有期货 + 莱特币
备注:本例由于symbol数量少所以手动分配训练集,测试集,非沙盒数据环境下使用abupy.env.g_enable_train_test_split等相关设置进行对参数中的symbol或者某个全市场symbol进行自动切割训练集,测试集,详例请阅读《量化交易之路》中相关章节
End of explanation
# 设置初始资金数
read_cash = 1000000
# 买入因子依然延用向上突破因子
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# 卖出因子继续使用上一节使用的因子
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
Explanation: 下面的买入因子,卖出因子继续使用之前章节的设置,如下所示:
End of explanation
# 回测生成买入时刻特征
abupy.env.g_enable_ml_feature = True
Explanation: 本节示例的裁判系统是建立在机器学习技术基础上的,所以必然会涉及到特征,abu量化系统支持在回测过程中生成特征数据,切分训练测试集,甚至成交买单快照图片,通过下面的一行代码设置即可在生成最终的输出结果数据orders_pd上加上买入时刻的很多信息,比如价格位置、趋势走向、波动情况等等特征:
关于特征的类的具体编写请阅读源代码ABuMLFeature以及ABuKLManager
本节只示例使用内置特征,在之后的章节后示例自定义特征类的实现
End of explanation
abu_result_tuple_train, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
start='2014-07-26',
end='2016-07-26',
choice_symbols=train_choice_symbols)
ABuProgress.clear_output()
# 把运行的结果保存在本地,以便后面的章节直接使用,保存回测结果数据代码如下所示
abu.store_abu_result_tuple(abu_result_tuple_train, n_folds=2, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='lecture_train')
orders_pd_train = abu_result_tuple_train.orders_pd
Explanation: 下面通过abu.run_loop_back进行回测,choice_symbol使用分配好的训练集symbol:
End of explanation
orders_pd_train.columns
Explanation: 回测结束后,看一些orders_pd的columns,可以看到buy_deg_ang252,buy_price_rank90,buy_atr_std,buy_wave_score3等等都是特征列:
End of explanation
orders_pd_train.filter(regex='buy*').drop(
['buy_date', 'buy_price', 'buy_cnt', 'buy_factor', 'buy_pos', 'buy_type_str'], axis=1).head()
Explanation: 下面看看生成的特征的具体示例,如下所示:
备注:buy开头的是买入时刻形成的特征,sell开头的是卖出时刻形成的特征
End of explanation
AbuMetricsBase.show_general(*abu_result_tuple_train, returns_cmp=True, only_info=True)
Explanation: 下面度量训练集使用returns_cmp模式,即不对比标尺大盘,在无资金限制下所有买入交易都可以成交的模式, 如下所示:
End of explanation
capital_pd = abu_result_tuple_train.capital.capital_pd
capital_pd['capital_blance'][-1] - capital_pd['capital_blance'][0]
Explanation: 如上所示的输出即为无资金限制所有买入交易都可以成交的模式下的度量结果,可以看到:所有交易总盈亏和:2717948,但实际上如果在考虑资金的情况下的实际交易总盈亏和并没有这么多,因为有很多交易因为资金限制没能买入成交,如下所示:
End of explanation
# 选择失败的笔交易绘制交易快照
plot_simple = orders_pd_train[orders_pd_train.profit_cg < 0]
# save=True保存在本地,耗时操作,需要运行几分钟
ABuMarketDrawing.plot_candle_from_order(plot_simple, save=True)
Explanation: 本节的示例回测的度量都将使用无资金限制下所有买入交易都可以成交的模式。
备注:无资金限制下所有买入交易都可以成交的模式具体实现请阅读AbuMetricsBase
进入本节核心主体量化交易和搜索引擎结果的好坏最相似的地方有两个:
对搜索引擎(量化策略)失败结果的人工分析,注重分析失败的结果以及是否存在改进方案,改进方案是否会引进新的问题
机器学习技术在搜索引擎(量化策略)上的改进,必须赋予宏观上合理的解释
下面将依次展开以上两点:
2. 对交易进行人工分析
对交易进行人工分析最常用的手动即是直接可视化交易的买入卖出点及走势。
下面使用plot_candle_from_order直接将orders_pd(交易单子数据)作为参数传入,save=True将交易当时买入点、卖出点等信息标注在图上并保存在本地,针对保存后的交易快照我们就可以进行人工分析:
End of explanation
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
Explanation: 保存完成后,快照将保存在~/abu/data/save_png下当前日期的文件夹中,可使用如下命令直接打开查看:
End of explanation
# 参数为orders_pd_train
ump_deg = AbuUmpMainDeg(orders_pd_train)
ump_deg.fiter.df.head()
Explanation: 通过人工分析这些失败的交易,可以观察是否有改进方案,或者不合理的交易,比如下面这笔交易:
从上面趋势图可以看出:之前大幅度下跌后,底部开始向上拉升,可以发现买入点是在前期阻力位的位置,你可以在具体策略中编写代码阻止类似的交易生效,但是这样容易过拟合,并且这种对策略的微调一定也会带来一些负面的影响,很难量化最终的得失。
而且这样会导致你的基础策略太过复杂,基础追求的就应该是简单, 可以一句话说明你的基础策略,针对此类问题的一种解决方法在之前的第九节港股市场的回测中将优化策略的'策略'做为类装饰器进行封装有做过示例讲解,具体效果即是分离基础策略和策略优化监督模块,提高灵活度和适配性,本节示例通过ump来解决此类问题。
abupy中ump模块的设计目标是:
不需要在具体策略中硬编码
不需要人工设定阀值,即且使得代码逻辑清晰
分离基础策略和策略优化监督模块,提高灵活度和适配性
发现策略中隐藏的交易策略问题
可以通过不断的学习新的交易数据
现阶段的量化策略还是通过人来编写代码,未来的发展也许会向着完全由计算机实现整套流程的方向迈进,包括量化策略本身。
abupy的设计目标是:
只需要提供一些基础的简单种子策略代码,计算机在这些简单种子策略基础上不断自我学习、自我完善,创造新的策略,并且紧跟时间序列不断自我调整策略参数。
3. 主裁系统原理:
下面的内容主要示例通过abupy中的ump模块解决上述问题,abu量化系统中的ump裁判模块,abu量化系统命名规则里,a代表alpha,b代表beta,u代表ump即裁判员的意思,ump将策略回测交易结果作为训练集进行模式识别,特别针对失败的交易识别模式,寻找规律,通过非均衡技术近一步寻找概率上的优势,通过构建多个裁判员的方式来构建裁判(主裁、边裁)机制,来对新的交易进行识别,当新的交易失败的风险大于一定的概率的时候,放弃这次交易,如下图所示:
主裁核心代码在基类AbuUmpMainBase源代码中,使用gmm进行无监督机器学习, gmm根据参数component将特征分类,component的数值表示将回测交易数据分为多少个类别,默认component值从40至85,即默认将回测交易数据分为40至85个分类,对所有分类结果的cluster组中对应的交易结果数据result进行统计,将cluster组中交易失败概率大于阀值(默认参数0.65即65%失败率)的gmm分类器clf进行保存。
举例说明:即使用gmm对回测交易数据进行聚类,比如你对所有交易数据聚类聚了20个分类,然后发现第19个分类里面65%以上都是赔钱的交易,那就提取这个分类的的类别及分类器,作为之后的判定器的组成部份,如果新的交易被判定为这类那我们就对这个交易进行拦截。
更多详情请自行阅读AbuUmpMainBase源代码
4. 角度主裁:
每个特定主裁有自己独特的选定特征,子类完成的主要工作就是对特征进行处理,如AbuUmpMainDeg的特征为21、42、60、252日拟合角度, 更多具体实现请阅读《量化交易之路》中相关内容,下面仅示例使用,先看看角度主裁有哪些特征:
End of explanation
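The clustering idea described above — fit a GMM on the per-trade features, then keep only the clusters whose failure rate exceeds a threshold (65% in the text) — can be sketched with scikit-learn. This is an added illustration of the principle only, not the actual AbuUmpMainDeg implementation; the column names follow the feature table shown earlier.
# Illustrative sketch (added): find high-failure-rate GMM clusters of trades.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

def high_failure_clusters(df, feature_cols, n_components=20, fail_rate=0.65):
    # df: one row per trade, 'result' column with 0 = loss, 1 = win.
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(df[feature_cols].values)
    labels = gmm.predict(df[feature_cols].values)
    stats = pd.DataFrame({'cluster': labels, 'result': df['result'].values})
    rates = 1.0 - stats.groupby('cluster')['result'].mean()   # failure rate per cluster
    return gmm, rates[rates > fail_rate].index.tolist()

# Usage idea (commented, using the trade table shown above):
# gmm, bad_clusters = high_failure_clusters(
#     ump_deg.fiter.df, ['deg_ang21', 'deg_ang42', 'deg_ang60', 'deg_ang252'])
# End of sketch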
ump_deg.fit(p_ncs=slice(20, 40, 1), brust_min=False)
Explanation: 上面输出的每一行实际上代表一次交易,result代表这次交易的最终结果,0:亏损,1:盈利,deng_ang21代表买入信号发生时刻向前21天的交易日收盘价格拟合曲线角度特征值,与此相似deg_ang42、deg_ang60、deg_ang252分别代表买入信号发生时刻向前42天、60天、252天收盘价格拟合曲线角度特征值。
下面使用AbuUmpMainBase.fit()函数进行主裁分类簇的筛选,以及可视化分类簇特性,如下所示:
备注:默认使用component值从40至85,本示例由于使用的沙盒数据,训练集数据量太少,所以下面参数p_ncs降低component值从20至40
End of explanation
ump_deg.cprs.head()
Explanation: 上面可视化结果各个轴分别代表:
lcs: 分类簇中样本总数
lrs: 分类簇中样本失败率
lps: 分类簇中样本交易获利比例总和
lms: 分类簇中样本每笔交易平均获利
ump_deg.cprs提取了使用gmm从20-40个分类中交易失败率大于65%的簇:
End of explanation
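For reference, the four per-cluster statistics listed above (trade count, failure rate, summed profit ratio, mean profit per trade) could be reproduced from a plain trade table with pandas. This is an added sketch; the 'cluster' column is an assumption, while 'result' and 'profit_cg' follow the columns used elsewhere in this notebook.
# Illustrative sketch (added): lcs / lrs / lps / lms style statistics per cluster.
import pandas as pd

def cluster_profile(trades):
    # trades: DataFrame with 'cluster', 'result' (0 loss / 1 win) and 'profit_cg' columns.
    g = trades.groupby('cluster')
    return pd.DataFrame({
        'lcs': g.size(),                  # number of trades in the cluster
        'lrs': 1.0 - g['result'].mean(),  # failure rate
        'lps': g['profit_cg'].sum(),      # summed profit ratio
        'lms': g['profit_cg'].mean(),     # mean profit per trade
    })
# End of sketch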
mfx = ump_deg.cprs[(ump_deg.cprs['lcs'] > 5)].sort_values(by='lrs')[::-1]
mfx.head()
Explanation: 对上表格第一行数据详细解释如下:
用gmm将特征进行分类,分20个类,这个分类中的第11簇失败率为0.6667,即index:20_11,这个分类中有7笔交易,平均每笔交易平均获利0.0282,分类中所有交易获利比例总和为-0.1972
备注:不同的运行环境下分类的结果序号等是不相同的
下面我们找出所有提取结果中交易失败概率最大的分类簇, 由于沙盒数据中数据量限制,导致每个分类簇中数据很少,下面的筛选条件是分类中有至少有5笔以上交易:
End of explanation
max_failed_cluster_orders = None
for mfx_ind in np.arange(0, len(mfx)):
tmp = ump_deg.nts[mfx.index[mfx_ind]]
if tmp.buy_deg_ang252.mean() > 10:
if max_failed_cluster_orders is None:
max_failed_cluster_orders = tmp
elif len(tmp) > len(max_failed_cluster_orders):
# 寻找分类簇中交易数量最多的
max_failed_cluster_orders = tmp
if max_failed_cluster_orders is None:
max_failed_cluster_orders = ump_deg.nts[mfx.index[0]]
max_failed_cluster_orders
Explanation: 从上面的输出可以看到23_15是失败概率很大的分类簇,接下来我们查看ump_deg.nts表,它是字典结构。
下面获得mfx分类簇下的所有交易的DataFrame数据对象, 寻找deg_ang252中存在非常大的数值的分类簇, 且分类簇中交易数量最多的GMM分类簇。
备注:不同的运行环境下分类的结果序号等是不相同的(GMM中初始随机数, 预热参数导致), 即不同的环境下运行的结果不一定是23-15这个分类簇序号,但是分类的性质基本相似
End of explanation
print('分类簇中deg_ang60平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang60.mean()))
print('训练数据集中deg_ang60平均值为{0:.2f}\n'.format(
orders_pd_train.buy_deg_ang60.mean()))
print('分类簇中deg_ang21平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang21.mean()))
print('训练数据集中deg_ang21平均值为{0:.2f}\n'.format(
orders_pd_train.buy_deg_ang21.mean()))
print('分类簇中deg_ang42平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang42.mean()))
print('训练数据集中deg_ang42平均值为{0:.2f}\n'.format(
orders_pd_train.buy_deg_ang42.mean()))
print('分类簇中deg_ang252平均值为{0:.2f}'.format(
max_failed_cluster_orders.buy_deg_ang252.mean()))
print('训练数据集中deg_ang252平均值为{0:.2f}'.format(
orders_pd_train.buy_deg_ang252.mean()))
Explanation: 下面分别统计分类簇和训练数据集中特征的平均值,可以看到:
分类簇中deg_ang252非常大,deg_ang42的值相比较训练集平均值也很大
deg_ang21,deg_ang60平均值基本和训练集数据平均值持平
End of explanation
cp = []
for ind in np.arange(0, len(max_failed_cluster_orders)):
# 获取其在原始orders中的ind
order_ind = int(max_failed_cluster_orders.iloc[ind].ind)
# 从原始orders中取出order
order = ump_deg.fiter.order_has_ret.iloc[order_ind]
if order.symbol.isdigit() and order.symbol not in cp:
# 介于篇幅长度,只可视化a股市场的了,每个symbol只绘制一次,避免42d和60d策略同时生效,两个单子,这里绘制两次
cp.append(order.symbol)
ABuMarketDrawing.plot_candle_from_order(order, date_ext=252)
Explanation: 更进一步,我们将所有分类簇中的交易快照进行可视化,进行人工分析,如下代码所示:
End of explanation
ump_deg.cprs[ump_deg.cprs['lps'] > 0].head()
Explanation: 5. 赋予宏观上合理的解释:
上面显示的交易sh600809是本节初失败结果人工分析的那个案例的交易,这里它在主裁deg识别中被捕获。这样我们就不需要在具体策略中编写代码阻止类似的交易生效,它被机器学习gmm识别到一个固定的分类簇中,我们保存这个分类簇,在之后的交易可以运用这个分类簇对新的交易进行裁判。
从上面的走势快照以及特征值分析可以对gmm这次分类进行宏观上合理的解释:
过去一年的股价走势快速拉升(deg_ang252非常大)
过去3三个月走势失去了前期的气势,开始走下坡路(deg_ang60平均值持平与训练集数据平均值)
过去2个月走势有一次回光反照(deg_ang42的值相比较训练集平均值也很大)
最终拦截的交易宏观上的解释为:快速拉升后的震荡下行走势下的小上升走势,且遇到了短期阻力位(由上面交易图可见)
上面的分析即做到了机器学习技术在搜索引擎(量化策略)的改进,必须赋予宏观上合理的解释。
你可以发现如果你想要手工在策略中通过编写代码添加这个规则时,逻辑代码的实现会相当复杂,而且不得不面对阀值问题,使用gmm分类簇可以有效规避此类问题,而且使得代码逻辑清晰,没有过多的硬编码,且在之后的交易中指导策略进行信号拦截
6. 最优分类簇筛选:
上面我们抽取了gmm大于阀值失败率的分类簇后,对ump_deg.cprs进行分析可以发现:
在很多分类簇中的交易胜率不高,但是交易获利比例总和却为正值, 即有很多交易簇,虽然簇的失败率很高,但是簇中所有交易的收益和却是正值,即一直强调的不能只关注胜率,盈亏比更是关键。
那么我们将所有分类簇保存在本地,对之后的交易进行裁决显然是不妥当的。
End of explanation
brust_min = ump_deg.brust_min()
brust_min
Explanation: 下面我们使用全局最优技术对分类簇集合进行筛选, 如下所示:
备注:
对外的使用实际不会涉及如下内容,如在之后的章节中主裁的训练只使用一行代码即可完成,这里不必过分深入
如对内部实现敢兴趣,请阅读《量化交易之路》中相关内容或者阅读源代码。
End of explanation
llps = ump_deg.cprs[(ump_deg.cprs['lps'] <= brust_min[0]) & (ump_deg.cprs['lms'] <= brust_min[1]) &
(ump_deg.cprs['lrs'] >= brust_min[2])]
ump_deg.dump_clf(llps)
Explanation: 下面根据上面计算出的最优参数对分类簇集合进行筛选。
分类簇中样本交易获利比例总和小于-0.1
分类簇中样本每笔交易平均获利小于-0.01
分类簇中样本失败率大于0.67
如下代码返回的llps为最终筛选结果, 将筛选后的结果使用dump_clf接口进行保存(最终角度主裁模型)保存在本地,以预备之后对新的交易进行裁决。
End of explanation |
15,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read the Miku data
Step1: Muki NN
<img width=500px src='./muki_nn.png' />
Step2: First layer
Step3: Second layer
Step4: Third layer
Step5: Define Cost Function & Update Function
Step6: Training
Step7: Training - Random Shuffle
Step8: Testing | Python Code:
img_count = 0
def showimg(img):
muki_pr = np.zeros((500,500,3))
l =img.tolist()
count = 0
for x in range(500):
for y in range(500):
muki_pr[y][x] = l[count]
count += 1
plt.imshow(muki_pr)
def saveimg(fname,img):
muki_pr = np.zeros((500,500,3))
l =img.tolist()
count = 0
for x in range(500):
for y in range(500):
muki_pr[y][x] = l[count]
count += 1
plt.imsave(fname,muki_pr)
def read_muki():
img_data = np.random.randn(250000,1)
xy_data = []
import random
f = open('./muki.txt','rb')
count = 0
for line in f:
y,x,c = line.split()
xy_data.append([float(x),float(y)])
x = (float(x) )*100. + 250
y = (float(y) )*100. + 250
c = float(c)
img_data[count] = c
count = count + 1
return np.matrix(xy_data),img_data
xy_data,img_data = read_muki()
showimg(img_data)
print xy_data[:10]
print img_data[:10]
Explanation: Read the Miku data
End of explanation
batch_size = 500
hidden_size = 128
x = T.matrix(name='x',dtype='float32') # size =2
y = T.matrix(name='x',dtype='float32') # size =1
w1 = theano.shared(np.random.randn(hidden_size,2))
b1 = theano.shared(np.random.randn(hidden_size))
w2 = theano.shared(np.random.randn(hidden_size,hidden_size))
b2 = theano.shared(np.random.randn(hidden_size))
w3 = theano.shared(np.random.randn(1,hidden_size))
b3 = theano.shared(np.random.randn(1))
Explanation: Muki NN
<img width=500px src='./muki_nn.png' />
End of explanation
z1 = T.dot(w1,x) + b1.dimshuffle(0,'x')
a1 = 1/(1+T.exp(-z1))
fa1 = theano.function(inputs=[x],outputs=[a1],allow_input_downcast=True)
l1_o= fa1(np.random.randn(2,batch_size))
l1_o= fa1(xy_data[:500].T)
Explanation: First layer
End of explanation
z2 = T.dot(w2,a1) + b2.dimshuffle(0,'x')
a2 = 1/(1+T.exp(-z2))
fa2 = theano.function(inputs=[x],outputs=[a2],allow_input_downcast=True)
l2_o = fa2(np.random.randn(2,batch_size))
l2_o= fa2(xy_data[:500].T)
print l2_o[0].shape
Explanation: Second layer
End of explanation
z3 = T.dot(w3,a2) + b3.dimshuffle(0,'x')
a3 = 1/(1+T.exp(-z3))
fa3 = theano.function(inputs=[x],outputs=[a3],allow_input_downcast=True)
l3_o = fa3(np.random.randn(2,batch_size))
print l3_o[0].shape
Explanation: Third layer
End of explanation
y_hat = T.matrix('reference',dtype='float32')
cost = T.sum((a3-y_hat)**2)/batch_size
dw1,dw2,dw3,db1,db2,db3 = T.grad(cost,[w1,w2,w3,b1,b2,b3])
def Myupdates(ps,gs):
from itertools import izip
r = 0.5
pu = [ (p,p-r*g) for p,g in izip(ps,gs) ]
return pu
train = theano.function(inputs=[x,y_hat],
outputs=[a3,cost],
updates=Myupdates([w1,w2,b1,b2,w3,b3],[dw1,dw2,db1,db2,dw3,db3]),
allow_input_downcast=True,)
Explanation: Define Cost Function & Update Function
End of explanation
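For readers less familiar with Theano, the computation performed by the train function above is sketched below in plain NumPy (an added illustration only; the second hidden layer is omitted for brevity and all values are random stand-ins).
# Illustrative sketch (added): one gradient step of the same sigmoid network in NumPy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
W1, b1 = rng.randn(128, 2), rng.randn(128, 1)
W3 = rng.randn(1, 128)
X = rng.randn(2, 500)          # a batch of (x, y) coordinates, one column per sample
Y = rng.rand(1, 500)           # target grey levels

A1 = sigmoid(W1.dot(X) + b1)   # first hidden layer
A3 = sigmoid(W3.dot(A1))       # output layer
cost = np.mean((A3 - Y) ** 2)  # same squared-error cost as above
# Gradient for the output weights only, as an example of what T.grad automates:
dZ3 = 2 * (A3 - Y) * A3 * (1 - A3) / Y.shape[1]
W3 -= 0.5 * dZ3.dot(A1.T)      # learning rate 0.5, matching the Theano updates
print(cost)
# End of sketch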
def training(xy_data,img_data):
for ii in range(1000):
for i in range(500):
start = i * 500
end = start + 500
img_predict,cost_predict = train(xy_data[start:end].T,img_data[start:end].T)
if ii % 5 == 0:
saveimg('./imgs/muki_'+ str(ii) +'.png', fa3(xy_data.T)[0].T)
print cost_predict,
training(xy_data,img_data)
Explanation: Training
End of explanation
from random import shuffle  # shuffle is used below but was not imported at module level
all_data = zip(xy_data,img_data)
shuffle(all_data)
temp_xy = []
temp_data = []
for row in all_data:
temp_xy.append(row[0].tolist()[0])
temp_data.append(row[1])
s_data = np.matrix(temp_data)
s_xy = np.matrix(temp_xy)
Explanation: Training - Random Shuffle
End of explanation
showimg(fa3(xy_data.T)[0].T)  # use the output layer (fa3); fa2 returns the 128-unit hidden activations, which showimg cannot display
Explanation: Testing
End of explanation |
15,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerically solving differential equations with python
This is a brief description of what numerical integration is and a practical tutorial on how to do it in Python.
Software required
In order to run this notebook in your own computer, you need to install the following software
Step1: Why use scientific libraries?
The method we just used above is called the Euler method, and is the simplest one available. The problem is that, although it works reasonably well for the differential equation above, in many cases it doesn't perform very well. There are many ways to improve it
Step2: We get a much better approximation now, the two curves superimpose each other!
Now, what if we wanted to integrate a system of differential equations? Let's take the Lotka-Volterra equations
Step3: An interesting thing to do here is take a look at the phase space, that is, plot only the dependent variables, without respect to time | Python Code:
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
# time intervals
dt = 0.5
tt = arange(0, 10, dt)
# initial condition
xx = [0.1]
def f(x):
return x * (1.-x)
# loop over time
for t in tt[1:]:
xx.append(xx[-1] + dt * f(xx[-1]))
# plotting
plot(tt, xx, '.-')
ta = arange(0, 10, 0.01)
plot(ta, 0.1 * exp(ta)/(1+0.1*(exp(ta)-1.)))
xlabel('t')
ylabel('x')
legend(['approximation', 'analytical solution'], loc='best',)
Explanation: Numerically solving differential equations with python
This is a brief description of what numerical integration is and a practical tutorial on how to do it in Python.
Software required
In order to run this notebook in your own computer, you need to install the following software:
python
numpy and scipy - python scientific libraries
matplotlib - a library for plotting
the IPython notebook (now renamed to Jupyter)
On Windows and Mac, we recommend installing the Anaconda distribution, which includes all of the above in a single package (among several other libraries), available at https://www.anaconda.com/distribution/
On Linux, you can install everything using your distribution's preferred way, e.g.:
Debian/Ubuntu: sudo apt-get install python-numpy python-scipy python-matplotlib python-ipython-notebook
Fedora: sudo yum install python-numpy python-scipy python-matplotlib python-ipython-notebook
Arch: sudo pacman -S python-numpy python-scipy python-matplotlib jupyter
Code snippets shown here can also be copied into a pure text file with .py extension and ran outside the notebook (e.g., in an python or ipython shell).
From the web
Alternatively, you can use a service that runs notebooks on the cloud, e.g. SageMathCloud or wakari. It is possible to visualize publicly-available notebooks using http://nbviewer.ipython.org, but no computation can be performed (it just shows saved pre-calculated results).
How numerical integration works
Let's say we have a differential equation whose (analytical) solution we don't know how (or don't want) to derive. We can still find out what the solutions are through numerical integration. So, how does that work?
The idea is to approximate the solution at successive small time intervals, extrapolating the value of the derivative over each interval. For example, let's take the differential equation
$$ \frac{dx}{dt} = f(x) = x (1 - x) $$
with an initial value $x_0 = 0.1$ at an initial time $t=0$ (that is, $x(0) = 0.1$). At $t=0$, the derivative $\frac{dx}{dt}$ values $f(0.1) = 0.1 \times (1-0.1) = 0.09$. We pick a small interval step, say, $\Delta t = 0.5$, and assume that that value of the derivative is a good approximation over the whole interval from $t=0$ up to $t=0.5$. This means that in this time $x$ is going to increase by $\frac{dx}{dt} \times \Delta t = 0.09 \times 0.5 = 0.045$. So our approximate solution for $x$ at $t=0.5$ is $x(0) + 0.045 = 0.145$. We can then use this value of $x(0.5)$ to calculate the next point in time, $t=1$. We calculate the derivative at each step, multiply by the time step and add to the previous value of the solution, as in the table below:
| $t$ | $x$ | $\frac{dx}{dt}$ |
| ---:|---------:|----------:|
| 0 | 0.1 | 0.09 |
| 0.5 | 0.145 | 0.123975 |
| 1.0 | 0.206987 | 0.164144 |
| 1.5 | 0.289059 | 0.205504 |
| 2.0 | 0.391811 | 0.238295 |
Of course, this is terribly tedious to do by hand, so we can write a simple program to do it and plot the solution. Below we compare it to the known analytical solution of this differential equation (the logistic equation). Don't worry about the code just yet: there are better and simpler ways to do it!
End of explanation
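The hand calculation above can also be wrapped in a small reusable function; this is an added sketch of the same Euler scheme (it is not needed later, since odeint is used instead).
# Illustrative sketch (added): the Euler scheme from the table, as a function.
from numpy import arange, array

def euler(f, x0, t):
    x = [x0]
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        x.append(x[-1] + dt * f(x[-1]))
    return array(x)

# Same example as above: dx/dt = x (1 - x), x(0) = 0.1, step 0.5
approx = euler(lambda x: x * (1. - x), 0.1, arange(0, 10, 0.5))
print(approx[:5])
# End of sketch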
# everything after a '#' is a comment
## we begin importing libraries we are going to use
# import all (*) functions from numpy library, eg array, arange etc.
from numpy import *
# import all (*) interactive plotting functions, eg plot, xlabel etc.
from matplotlib.pyplot import *
# import the numerical integrator we will use, odeint()
from scipy.integrate import odeint
# time steps: an array of values starting from 0 going up to (but
# excluding) 10, in steps of 0.1
t = arange(0, 10., .1)
# parameters
r = 2.
K = 10.
# initial condition
x0 = 0.1
# let's define the right-hand side of the differential equation
# It must be a function of the dependent variable (x) and of the
# time (t), even if time does not appear explicitly
# this is how you define a function:
def f(x, t, r, K):
# in python, there are no curling braces '{}' to start or
# end a function, nor any special keyword: the block is defined
# by leading spaces (usually 4)
# arithmetic is done the same as in other languages: + - * /
return r*x*(1-x/K)
# call the function that performs the integration
# the order of the arguments is as below: the derivative function,
# the initial condition, the points where we want the solution, and
# a list of parameters
x = odeint(f, x0, t, (r, K))
# plot the solution
plot(t, x, '.')
xlabel('t') # define label of x-axis
ylabel('x') # and of y-axis
tt = arange(0, 10, 0.01)
# plot analytical solution
# notice that `t` is an array: when you do any arithmetical operation
# with an array, it is the same as doing it for each element
plot(tt, K * x0 * exp(r*tt)/(K+x0*(exp(r*tt)-1.)))
legend(['approximation', 'analytical solution'], loc='best') # draw legend
Explanation: Why use scientific libraries?
The method we just used above is called the Euler method, and is the simplest one available. The problem is that, although it works reasonably well for the differential equation above, in many cases it doesn't perform very well. There are many ways to improve it: in fact, there are many books entirely dedicated to this. Although many math or physics students do learn how to implement more sophisticated methods, the topic is really deep. Luckily, we can rely on the expertise of lots of people to come up with good algorithms that work well in most situations.
Then, how... ?
We are going to demonstrate how to use scientific libraries to integrate differential equations. Although the specific commands depend on the software, the general procedure is usually the same:
define the derivative function (the right hand side of the differential equation)
choose a time step or a sequence of times where you want the solution
provide the parameters and the initial condition
pass the function, time sequence, parameters and initial conditions to a computer routine that runs the integration.
A single equation
So, let's start with the same equation as above, the logistic equation, now with any parameters for growth rate and carrying capacity:
$$ \frac{dx}{dt} = f(x) = r x \left(1 - \frac{x}{K} \right) $$
with $r=2$, $K=10$ and $x(0) = 0.1$. We show how to integrate it using python below, introducing key language syntax as necessary.
End of explanation
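An added aside: newer SciPy versions also offer solve_ivp, which plays the same role as odeint but expects the derivative as f(t, x); it is not used in the rest of this tutorial.
# Illustrative sketch (added): the same logistic equation with scipy's solve_ivp.
from scipy.integrate import solve_ivp
from numpy import arange

def f_ivp(t, x, r, K):
    return r * x * (1 - x / K)

# the `args` keyword requires SciPy >= 1.4
sol = solve_ivp(f_ivp, (0., 10.), [0.1], args=(2., 10.), t_eval=arange(0, 10., .1))
# sol.t holds the times, sol.y[0] the solution values
# End of sketch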
# we didn't need to do this again: if the cell above was run already,
# the libraries are imported, but we repeat it here for convenience
from numpy import *
from matplotlib.pyplot import *
from scipy.integrate import odeint
t = arange(0, 50., 0.1)
# parameters
r = 2.
c = 0.5
e = 0.1
d = 1.
# initial condition: this is an array now!
x0 = array([1., 3.])
# the function still receives only `x`, but it will be an array, not a number
def LV(x, t, r, c, e, d):
# in python, arrays are numbered from 0, so the first element
# is x[0], the second is x[1]. The square brackets `[ ]` define a
# list, that is converted to an array using the function `array()`.
# Notice that the first entry corresponds to dV/dt and the second to dP/dt
return array([ r*x[0] - c * x[0] * x[1],
e * c * x[0] * x[1] - d * x[1] ])
# call the function that performs the integration
# the order of the arguments is as below: the derivative function,
# the initial condition, the points where we want the solution, and
# a list of parameters
x = odeint(LV, x0, t, (r, c, e, d))
# Now `x` is a 2-dimension array of size 5000 x 2 (5000 time steps by 2
# variables). We can check it like this:
print('shape of x:', x.shape)
# plot the solution
plot(t, x)
xlabel('t') # define label of x-axis
ylabel('populations') # and of y-axis
legend(['V', 'P'], loc='upper right')
Explanation: We get a much better approximation now, the two curves superimpose each other!
Now, what if we wanted to integrate a system of differential equations? Let's take the Lotka-Volterra equations:
$$ \begin{aligned}
\frac{dV}{dt} &= r V - c V P\\
\frac{dP}{dt} &= ec V P - dP
\end{aligned}$$
In this case, the variable is no longer a number, but an array [V, P]. We do the same as before, but now x is going to be an array:
End of explanation
# `x[0,0]` is the first value (1st line, 1st column), `x[0,1]` is the value of
# the 1st line, 2nd column, which corresponds to the value of P at the initial
# time. We plot just this point first to know where we started:
plot(x[0,0], x[0,1], 'o')
print('Initial condition:', x[0])
# `x[0]` or (equivalently) x[0,:] is the first line, and `x[:,0]` is the first
# column. Notice the colon `:` stands for all the values of that axis. We are
# going to plot the second column (P) against the first (V):
plot(x[:,0], x[:,1])
xlabel('V')
ylabel('P')
# Let's calculate and plot another solution with a different initial condition
x2 = odeint(LV, [10., 4.], t, (r, c, e, d))
plot(x2[:,0], x2[:,1])
plot(x2[0,0], x2[0,1], 'o')
Explanation: An interesting thing to do here is take a look at the phase space, that is, plot only the dependent variables, without respect to time:
End of explanation |
15,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
This Python notebook demonstrates how OASIS can be used to efficiently evaluate a classifier, based on an example dataset from the entity resolution domain.
We begin by loading the required packages (including OASIS) and setting the random seeds for reproducibility.
Step1: Example dataset
The dataset we shall use for this tutorial is derived from the Amazon-GoogleProducts dataset available from here. It is described in the following publication
Step2: Evaluating the classifier
Our goal is to estimate the F1-score of the classifier by sequentially labelling items in the test set. This example is somewhat contrived since we already know the ground truth labels (they are included with the test set). However, we can simulate the labelling by defining an oracle which looks up the labels as follows
Step3: In the following experiments, we shall adopt the parameter settings below
Step4: OASIS
Here we use the OASIS method to estimate the F1-score. The first step is to initialise the sampler.
Step5: Next we query n_labels sequentially.
Step6: Finally, we plot the history of estimates to check for convergence. Since we already know the true value of the F1-score for this example (because we were given all of the labels in advance), we have indicated it on the plot using a red line.
Step7: Other samplers
For comparison, we repeat the evaluation using two alternative sampling methods available in the OASIS package.
First, we test the basic passive sampling method. It performs poorly due to the extreme class imbalance. Of the 5000 labels queried, none of them correspond to a true positive, yielding an incorrect estimate for the F1-score of 0.0.
Step8: The non-adaptive importance sampling method fares better, yielding a decent estimate after consuming 5000 labels. However, it takes longer to converge than OASIS. | Python Code:
import numpy as np
import random
import oasis
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(319158)
random.seed(319158)
Explanation: Tutorial
This Python notebook demonstrates how OASIS can be used to efficiently evaluate a classifier, based on an example dataset from the entity resolution domain.
We begin by loading the required packages (including OASIS) and setting the random seeds for reproducibility.
End of explanation
data = oasis.Data()
data.read_h5('Amazon-GoogleProducts-test.h5')
data.calc_true_performance() #: calculate true precision, recall, F1-score
Explanation: Example dataset
The dataset we shall use for this tutorial is derived from the Amazon-GoogleProducts dataset available from here. It is described in the following publication:
H. Köpcke, A. Thor, and E. Rahm. "Evaluation of entity resolution approaches on real-world match problems." Proceedings of the VLDB Endowment 3.1-2 (2010): 484-493.
The dataset consists of product listings from two e-commerce websites: Amazon and Google Products (which no longer exists as of 2017). Our goal is to train a classifier to identify pairs of records across the two data sources which refer to the same products. This involves forming the cross join of the two data sources and classifying each pair of records as a "match" or "non-match". Since the focus of this notebook is evaluation, we shall not demonstrate how to build the classifier here. Instead, we shall load the data from a classifier we prepared earlier.
Loading the data
Using our pre-trained classifier, we calculated predictions and scores on a test set containing 676,267 record pairs. The data is stored in HDF5 format and is available in the GitHub repository.
Below, we make use of the Data class in the OASIS package to read the HDF file into memory.
End of explanation
def oracle(idx):
return data.labels[idx]
Explanation: Evaluating the classifier
Our goal is to estimate the F1-score of the classifier by sequentially labelling items in the test set. This example is somewhat contrived since we already know the ground truth labels (they are included with the test set). However, we can simulate the labelling by defining an oracle which looks up the labels as follows:
End of explanation
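An added sanity check (assuming data.preds and data.labels are aligned binary arrays, as the oracle above suggests): since all labels are known here, the exact F1-score can also be computed directly for later comparison.
# Illustrative sketch (added): exact F1-score from the full label set.
from sklearn.metrics import f1_score
true_f1 = f1_score(data.labels, data.preds)
print(true_f1)   # should agree with the data.F1_measure value used later in this notebook
# End of sketch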
alpha = 0.5 #: corresponds to F1-score
n_labels = 5000 #: stop sampling after querying this number of labels
max_iter = 1e6 #: maximum no. of iterations that can be stored
Explanation: In the following experiments, we shall adopt the parameter settings below:
End of explanation
smplr = oasis.OASISSampler(alpha, data.preds, data.scores, oracle, max_iter=max_iter)
Explanation: OASIS
Here we use the OASIS method to estimate the F1-score. The first step is to initialise the sampler.
End of explanation
smplr.sample_distinct(n_labels)
Explanation: Next we query n_labels sequentially.
End of explanation
def plt_estimates(smplr, true_value):
plt.plot(smplr.estimate_[smplr.queried_oracle_])
plt.axhline(y=true_value, color='r')
plt.xlabel("Label budget")
plt.ylabel("Estimate of F1-score")
plt.ylim(0,1)
plt.show()
plt_estimates(smplr, data.F1_measure)
Explanation: Finally, we plot the history of estimates to check for convergence. Since we already know the true value of the F1-score for this example (because we were given all of the labels in advance), we have indicated it on the plot using a red line.
End of explanation
pass_smplr = oasis.PassiveSampler(alpha, data.preds, oracle, max_iter=max_iter)
pass_smplr.sample_distinct(n_labels)
plt_estimates(pass_smplr, data.F1_measure)
Explanation: Other samplers
For comparison, we repeat the evaluation using two alternative sampling methods available in the OASIS package.
First, we test the basic passive sampling method. It performs poorly due to the extreme class imbalance. Of the 5000 labels queried, none of them correspond to a true positive, yielding an incorrect estimate for the F1-score of 0.0.
End of explanation
is_smplr = oasis.ImportanceSampler(alpha, data.preds, data.scores, oracle, max_iter=max_iter)
is_smplr.sample_distinct(n_labels)
plt_estimates(is_smplr, data.F1_measure)
Explanation: The non-adaptive importance sampling method fares better, yielding a decent estimate after consuming 5000 labels. However, it takes longer to converge than OASIS.
End of explanation |
15,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning with TensorFlow
Credits
Step1: Reformat into a TensorFlow-friendly shape
Step2: Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import cPickle as pickle
import numpy as np
import tensorflow as tf
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 4
Previously in 2_fullyconnected.ipynb and 3_regularization.ipynb, we trained fully connected networks to classify notMNIST characters.
The goal of this exercise is to make the neural network convolutional.
End of explanation
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
dataset = dataset.reshape(
(-1, image_size, image_size, num_channels)).astype(np.float32)
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print 'Training set', train_dataset.shape, train_labels.shape
print 'Validation set', valid_dataset.shape, valid_labels.shape
print 'Test set', test_dataset.shape, test_labels.shape
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
Explanation: Reformat into a TensorFlow-friendly shape:
- convolutions need the image data formatted as a cube (width by height by #channels)
- labels as float 1-hot encodings.
End of explanation
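The one-line one-hot encoding used in reformat can be puzzling at first sight; the added snippet below shows what the broadcast comparison produces on a tiny example.
# Illustrative sketch (added): how the broadcasting one-hot trick works.
import numpy as np
labels_demo = np.array([0, 2, 1])
one_hot = (np.arange(3) == labels_demo[:, None]).astype(np.float32)
print(one_hot)
# [[ 1.  0.  0.]
#  [ 0.  0.  1.]
#  [ 0.  1.  0.]]
# End of sketch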
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size / 4 * image_size / 4 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 1001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print "Initialized"
for step in xrange(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print "Minibatch loss at step", step, ":", l
print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)
print "Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels)
print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)
Explanation: Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.
End of explanation |
15,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP)
Decoding of motor imagery applied to EEG data decomposed using CSP.
Here the classifier is applied to features extracted on CSP filtered signals.
See https
Step1: Classification with linear discriminant analysis
Step2: Look at performance over time | Python Code:
# Authors: Martin Billinger <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import ShuffleSplit, cross_val_score
from mne import Epochs, pick_types, events_from_annotations
from mne.channels import make_standard_montage
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
print(__doc__)
# #############################################################################
# # Set parameters and read data
# avoid classification of evoked responses by using epochs that start 1s after
# cue onset.
tmin, tmax = -1., 4.
event_id = dict(hands=2, feet=3)
subject = 1
runs = [6, 10, 14] # motor imagery: hands vs feet
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
eegbci.standardize(raw) # set channel names
montage = make_standard_montage('standard_1005')
raw.set_montage(montage)
# strip channel names of "." characters
raw.rename_channels(lambda x: x.strip('.'))
# Apply band-pass filter
raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge')
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
# Read epochs (train will be done only between 1 and 2s)
# Testing will be done with a running classifier
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=None, preload=True)
epochs_train = epochs.copy().crop(tmin=1., tmax=2.)
labels = epochs.events[:, -1] - 2
Explanation: Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP)
Decoding of motor imagery applied to EEG data decomposed using CSP.
Here the classifier is applied to features extracted on CSP filtered signals.
See https://en.wikipedia.org/wiki/Common_spatial_pattern and [1]. The EEGBCI
dataset is documented in [2]. The data set is available at PhysioNet [3]_.
References
.. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping
of the abnormal components in the clinical EEG. Electroencephalography
and Clinical Neurophysiology, 79(6):440--447, December 1991.
.. [2] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,
Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer Interface
(BCI) System. IEEE TBME 51(6):1034-1043.
.. [3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG,
Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000) PhysioBank,
PhysioToolkit, and PhysioNet: Components of a New Research Resource for
Complex Physiologic Signals. Circulation 101(23):e215-e220.
End of explanation
# Define a monte-carlo cross-validation generator (reduce variance):
scores = []
epochs_data = epochs.get_data()
epochs_data_train = epochs_train.get_data()
cv = ShuffleSplit(10, test_size=0.2, random_state=42)
cv_split = cv.split(epochs_data_train)
# Assemble a classifier
lda = LinearDiscriminantAnalysis()
csp = CSP(n_components=4, reg=None, log=True, norm_trace=False)
# Use scikit-learn Pipeline with cross_val_score function
clf = Pipeline([('CSP', csp), ('LDA', lda)])
scores = cross_val_score(clf, epochs_data_train, labels, cv=cv, n_jobs=1)
# Printing the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
class_balance))
# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)
csp.plot_patterns(epochs.info, ch_type='eeg', units='Patterns (AU)', size=1.5)
Explanation: Classification with linear discriminant analysis
End of explanation
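The CSP features are not tied to LDA; as an added illustration, any scikit-learn classifier can be dropped into the same pipeline, for example a logistic regression. This variant is not part of the original example.
# Illustrative sketch (added): same CSP features, different classifier.
from sklearn.linear_model import LogisticRegression
clf_lr = Pipeline([('CSP', CSP(n_components=4, reg=None, log=True, norm_trace=False)),
                   ('LR', LogisticRegression(solver='liblinear'))])
scores_lr = cross_val_score(clf_lr, epochs_data_train, labels, cv=cv, n_jobs=1)
print("LogReg accuracy: %f" % np.mean(scores_lr))
# End of sketch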
sfreq = raw.info['sfreq']
w_length = int(sfreq * 0.5) # running classifier: window length
w_step = int(sfreq * 0.1) # running classifier: window step size
w_start = np.arange(0, epochs_data.shape[2] - w_length, w_step)
scores_windows = []
for train_idx, test_idx in cv_split:
y_train, y_test = labels[train_idx], labels[test_idx]
X_train = csp.fit_transform(epochs_data_train[train_idx], y_train)
X_test = csp.transform(epochs_data_train[test_idx])
# fit classifier
lda.fit(X_train, y_train)
# running classifier: test classifier on sliding window
score_this_window = []
for n in w_start:
X_test = csp.transform(epochs_data[test_idx][:, :, n:(n + w_length)])
score_this_window.append(lda.score(X_test, y_test))
scores_windows.append(score_this_window)
# Plot scores over time
w_times = (w_start + w_length / 2.) / sfreq + epochs.tmin
plt.figure()
plt.plot(w_times, np.mean(scores_windows, 0), label='Score')
plt.axvline(0, linestyle='--', color='k', label='Onset')
plt.axhline(0.5, linestyle='-', color='k', label='Chance')
plt.xlabel('time (s)')
plt.ylabel('classification accuracy')
plt.title('Classification score over time')
plt.legend(loc='lower right')
plt.show()
Explanation: Look at performance over time
End of explanation |
15,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manipulation de séries financières avec la classe StockPrices
La classe StockPrices facilite la récupération de données financières via différents sites. Le site Yahoo Finance requiet maintenant un cookie (depuis Mai 2017) et il est préférable de choisir Google ou Quandl. Google ne fonctionne que les marchés américains, Quandl a des historiques plus courts.
Step1: Initialisation
Step2: Créer un objet StockPrices
Le plus est d'utiliser le tick de la série financière utilisé par le site Yahoo Finance ou Google Finance qui fait maintenant partie du moteur de recherche ou quandl.
Step3: La classe <tt>StockPrices</tt> contient un objet <a href="http
Step4: De la même manière, on peut créer un objet <tt>StockPrices</tt> à partir d'un DataFrame
Step5: Quelques graphes
Premier dessin, on télécharge les données de BNP puis on dessine le cours de l'action.
Step6: La même chose se produit sur une autre série financière mais pas à la même date. On trace maintenant la série Open (Adj Close défini
sur cette page View and download historical price, dividend, or split data n'est disponible qu'avec Yahoo).
Step7: Ce type de série ne fait pas toujours apparaître les saut de prix qui survient comme par exemple le <a href="http
Step8: Quelques opérations
Step9: On affiche les dernières lignes.
Step10: On récupère la série des rendements.
Step11: On trace la série des rendements pour les derniers mois.
Step12: Quelques notions sur les dates
La classe <i>StockPrices</i> utilise les dates sous forme de chaînes de caractères. De cette façon, il n'est pas possible de faire des opérations dessus. Pour ce faire, il faut les convertir en un objet appelé <a href="https
Step13: On ajoute un jour
Step14: Puis on convertit dans l'autre sens
Step15: Promenade dans l'index
Il est facile de récupérer les valeurs correspondant à une date précise. Mais comment récupérer la valeur du jour d'après ?
Step16: Sauver les tables
On peut conserver les données sous forme de fichiers pour les récupérer plus tard.
Step17: Le fichier est sauvé. Pour le récupérer avec pandas
Step18: Les dates apparaissent deux fois.
Step19: Cela est dû au fait que les dates sont à la fois une colonne et servent d'index. Pour éviter de les conserver deux fois, on demande explicitement à ce que l'index ne soit pas ajouté au fichier
Step20: Puis on récupère les données
Step21: On vérifie le fichier sur disque dur | Python Code:
import pyensae
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
Explanation: Manipulation de séries financières avec la classe StockPrices
La classe StockPrices facilite la récupération de données financières via différents sites. Le site Yahoo Finance requiet maintenant un cookie (depuis Mai 2017) et il est préférable de choisir Google ou Quandl. Google ne fonctionne que les marchés américains, Quandl a des historiques plus courts.
End of explanation
import pyensae
import os
from pyensae.finance import StockPrices
cache = os.path.abspath("cache")
if not os.path.exists(cache):
os.mkdir(cache)
Explanation: Initialisation
End of explanation
source = 'yahoo_new'
tick = 'MSFT'
stock = StockPrices(tick, folder=cache, url=source)
stock.head()
stock.tail()
Explanation: Créer un objet StockPrices
Le plus est d'utiliser le tick de la série financière utilisé par le site Yahoo Finance ou Google Finance qui fait maintenant partie du moteur de recherche ou quandl.
End of explanation
stock.dataframe.columns
Explanation: La classe <tt>StockPrices</tt> contient un objet <a href="http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.html">pandas.DataFrame</a> auquel on accède en écrivant <tt>stock.dataframe</tt> ou <tt>stock.df</tt> :
End of explanation
import pandas
data = [{"Date":"2014-04-01", "Close":105.6}, {"Date":"2014-04-02", "Close":104.6},
{"Date":"2014-04-03", "Close":105.8}, ]
df = pandas.DataFrame(data)
stock = StockPrices("donnees",df)
stock.head()
Explanation: De la même manière, on peut créer un objet <tt>StockPrices</tt> à partir d'un DataFrame :
End of explanation
import datetime
stock = StockPrices(tick, folder=cache, url=source)
ax = StockPrices.draw(stock, figsize=(12,6))
stock = StockPrices(tick, folder=cache, url=source)
StockPrices.draw(stock, figsize=(12,6));
Explanation: Quelques graphes
Premier dessin, on télécharge les données de BNP puis on dessine le cours de l'action.
End of explanation
stock = StockPrices("MSFT", folder=cache, url='yahoo')
StockPrices.draw(stock, field=["Open", "Close"], figsize=(12,6));
Explanation: La même chose se produit sur une autre série financière mais pas à la même date. On trace maintenant la série Open (Adj Close défini
sur cette page View and download historical price, dividend, or split data n'est disponible qu'avec Yahoo).
End of explanation
stock.head()
stock = StockPrices(tick)
ret = stock.returns()["2019-01-04":"2019-02-02"]
ret.dataframe.loc["2019-01-11":"2019-01-18","Close"] = 0 # on annule certains valeurs
ax = stock.plot(figsize=(16, 5))
ret.plot(axis=2, ax=ax, label_prefix="r", color='blue');
Explanation: This kind of series does not always show price jumps such as the one of <a href="http://invest.bnpparibas.com/fr/pid5900/en-bref.html">February 20, 2002</a>, when the nominal price of the BNP share was halved to increase liquidity: the number of shares was doubled. The data are most often corrected for this (BNP, February 2002).
Adding a second series to a plot
In the following example, we plot a financial series and then add the series of returns on a second axis.
End of explanation
os.listdir(cache)
Explanation: A few operations
End of explanation
stock.tail()
Explanation: We display the last rows.
End of explanation
ret = stock.returns()
ret.tail()
Explanation: We compute the series of returns.
End of explanation
StockPrices.draw(ret, figsize=(12,6), begin="2013-12-01", date_format="%Y-%m");
Explanation: We plot the series of returns for the last few months.
End of explanation
from datetime import datetime, timedelta
dt = datetime.strptime("2014-03-31","%Y-%m-%d")
dt
Explanation: A few notes about dates
The <i>StockPrices</i> class handles dates as strings, so no date arithmetic can be done on them directly. To do so, they must first be converted into a <a href="https://docs.python.org/2/library/datetime.html">datetime</a> object.
End of explanation
delta = timedelta(1)
dt = dt + delta
dt
Explanation: We add one day:
End of explanation
s = dt.strftime("%Y-%m-%d")
s
Explanation: Then we convert back the other way:
End of explanation
tick2 = 'GOOGL'
stock = StockPrices(tick2, folder=cache, url=source)
df = stock.dataframe
print("A", df["2005-01-04":"2005-01-06"])
print("D", df.loc["2005-01-04","Close"])
print("G", df.index.get_loc("2005-01-06")) # retourne la position de cette date
Explanation: Walking through the index
It is easy to retrieve the values for a given date. But how do we get the value of the following day?
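A minimal sketch (reusing the df defined above; purely illustrative): get_loc gives the integer position of a date in the index, and iloc then fetches the following row.
```python
pos = df.index.get_loc("2005-01-06")   # integer position of this date in the index
next_row = df.iloc[pos + 1]            # the row for the next trading day
next_row["Close"]
```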
End of explanation
stock = StockPrices(tick2, folder=cache, url=source)
stock.dataframe.to_csv("donnees.txt", sep="\t")
[_ for _ in os.listdir(".") if "donnees" in _]
Explanation: Saving the tables
The data can be saved to files so that it can be loaded again later.
End of explanation
import pandas
df = pandas.read_csv("donnees.txt", sep="\t")
df.head()
Explanation: The file is saved. To load it back with pandas:
End of explanation
with open("donnees.txt","r") as f:
text = f.read()
print(text[:400])
Explanation: The dates appear twice.
End of explanation
stock = StockPrices(tick2, folder=cache, url=source)
stock.dataframe.to_csv("donnees.txt", sep="\t", index=False)
Explanation: This is because the dates are both a column and the index. To avoid keeping them twice, we explicitly request that the index not be added to the file:
End of explanation
df = pandas.read_csv("donnees.txt",sep="\t")
df.head()
Explanation: Then we load the data back:
End of explanation
with open("donnees.txt", "r") as f:
text = f.read()
print(text[:400])
Explanation: We check the file on disk:
End of explanation |
15,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is an example for the displaytools extension for the IPython Notebook.
The extension introduces some "magic" comments (like ## and ##
Step1: Note that the equation sign (i.e., =) must be enclosed by two spaces, i.e.
Step2: Printing can be combined with LaTeX rendering
Step3: If there is no assignment taking place, ## nevertheless causes the display of the respective result.
Step4: Transposition
Sometimes, it can save much space if some return value is displayed in transposed form (while still being assigned not transposed). Compare these examples | Python Code:
%load_ext displaytools3
%reload_ext displaytools3
import sympy as sp
from sympy import sin, cos
from sympy.abc import t, pi
x = 2*pi*t
y1 = cos(x)
y2 = cos(x)*t
ydot1 = y1.diff(t) ##
ydot2 = y2.diff(t) ##
ydot1_obj = y1.diff(t, evaluate=False) ##
Explanation: This is an example for the displaytools extension for the IPython Notebook.
The extension introduces some "magic" comments (like ## and ##: ) which trigger additional output (normally only the return value of the last line of a cell is printed). See Why is this useful?
End of explanation
ydot1 = y1.diff(t) ##:
ydot2 = y2.diff(t) ##:
ydot1_obj = y1.diff(t, evaluate=False) ##:
Explanation: Note that the equation sign (i.e., =) must be enclosed by two spaces, i.e.: lhs = rhs.
If the variable name is also desired this can be triggered by ##:
End of explanation
sp.interactive.printing.init_printing(1)
ydot1 = y1.diff(t) ##:
ydot2 = y2.diff(t) ##:
ydot1_obj = y1.diff(t, evaluate=False) ##:
Explanation: Printing can be combined with LaTeX rendering:
End of explanation
y1.diff(t,t) ##
y2.diff(t,t) ##
Explanation: If there is no assignment taking place, ## nevertheless causes the display of the respective result.
End of explanation
xx = sp.Matrix(sp.symbols('x1:11')) ##
yy = sp.Matrix(sp.symbols('y1:11')) ##:T
xx.shape, yy.shape ##
# combination with other comments
a = 3 # comment ##:
# Multiline statements and indended lines are not yet supported:
a = [1,
2] ##:
if 1:
b = [10, 20] ##:
c = [100, 200] ##:
Explanation: Transposition
Sometimes, it can save much space if some return value is displayed in transposed form (while still being assigned not transposed). Compare these examples:
End of explanation |
15,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
Step1: EECS 445
Step3: Dimensionality Reduction
Data may even be embedded in a low-dimensional nonlinear manifold.
- How can we recover a low-dimensional representation?
<img src="images/swiss-roll.png">
Dimensionality Reduction
As an even more extreme example, consider a dataset consisting of the same image translated and rotated in different directions
Step4: Example
Step5: Break time!
<img src="images/finger_cat.gif"/>
Soon to come | Python Code:
from __future__ import division
# plotting
%matplotlib inline
from matplotlib import pyplot as plt;
import matplotlib as mpl;
from mpl_toolkits.mplot3d import Axes3D
# scientific
import numpy as np;
import sklearn as skl;
import sklearn.datasets;
import sklearn.cluster;
import sklearn.mixture;
# ipython
import IPython;
# python
import os;
import random;
#####################################################
# image processing
import PIL;
# trim and scale images
def trim(im, percent=100):
print("trim:", percent);
bg = PIL.Image.new(im.mode, im.size, im.getpixel((0,0)))
diff = PIL.ImageChops.difference(im, bg)
diff = PIL.ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
x = im.crop(bbox)
return x.resize(((x.size[0]*percent)//100,
(x.size[1]*percent)//100),
PIL.Image.ANTIALIAS);
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
\newcommand{\Cat}{\mathrm{Categorical}}
\newcommand{\Bin}{\mathrm{Binomial}}
$$
End of explanation
def plot_plane():
# random samples
n = 200;
data = np.random.random((3,n));
data[2,:] = 0.4 * data[1,:] + 0.6 * data[0,:];
# plot plane
fig = plt.figure(figsize=(10,6));
ax = fig.add_subplot(111, projection="3d");
ax.scatter(*data);
plot_plane()
Explanation: EECS 445: Machine Learning
Lecture 14: Unsupervised Learning: PCA and Clustering
Instructor: Jacob Abernethy
Date: November 2, 2016
Midterm exam information
Statistics:
<span>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Total Score</th>
</tr>
</thead>
<tbody>
<tr>
<th>count</th>
<td>141.0</td>
</tr>
<tr>
<th>mean</th>
<td>48.7</td>
</tr>
<tr>
<th>std</th>
<td>12.4</td>
</tr>
<tr>
<th>min</th>
<td>15.0</td>
</tr>
<tr>
<th>25%</th>
<td>40.0</td>
</tr>
<tr>
<th>50%</th>
<td>50.0</td>
</tr>
<tr>
<th>75%</th>
<td>58.0</td>
</tr>
<tr>
<th>max</th>
<td>72.0</td>
</tr>
</tbody>
</table>
</span>
Score histogram
<img src="images/midterm_dist.png">
Announcements
We will be updating the HW4 to give you all an extra week
We will add a "free form" ML challenge via Kaggle as well
Want a regrade on the midterm? We'll post a policy soon.
References
[MLAPP] Murphy, Kevin. Machine Learning: A Probabilistic Perspective. 2012.
[PRML] Bishop, Christopher. Pattern Recognition and Machine Learning. 2006.
Goal Today: Methods for Unsupervised Learning
We generally call a problem "Unsupervised" when we don't have any labels!
Outline
Principal Component Analysis
Classical View
Low dimensional representation of data
Relationship to SVD
Clustering
Core idea behind clustering
K-means algorithm
K-means++ etc.
Principal Components Analysis
Uses material from [MLAPP] and [PRML]
Dimensionality Reduction
High-dimensional data may have low-dimensional structure.
- We only need two dimensions to describe a rotated plane in 3d!
End of explanation
## scikit example: Faces recognition example using eigenfaces and SVMs
from __future__ import print_function
from time import time
import matplotlib.pyplot as plt
from sklearn.cross_validation import train_test_split
from sklearn.datasets import fetch_lfw_people
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import RandomizedPCA
from sklearn.svm import SVC
###############################################################################
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
# for machine learning we use the 2 data directly (as relative pixel
# positions info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]
# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
###############################################################################
# Split into a training set and a test set using a stratified k fold
# split into a training and testing set
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
###############################################################################
# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled
# dataset): unsupervised feature extraction / dimensionality reduction
n_components = 150
#print("Extracting the top %d eigenfaces from %d faces"
# % (n_components, X_train.shape[0]))
#t0 = time()
pca = RandomizedPCA(n_components=n_components, whiten=True).fit(X_train)
#print("done in %0.3fs" % (time() - t0))
eigenfaces = pca.components_.reshape((n_components, h, w))
#print("Projecting the input data on the eigenfaces orthonormal basis")
#t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
#print("done in %0.3fs" % (time() - t0))
###############################################################################
# Train a SVM classification model
#print("Fitting the classifier to the training set")
#t0 = time()
param_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5],
'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1], }
clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
clf = clf.fit(X_train_pca, y_train)
#print("done in %0.3fs" % (time() - t0))
#print("Best estimator found by grid search:")
#print(clf.best_estimator_)
###############################################################################
# Quantitative evaluation of the model quality on the test set
#print("Predicting people's names on the test set")
#t0 = time()
y_pred = clf.predict(X_test_pca)
#print("done in %0.3fs" % (time() - t0))
#print(classification_report(y_test, y_pred, target_names=target_names))
#print(confusion_matrix(y_test, y_pred, labels=range(n_classes)))
###############################################################################
# Qualitative evaluation of the predictions using matplotlib
def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
Helper function to plot a gallery of portraits
plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
# plot the result of the prediction on a portion of the test set
def title(y_pred, y_test, target_names, i):
pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]
true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]
return 'predicted: %s\ntrue: %s' % (pred_name, true_name)
prediction_titles = [title(y_pred, y_test, target_names, i)
for i in range(y_pred.shape[0])]
plot_gallery(X_test, prediction_titles, h, w)
Explanation: Dimensionality Reduction
Data may even be embedded in a low-dimensional nonlinear manifold.
- How can we recover a low-dimensional representation?
<img src="images/swiss-roll.png">
Dimensionality Reduction
As an even more extreme example, consider a dataset consisting of the same image translated and rotated in different directions:
- Only 3 degrees of freedom for a 100x100-dimensional dataset!
<img src="images/pca_1.png" align="middle">
Principal Components Analysis
Given a set $X = {x_n}$ of observations
* in a space of dimension $D$,
* find a linear subspace of dimension $M < D$
* that captures most of its variability.
PCA can be described in two equivalent ways:
* maximizing the variance of the projection, or
* minimizing the squared approximation error.
PCA: Equivalent Descriptions
Maximize variance or minimize squared projection error:
<img src="images/pca_2.png" height = "300px" width = "300px" align="middle">
PCA: Equivalent Descriptions
With mean at the origin $ c_i^2 = a_i^2 + b_i^2 $, with constant $c_i^2$
Minimizing $b_i^2$ maximizes $a_i^2$ and vice versa
<img src="images/pca_3.png" height = "300px" width = "300px" align="middle">
PCA: First Principal Component
Given data points ${x_n}$ in $D$-dim space.
* Mean $\bar x = \frac{1}{N} \sum_{n=1}^{N} x_n $
* Data covariance ($D \times D$ matrix):
$ S = \frac{1}{N} \sum_{n=1}^{N}(x_n - \bar x)(x_n - \bar x)^T$
Let $u_1$ be the principal component we want.
* Unit length $u_1^T u_1 = 1$
* Projection of $x_n$ is $u_1^T x_n$
PCA: First Principal Component
Goal: Maximize the projection variance over directions $u_1$:
$$ \frac{1}{N} \sum_{n=1}^{N}\left(u_1^T x_n - u_1^T \bar x\right)^2 = u_1^T S u_1$$
Use a Lagrange multiplier to enforce $u_1^T u_1 = 1$
Maximize: $u_1^T S u_1 + \lambda(1-u_1^T u_1)$
Derivative is zero when $ Su_1 = \lambda u_1$
That is, $u_1^T S u_1 = \lambda $
So $u_1$ is eigenvector of $S$ with largest eigenvalue.
PCA: Maximizing Variance
The top $M$ eigenvectors of the empirical covariance matrix $S$ give the $M$ principal components of the data.
- Minimizes squared projection error
- Maximizes projection variances
Recall: These are the top $M$ left singular vectors of the data matrix $\hat X$, where $\hat X := X - \bar x \mathbb{1}_N$, i.e. we shift $X$ to ensure 0-mean rows.
Key points for computing SVD
$$\def\bX{\bar X}$$
Let $X$ be the $n \times m$ data matrix ($n$ rows, one for each example). We want to represent our data using only the top $k$ principal components.
1. Mean-center the data, so that $\bX$ is $X$ with each row subtracted by the mean row $\frac 1 n \sum_i X_{i:}$
1. Compute the SVD of $\bX$, i.e. $\bX = U \Sigma V^\top$
1. We can construct $\Sigma_k$ which drops all but the top $k$ singular values from $\Sigma$
1. We can represent $\bX$ either in terms of the principal components, $\Sigma_k V^\top$, or we can look at the data in the original representation after dropping the lower components, which is $U \Sigma_k V^\top$ (see the sketch below)
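A minimal numpy sketch of these four steps (illustrative only; not part of the lecture code):
```python
import numpy as np

def pca_top_k(X, k):
    # 1. mean-center the rows of the n x m data matrix
    Xbar = X - X.mean(axis=0)
    # 2. thin SVD: Xbar = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(Xbar, full_matrices=False)
    # 3./4. keep only the top-k singular values / directions
    scores = U[:, :k] * S[:k]        # data expressed in the top-k principal components
    approx = scores @ Vt[:k]         # rank-k approximation in the original coordinates
    return scores, approx, Vt[:k]
```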
Example: Eigenfaces
<img src="images/pca_9.png" align="middle">
Example: Face Recognition via Eigenfaces
End of explanation
eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)
Explanation: Example: Face Recognition
End of explanation
X, y = skl.datasets.make_blobs(1000, cluster_std=[1.0, 2.5, 0.5], random_state=170)
plt.scatter(X[:,0], X[:,1])
Explanation: Break time!
<img src="images/finger_cat.gif"/>
Soon to come: Latent Variable Models
Uses material from [MLAPP] §10.1-10.4, §11.1-11.2
Latent Variable Models
In general, the goal of probabilistic modeling is to
Use what we know to make inferences about what we don't know.
Graphical models provide a natural framework for this problem.
- Assume unobserved variables are correlated due to the influence of unobserved latent variables.
- Latent variables encode beliefs about the generative process.
Example to Come: Gaussian Mixture Models
This dataset is hard to explain with a single distribution.
- Underlying density is complicated overall...
- But it's clearly three Gaussians!
End of explanation |
15,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/mcg.jpg", style="width
Step1: Linear Gaussian Models - The Process
The linear gaussian model in supervised learning scheme is nothing but a linear regression where inputs are drawn from a jointly gaussian distribution.
Determining the Latent Parameters via Maximum Likelihood Estimation (MLE)
The samples drawn from the conditional linear gaussian distributions are observed as | Python Code:
# from pgmpy.factors.continuous import LinearGaussianCPD
import sys
import numpy as np
import pgmpy
sys.path.insert(0, "../pgmpy/")
from pgmpy.factors.continuous import LinearGaussianCPD
mu = np.array([7, 13])
sigma = np.array([[4, 3], [3, 6]])
cpd = LinearGaussianCPD(
"Y", evidence_mean=mu, evidence_variance=sigma, evidence=["X1", "X2"]
)
cpd.variable, cpd.evidence
#### import numpy as np
%matplotlib inline
import pandas as pd
import seaborn as sns
import numpy as np
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import multivariate_normal
from matplotlib import pyplot
# Obtain the X and Y which are jointly gaussian from the distribution
mu_x = np.array([7, 13])
sigma_x = np.array([[4, 3], [3, 6]])
# Variables
states = ["X1", "X2"]
# Generate samples from the distribution
X_Norm = multivariate_normal(mean=mu_x, cov=sigma_x)
X_samples = X_Norm.rvs(size=10000)
X_df = pd.DataFrame(X_samples, columns=states)
# Generate
X_df["P_X"] = X_df.apply(X_Norm.pdf, axis=1)
X_df.head()
g = sns.jointplot(X_df["X1"], X_df["X2"], kind="kde", height=10, space=0)
Explanation: <img src="images/mcg.jpg", style="width: 100px">
Linear Gaussian Bayesian Networks (GBNs)
Generate $x_1$ $x_2$ and $Y$ from a Multivariate Gaussian Distribution with a Mean and a Variance.
What if the inputs to the linear regression were correlated? This often happens in linear dynamical systems. Linear Gaussian Models are useful for modeling probabilistic PCA, factor analysis and linear dynamical systems. Linear Dynamical Systems have variety of uses such as tracking of moving objects. This is an area where Signal Processing methods have a high overlap with Machine Learning methods. When the problem is treated as a state-space problem with added stochasticity, then the future samples depend on the past. The latent parameters, $\beta_i$ where $i \in [1,...,k]$ provide a linear combination of the univariate gaussian distributions as shown in the figure.
<img src="images/gbn.png", style="width: 400px">
The observed variable, $y_{jx}$ can be described as a sample that is drawn from the conditional distribution:
$$\mathcal{N}(y_{jx} | \sum_{i=1}^k \beta_i^T x_i + \beta_0; \sigma^2)$$
The latent parameters $\beta_is$ and $\sigma^2$ need to be determined.
End of explanation
beta_vec = np.array([0.7, 0.3])
beta_0 = 2
sigma_c = 4
def genYX(x):
x = [x["X1"], x["X2"]]
var_mean = np.dot(beta_vec.transpose(), x) + beta_0
Yx_sample = np.random.normal(var_mean, sigma_c, 1)
return Yx_sample[0]
X_df["(Y|X)"] = X_df.apply(genYX, axis=1)
X_df.head()
sns.distplot(X_df["(Y|X)"])
# X_df.to_csv('gbn_values.csv', index=False)
cpd.fit(X_df, states=["(Y|X)", "X1", "X2"], estimator="MLE")
Explanation: Linear Gaussian Models - The Process
The linear gaussian model in supervised learning scheme is nothing but a linear regression where inputs are drawn from a jointly gaussian distribution.
Determining the Latent Parameters via Maximum Likelihood Estimation (MLE)
The samples drawn from the conditional linear gaussian distributions are observed as:
$$ p(y[m] \mid x[m]) = \cfrac{1}{\sqrt{2\pi\sigma_c^2}} \times \exp\left(-\cfrac{\left(\sum_{i=1}^k \beta_i x_i[m] + \beta_0 - y[m]\right)^2}{2\sigma_c^2}\right)$$
Taking log,
$$ \log p(Y \mid X) = \sum_{m=1}^{M}\left[-\cfrac{1}{2}\log(2\pi\sigma_c^2) - \cfrac{1}{2\sigma_c^2}\left(\sum_{i=1}^k \beta_i x_i[m] + \beta_0 - y[m]\right)^2\right]$$
Differentiating w.r.t $\beta_i$, we can get k+1 linear equations as shown below:
The Conditional Distribution p(Y|X)
<img src="images/lgm.png", style="width: 700px">
The betas can easily be estimated by inverting the coefficient matrix and multiplying it by the right-hand side, as sketched below.
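A minimal numpy sketch of that estimate (illustrative, not the pgmpy implementation), using the X_df columns generated above:
```python
import numpy as np

# Design matrix with a column of ones for the intercept beta_0
A = np.column_stack([np.ones(len(X_df)), X_df["X1"], X_df["X2"]])
y = X_df["(Y|X)"].values

# Solve the k+1 linear (normal) equations in a least-squares sense
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma2_hat = ((y - A @ beta_hat) ** 2).mean()   # MLE of the conditional variance
```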
End of explanation |
15,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modelado de un sistema con ipython
Para el correcto funcionamiento del extrusor de filamento, es necesario regular correctamente la temperatura a la que está el cañon. Por ello se usará un sistema consistente en una resitencia que disipe calor, y un sensor de temperatura PT100 para poder cerrar el lazo y controlar el sistema. A continuación, desarrollaremos el proceso utilizado.
Step1: Respuesta del sistema
El primer paso será someter al sistema a un escalon en lazo abierto para ver la respuesta temporal del mismo. A medida que va calentando, registraremos los datos para posteriormente representarlos.
Step2: Cálculo del polinomio
Hacemos una regresión con un polinomio de orden 2 para calcular cual es la mejor ecuación que se ajusta a la tendencia de nuestros datos.
Step3: El polinomio caracteristico de nuestro sistema es
Step4: En este caso hemos establecido un setpoint de 80ºC Como vemos, una vez introducido el controlador, la temperatura tiende a estabilizarse, sin embargo tiene mucha sobreoscilación. Por ello aumentaremos los valores de $K_i$ y $K_d$, siendo los valores de esta segunda iteracción los siguientes
Step5: En esta segunda iteracción hemos logrado bajar la sobreoscilación inicial, pero tenemos mayor error en regimen permanente. Por ello volvemos a aumentar los valores de $K_i$ y $K_d$ siendo los valores de esta tercera iteracción los siguientes
Step6: En este caso, se puso un setpoint de 160ºC. Como vemos, la sobreoscilación inicial ha disminuido en comparación con la anterior iteracción y el error en regimen permanente es menor. Para intentar minimar el error, aumentaremos únicamente el valor de $K_i$. Siendo los valores de esta cuarta iteracción del regulador los siguientes | Python Code:
#Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pylab as plt
#Mostramos las versiones usadas de cada librerías
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Mostramos todos los gráficos en el notebook
%pylab inline
#Abrimos el fichero csv con los datos de la muestra
datos = pd.read_csv('datos.csv')
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
#columns = ['temperatura', 'entrada']
columns = ['temperatura', 'entrada']
Explanation: Modeling a system with IPython
For the filament extruder to work properly, the temperature of the barrel must be regulated correctly. To that end we will use a system consisting of a resistor that dissipates heat and a PT100 temperature sensor, so that the loop can be closed and the system controlled. Below we describe the process that was followed.
End of explanation
#Mostramos en varias gráficas la información obtenida tras el ensay
#ax = datos['temperatura'].plot(figsize=(10,5), ylim=(0,60),label="Temperatura")
#ax.set_xlabel('Tiempo')
#ax.set_ylabel('Temperatura [ºC]')
#ax.set_ylim(20,60)
#ax = datos['entrada'].plot(secondary_y=True, label="Entrada")#.set_ylim(-1,55)
fig, ax1 = plt.subplots()
t = np.arange(0.01, 10.0, 0.01)
s1 = np.exp(t)
ax1.plot(datos['time'], datos['temperatura'], 'b-')
ax1.set_xlabel('Tiempo (s)')
# Make the y-axis label and tick labels match the line color.
ax1.set_ylabel('Temperatura', color='b')
#for tl in ax1.get_yticklabels():
# tl.set_color('b')
ax2 = ax1.twinx()
s2 = np.sin(2*np.pi*t)
ax2.plot(datos['time'], datos['entrada'], 'r-')
ax2.set_ylabel('Escalón', color='r')
ax2.set_ylim(-1,55)
#for tl in ax2.get_yticklabels():
# tl.set_color('r')
plt.figure(figsize=(10,5))
plt.show()
Explanation: System response
The first step is to apply an open-loop step input to the system to observe its time response. As it heats up, we record the data in order to plot it afterwards.
End of explanation
# Buscamos el polinomio de orden 4 que determina la distribución de los datos
reg = np.polyfit(datos['time'],datos['temperatura'],2)
# Calculamos los valores de y con la regresión
ry = np.polyval(reg,datos['time'])
print (reg)
plt.plot(datos['time'],datos['temperatura'],'b^', label=('Datos experimentales'))
plt.plot(datos['time'],ry,'ro', label=('regresión polinómica'))
plt.legend(loc=0)
plt.grid(True)
plt.xlabel('Tiempo')
plt.ylabel('Temperatura [ºC]')
Explanation: Computing the polynomial
We fit a second-order polynomial regression to find the equation that best matches the trend of our data.
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it1 = pd.read_csv('Regulador1.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax = datos_it1[columns].plot(figsize=(10,5), ylim=(20,100),title='Modelo matemático del sistema con regulador',)
ax.set_xlabel('Tiempo')
ax.set_ylabel('Temperatura [ºC]')
ax.hlines([80],0,3500,colors='r')
#Calculamos MP
Tmax = datos_it1.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
Sp=80.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it1.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: The characteristic polynomial of our system is:
$$P_x= 25.9459 -1.5733·10^{-4}·X - 8.18174·10^{-9}·X^2$$
Laplace transform
If we compute the Laplace transform of the system, we obtain the following result:
$$G_s = \frac{25.95·S^2 - 0.00015733·S + 1.63635·10^{-8}}{S^3}$$
Computing the PID with OCTAVE
Applying the Ziegler-Nichols tuning method, we compute a PID able to regulate the system correctly. This method quickly gives orientative values of $K_p$, $K_i$ and $K_d$ from which the controller can then be adjusted. It consists of computing three characteristic parameters, from which the regulator is built:
$$G_s=K_p\left(1+\frac{1}{T_i·S}+T_d·S\right)=K_p+\frac{K_i}{S}+K_d·S$$
In this first iteration, the values obtained are the following:
$K_p = 6082.6$, $K_i=93.868$, $K_d=38.9262$
With these, our regulator has the following characteristic equation:
$$G_s = \frac{38.9262·S^2 + 6082.6·S + 93.868}{S}$$
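As a purely illustrative sketch (this assumes the python-control package, which is not used in the original analysis), the plant and regulator transfer functions above could be assembled and simulated in Python as follows:
```python
import control  # assumption: the python-control package is installed

G = control.tf([25.95, -0.00015733, 1.63635e-8], [1, 0, 0, 0])  # plant model G(s) from above
C = control.tf([38.9262, 6082.6, 93.868], [1, 0])               # PID regulator C(s) from above
cl = control.feedback(C * G, 1)                                 # closed loop with unity feedback
t, y = control.step_response(cl)                                # closed-loop step response
```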
Iteration 1 of the regulator
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it2 = pd.read_csv('Regulador2.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax2 = datos_it2[columns].plot(figsize=(10,5), ylim=(20,100),title='Modelo matemático del sistema con regulador',)
ax2.set_xlabel('Tiempo')
ax2.set_ylabel('Temperatura [ºC]')
ax2.hlines([80],0,3500,colors='r')
#Calculamos MP
Tmax = datos_it2.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
Sp=80.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it2.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: In this case we set a setpoint of 80 ºC. As we can see, once the controller is introduced the temperature tends to stabilize, but there is a lot of overshoot. We will therefore increase the values of $K_i$ and $K_d$; the values for this second iteration are the following:
$K_p = 6082.6$, $K_i=103.25$, $K_d=51.425$
Iteration 2 of the regulator
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it3 = pd.read_csv('Regulador3.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax3 = datos_it3[columns].plot(figsize=(10,5), ylim=(20,180),title='Modelo matemático del sistema con regulador',)
ax3.set_xlabel('Tiempo')
ax3.set_ylabel('Temperatura [ºC]')
ax3.hlines([160],0,6000,colors='r')
#Calculamos MP
Tmax = datos_it3.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
Sp=160.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it3.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: In this second iteration we managed to reduce the initial overshoot, but the steady-state error is larger. We therefore increase the values of $K_i$ and $K_d$ again; the values for this third iteration are the following:
$K_p = 6082.6$, $K_i=121.64$, $K_d=60$
Iteration 3 of the regulator
End of explanation
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
datos_it4 = pd.read_csv('Regulador4.csv')
columns = ['temperatura']
#Mostramos en varias gráficas la información obtenida tras el ensayo
ax4 = datos_it4[columns].plot(figsize=(10,5), ylim=(20,180))
ax4.set_xlabel('Tiempo')
ax4.set_ylabel('Temperatura [ºC]')
ax4.hlines([160],0,7000,colors='r')
#Calculamos MP
Tmax = datos_it4.describe().loc['max','temperatura'] #Valor de la Temperatura maxima en el ensayo
print (" {:.2f}".format(Tmax))
Sp=160.0 #Valor del setpoint
Mp= ((Tmax-Sp)/(Sp))*100
print("El valor de sobreoscilación es de: {:.2f}%".format(Mp))
#Calculamos el Error en régimen permanente
Errp = datos_it4.describe().loc['75%','temperatura'] #Valor de la temperatura en régimen permanente
Eregimen = abs(Sp-Errp)
print("El valor del error en régimen permanente es de: {:.2f}".format(Eregimen))
Explanation: In this case a setpoint of 160 ºC was used. As we can see, the initial overshoot has decreased compared with the previous iteration and the steady-state error is smaller. To try to minimize the error, we will increase only the value of $K_i$; the values for this fourth iteration of the regulator are the following:
$K_p = 6082.6$, $K_i=121.64$, $K_d=150$
Iteration 4
End of explanation |
15,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2 Distributions
Histograms
The most common representation of a distribution is a histogram. which is a graph that shows the frequency of each value
Step1: NSFG variables
Step2: Histogram of pregnancy length in weeks
Step3: Histogram of pregnancy lengths
Step4: Summarizing distributions
Some of the characteristics we might want to report are
Step5: Make a histogram of totincr the total income for the respondent's family.
Step6: Make a histogram of age_r, the respondent's age at the time of interview.
Step7: Use totincr to select the respondents with the highest income. Compute the distribution of parity for just the high income respondents.
Step8: Compare the mean parity for high income respondents and others.
Step9: Exercise 4
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. Compute Cohen’s d to quantify the difference between the groups. How does it compare to the difference in pregnancy length? | Python Code:
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
import pandas as pd
import thinkstats2
import thinkplot
hist = thinkstats2.Hist([1, 2, 2, 3, 5])
hist
hist.Freq(2) # hist[2]
hist.Values()
thinkplot.Hist(hist)
thinkplot.Show(xlabel='value', ylabel='frequency')
Explanation: Chapter 2 Distributions
Histograms
The most common representation of a distribution is a histogram, which is a graph that shows the frequency of each value
End of explanation
import nsfg
Explanation: NSFG variables
End of explanation
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Show(xlabel='pounds', ylabel='frequency')
Explanation: Histogram of birth weight in pounds
End of explanation
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
firsts.prglngth.plot(kind='hist', width=2)
others.prglngth.plot(kind='hist', width=2)
Explanation: Histogram of pregnancy lengths
End of explanation
import thinkstats2
resp = thinkstats2.ReadStataDct('2002FemResp.dct').ReadFixedWidth('2002FemResp.dat.gz', compression='gzip')
Explanation: Summarizing distributions
Some of the characteristics we might want to report are:
- central tendency
- modes
- spread
- tails
- outliers
mean
$$\overline{x}= \frac{1}{n}\sum_{i}x_i$$
Variance
$$S^{2} = \frac{1}{n}\sum_i(x_{i}-\overline{x})^{2}$$
$x_{i}-\overline{x}$ is called the “deviation from the mean”
$S = \sqrt{S^2}$ is the standard deviation.
Pandas data structures provides methods to compute mean, variance and standard deviation:
```python
mean = live.prglngth.mean()
var = live.prglngth.var() # variance
std = live.prglngth.std() # standard deviation
```
Effect size
An effect size is a quantitative measure of the strength of an event.
One obvious choice is the difference in the means.
Another way to convey the size of the effect is to compare the difference between groups to the variability within groups.
Cohen's d
$$d = \frac{\overline{x_1} -\overline{x_2}}{s}$$
s is the “pooled standard deviation”
$$s=\sqrt{\frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1 +n_2 -2}}$$
$n_i$ is the sample size of $x_i$, $S_i$ is the variance.
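A small sketch of this formula (illustrative; note that the CohenEffectSize function defined later in this notebook uses a slightly different pooled-variance convention):
```python
import numpy as np

def cohen_d(group1, group2):
    # pooled standard deviation with the (n1 + n2 - 2) denominator shown above
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) + (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)
```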
Reporting results
Who
A scientist might be interested in any (real) effect, no matter how small.
A doctor might only care about effects that are clinically significant.
How
Goals
Exercise2
End of explanation
resp.totincr.plot.hist(bins=range(17))
Explanation: Make a histogram of totincr the total income for the respondent's family.
End of explanation
resp.ager.plot.hist(bins=range(15,46))
Explanation: Make a histogram of age_r, the respondent's age at the time of interview.
End of explanation
rich = resp[resp.totincr == resp.totincr.max() ]
rich.parity.plot.hist(bins=range(10))
Explanation: Use totincr to select the respondents with the highest income. Compute the distribution of parity for just the high income respondents.
End of explanation
rich = resp[resp.totincr == resp.totincr.max() ]
notrich = resp[resp.totincr < resp.totincr.max()]
rich.parity.mean(), notrich.parity.mean()
Explanation: Compare the mean parity for high income respondents and others.
End of explanation
preg = nsfg.ReadFemPreg()
first = preg[preg.birthord ==1 ]
others = preg[preg.birthord >1 ]
first.totalwgt_lb.mean(), others.totalwgt_lb.mean()
def CohenEffectSize(group1, group2):
mean_diff = group1.mean() - group2.mean()
n1= len(group1)
n2 = len(group2)
pooled_var = (n1*group1.var() + n2* group2.var())/(n1+n2)
d = mean_diff / np.math.sqrt(pooled_var)
return d
CohenEffectSize(first.totalwgt_lb, others.totalwgt_lb)
Explanation: Exercise 4
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. Compute Cohen’s d to quantify the difference between the groups. How does it compare to the difference in pregnancy length?
End of explanation |
15,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic hedge
It's somewhat hard to find data about historical prices of options, so I will proceed differently as promissed. I will work on the following problem
Step1: For the interest rate to be used, observing the charts of LIBOR rates, I will pick a linear model, starting from 2.5% at Jan-1-2005, going to 6% at Dec-31-2007 and decaying to 0.5% until Dec-31-2007 and staying there.
Step2: That's all. Now I will measure volatility on the last running two weeks and decide if I will hedge that risk. Else return to the initial position. Options are bought or sold at the Black-Scholes price using the current rate and the running volatility. This mean that the first month I will keep the original portfolio, until I have enough (?) info about the market.
I will calculate the returns beforehand, so that I do not need to deal with pandas Series. It must be pointed that on prices there are a split on the stocks at April 2 2014, so I need to repair this split. Also, I will define the formulas for the Delta (of the BSM model), the option pricing, the annual volatility.
On this simulation I will use only put options bought at the suggested price by the BSM model, as I assume that high volatility can lead to losses and want to hedge that high volatility times. I will not worry if volatility is low and expect that the stock make gains.
Step3: I will define two functions, one to open a hedge position, selling if necessary some stock in order to affor the position. The other function will close the position, buying all possible stock with the earnings, if any. A transaction cost is included. It must account of both opening and closing the position (so, if your brokers charges you \$50, the commission is \$100)
Step4: Now I will proceed to explain how this trading algorithm will work | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv("../data/GOOG.csv").ix[:,["Date", "Open"]]
data.sort_values(by="Date", inplace=True)
data.reset_index(inplace=True)
Explanation: Dynamic hedge
It's somewhat hard to find data about historical prices of options, so I will proceed differently as promissed. I will work on the following problem:
I have a portfolio of 1000 GOOG stocks on Jannuary 1 2005. I want to hedge the portfolio whenever the running volatility is bigger than 40%, otherwise I will let it growth.
Not having historical prices on options I will use the theoretical Black-Scholes-Merton (BSM) price for options. This is problematic in some ways. First, the volatility I'm using at some time may not be the implied volatility used at that time for the real options sold and this means that I'm ignoring volatility smiles. Then there is the fact I need the free-risk interest rates and I don't have them, so I will build a model for it just to account for the changes after the financial crisis of the 2008.
End of explanation
rate = np.zeros_like(data.Date)
n = 0
m = 0
for d in data.Date:
if d < "2007-12-31":
rate[m] = 2.5 + n*3.5/734.0
if d == "2007-12-31":
rate[m] = 6.0
n = 0
if "2008-12-31" > d > "2007-12-31":
rate[m] = 6.0 - 5.5*n/256
if d >= "2008-12-31":
rate[m] = 0.5
m +=1
n +=1
rate = rate/100
Explanation: For the interest rate to be used, observing the charts of LIBOR rates, I will pick a linear model, starting from 2.5% at Jan-1-2005, going to 6% at Dec-31-2007 and decaying to 0.5% until Dec-31-2007 and staying there.
End of explanation
from scipy.stats import norm
def volatility(v):
return np.sqrt(260)*np.std(v)
def eu_put_option(S, K, r, s, t):
d1 = (np.log(S/K) + t*(r + s*s/2))/np.sqrt(s*s*t)
d2 = (np.log(S/K) + t*(r - s*s/2))/np.sqrt(s*s*t)
return K*np.exp(-r*t)*norm.cdf(-d2) - S*norm.cdf(-d1)
def eu_put_option_delta(S, K, r, s, t):
return norm.cdf((np.log(S/K) + t*(r + s*s/2))/np.sqrt(s*s*t)) - 1
def repair_split(df, info): # info is a list of tuples [(date, split-ratio)]
temp = df.Open.values.copy()
for i in info:
date, ratio = i
mask = np.array(df.Date >= date)
temp[mask] = temp[mask]*ratio
return temp
stock_price = repair_split(data, [("2014-03-27", 2)])
rets = np.diff(np.log(stock_price))
Explanation: That's all. Now I will measure volatility on the last running two weeks and decide if I will hedge that risk. Else return to the initial position. Options are bought or sold at the Black-Scholes price using the current rate and the running volatility. This mean that the first month I will keep the original portfolio, until I have enough (?) info about the market.
I will calculate the returns beforehand, so that I do not need to deal with pandas Series. It must be pointed out that the prices include a stock split on April 2, 2014, so I need to repair this split. Also, I will define the formulas for the Delta (of the BSM model), the option pricing, and the annual volatility.
In this simulation I will use only put options bought at the price suggested by the BSM model, as I assume that high volatility can lead to losses and I want to hedge those high-volatility periods. I will not worry if volatility is low, and will expect the stock to make gains.
End of explanation
commission = 0
def rebalance(S, K, r, s, t, cap):
option_price = eu_put_option(S, K, r, s, t)
delta = eu_put_option_delta(S, K, r, s, t)
# rebalance the portfolio, if not enough money, then sell stock to buy put options
options = np.floor(cap/(option_price - delta*S))
stock = np.floor(-delta*cap/(option_price - delta*S))
money = cap - (options*option_price + stock*S)
return (stock, options, money, option_price)
def close_position(S, K, nstock, nopt, mon):
profit = nopt*max(0, K - S)
money = mon + profit
stock = nstock + np.floor(money/S)
money -= (stock - nstock)*S
money -= commission
return (profit, stock, 0, money)
Explanation: I will define two functions, one to open a hedge position, selling if necessary some stock in order to affor the position. The other function will close the position, buying all possible stock with the earnings, if any. A transaction cost is included. It must account of both opening and closing the position (so, if your brokers charges you \$50, the commission is \$100)
End of explanation
capital = 200000 # just enough to buy the stocks
stock = 1000
money = capital - 1000*stock_price[0] # this money will not take any interest
options = 0
strike = 0
option_price = 0
profit = 0
net_worth = []
vola = []
n_options = []
n_stock = []
n = 0
sell_options = 0
print("Not invested money: {0}".format(money))
for d in data.Date:
capital = money + stock*stock_price[n]
net_worth.append(capital)
if n<60:
n += 1
continue
# here begins the simulation
vol = volatility(rets[n-15:n])
vola.append(vol)
if sell_options == 0 and options > 0:
(profit, stock, options, money) = close_position(stock_price[n], strike, stock, options, money)
print("\nSell options: {0}".format(data.Date[n]))
print(" Profit: {0}".format(profit))
print(" Stock price at {0}, strike at {1}".format(stock_price[n], strike))
print(" Current balance: {0}".format(money + stock*stock_price[n]))
if vol > 0.5 and options == 0:
strike = stock_price[n] + 20
(stock, options, money, option_price) = rebalance(stock_price[n], strike, rate[n], vol, 30/260.0, capital);
print("\nBuy options: {0}".format(data.Date[n]))
print(" Put option price (stock price at {0}, strike at {1}): {2}".format(stock_price[n], strike, option_price))
print(" Position: {0} stock, {1} options, money: {2}".format(stock, options, money))
print(" Current balance: {0}".format(money + stock*stock_price[n]))
sell_options = 90
if sell_options > 0:
sell_options -= 1
n_options.append(options)
n_stock.append(stock)
n += 1
plt.figure(figsize=(9,9))
plt.subplot(311)
plt.plot(1000*stock_price, label="GOOG")
plt.plot(net_worth, label="Portfolio")
plt.legend(loc=0)
plt.xlim(0,len(net_worth))
plt.subplot(312)
plt.plot(n_stock, label="Stock")
plt.legend(loc=0)
plt.subplot(313)
plt.plot(n_options, label="Put options")
plt.legend(loc=0)
plt.tight_layout()
Explanation: Now I will proceed to explain how this trading algorithm will work:
Buy all possible stock if the running volatility is low (last 30 working days)
If volatility is bigger than 50%, buy put european options to hedge possible losses
Options are bought at BSM price with the current volatility and risk-free rate
If not enough money to afford the position, rebalance by selling some stock
Execution time will be 90 "working days" to simplify the code
On execution, if options are on the money, make some action on the same step:
Sell stock at strike price
get money
buy again stock at market price
this way I get only the difference as money without touching my stocks
If volatility is still high don't rebalance options until they expire
If volatility is still high on expiration, buy options and rebalance again
The algorithm will begin with $200000, that are enough to buy 1000 (actualy a little more) stocks of GOOG (at that time).
I will count the balance only as the current money in hand and the price of stocks. The price of options don't count here as part of the portfolio price. This is because the money spent on buying options is lost, and will not return as they arent American and I don't intend to transfer them.
The simulation will begin with 60 days of heating to get enough data to calculate 15-days volatility.
End of explanation |
15,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project
Step1: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation
Step3: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset
Step4: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable
Step5: Answer
Step6: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint
Step7: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint
Step9: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint
Step10: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
Step11: Answer
Step12: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.
- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.
- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.
- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# TODO: Minimum price of the data
minimum_price = None
# TODO: Maximum price of the data
maximum_price = None
# TODO: Mean price of the data
mean_price = None
# TODO: Median price of the data
median_price = None
# TODO: Standard deviation of prices of the data
std_price = None
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
- Store each calculation in their respective variable.
End of explanation
# TODO: Import 'r2_score'
def performance_metric(y_true, y_predict):
Calculates and returns the performance score between
true and predicted values based on the metric chosen.
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = None
# Return the score
return score
Explanation: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?
Answer:
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
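A minimal sketch of such a function is shown below, assuming r2_score is imported from sklearn.metrics as suggested above:

from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """Return the R^2 score between true and predicted target values."""
    # r2_score compares the true target values against the model's predictions
    score = r2_score(y_true, y_predict)
    return score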
End of explanation
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
# TODO: Import 'train_test_split'
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = (None, None, None, None)
# Success
print "Training and testing split was successful."
Explanation: Answer:
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
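A sketch of this split is shown below; the import path follows this notebook (sklearn.cross_validation moved to sklearn.model_selection in scikit-learn 0.18), and the random_state value is arbitrary:

from sklearn.cross_validation import train_test_split

# 80% of the data for training, 20% held out for testing
X_train, X_test, y_train, y_test = train_test_split(features, prices,
                                                    test_size=0.20,
                                                    random_state=42)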
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer:
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
End of explanation
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer:
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
def fit_model(X, y):
    """Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]."""
# Create cross-validation sets from the training data
    cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = None
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = None
# TODO: Create the grid search object
grid = None
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer:
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer?
Answer:
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique, and how can it be applied to optimize a learning algorithm?
Answer:
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer:
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
In addition, you will find that your implementation uses ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful. The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.
Please note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
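One way these TODOs could be filled in is sketched below. These lines would sit inside fit_model after cv_sets has been created; the import paths follow this notebook (sklearn.grid_search moved to sklearn.model_selection in 0.18), and random_state=0 for the regressor is optional:

from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV

regressor = DecisionTreeRegressor(random_state=0)   # the decision tree to be tuned
params = {'max_depth': list(range(1, 11))}          # try depths 1 through 10
scoring_fnc = make_scorer(performance_metric)       # wrap the R^2 metric as a scorer
grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cv_sets)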
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
Explanation: Answer:
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Answer:
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation |
15,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #02
Convolutional Neural Network
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.
In this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.
Convolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
Step1: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.
The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.
These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.
Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.
Convolutional Layer
The following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.
The red filter-weights mean that the filter has a positive reaction to black pixels in the input image, while the blue filter-weights mean that the filter has a negative reaction to black pixels.
In this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.
Step2: The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
In the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.
When the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.
Furthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.
Note that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. It can be a bit challenging to understand how this works.
Imports
Step3: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step4: Configuration of Neural Network
The configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.
Step5: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step6: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step7: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step8: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
Step9: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step10: Plot a few images to see if data is correct
Step11: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step12: Helper-function for creating a new Convolutional Layer
This function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 4-dim tensor with the following dimensions
Step13: Helper-function for flattening a layer
A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.
Step14: Helper-function for creating a new Fully-Connected Layer
This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tensor of shape [num_images, num_outputs].
Step15: Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
Step16: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step17: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step18: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step19: Create placeholder var for dropout
Step20: Convolutional Layer 1
Create the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.
Step21: Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
Step22: Convolutional Layer 2
Create the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.
Step23: Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
Step24: Flatten Layer
The convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.
Step25: Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
Step26: Fully-Connected Layer 1
Add a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.
Step27: Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
Step28: Fully-Connected Layer 2
Add another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.
Step29: Predicted Class
The second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.
Step30: The class-number is the index of the largest element.
Step31: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.
TensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally, so we must use the output of the last fully-connected layer (layer_fc3 in this notebook) directly rather than y_pred, which has already had the softmax applied.
Step32: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step33: Optimization Method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the AdamOptimizer which is an advanced form of Gradient Descent.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step34: Performance Measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
Step35: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step36: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step37: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
Step38: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
Step39: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
Step40: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
Step41: Helper-function to plot confusion matrix
Step42: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
Step43: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
Step44: Performance after 1 optimization iteration
The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
Step45: Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
Step46: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
Step47: Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
Step48: Visualization of Weights and Layers
In trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and the resulting output images.
Helper-function for plotting convolutional weights
Step49: Helper-function for plotting the output of a convolutional layer
Step50: Input Images
Helper-function for plotting an image.
Step51: Plot an image from the test-set which will be used as an example below.
Step52: Plot another example image from the test-set.
Step53: Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Note that positive weights are red and negative weights are blue.
Step54: Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
Step55: The following images are the results of applying the convolutional filters to the second image.
Step56: It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.
Note again that positive weights are red and negative weights are blue.
Step57: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
Step58: It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Applying these convolutional filters to the images that were output from the first conv-layer gives the following images.
Note that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.
Step59: And these are the results of applying the filter-weights to the second image.
Step60: From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
These images are then flattened and input to the fully-connected layer, but that is not shown here.
Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
Explanation: TensorFlow Tutorial #02
Convolutional Neural Network
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.
In this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.
Convolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
End of explanation
Image('images/02_convolution.png')
Explanation: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.
The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.
These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.
Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.
Convolutional Layer
The following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.
The red filter-weights mean that the filter has a positive reaction to black pixels in the input image, while the blue filter-weights mean that the filter has a negative reaction to black pixels.
In this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.
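The dot-product for a single output pixel can be illustrated with plain numpy; the numbers below are made up purely for illustration and are not taken from the trained network:

import numpy as np

patch = np.array([[0.0, 0.2, 0.9],       # 3x3 image region currently under the filter
                  [0.0, 0.3, 0.8],
                  [0.0, 0.1, 0.7]])
filter_ = np.array([[-1.0, 0.0, 1.0],    # a filter that reacts to vertical edges
                    [-1.0, 0.0, 1.0],
                    [-1.0, 0.0, 1.0]])
output_pixel = np.sum(patch * filter_)   # element-wise product, then sum -> 2.4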
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
Explanation: The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
In the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.
When the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.
Furthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.
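As a small illustration of what 2x2 max-pooling does, here is a plain numpy version on a made-up 4x4 image (not part of the network built below):

import numpy as np

image = np.array([[1, 3, 2, 0],
                  [4, 2, 1, 1],
                  [0, 1, 5, 2],
                  [2, 0, 3, 4]])
# take the maximum of every non-overlapping 2x2 window -> the resolution is halved
pooled = image.reshape(2, 2, 2, 2).max(axis=(1, 3))
# pooled == [[4, 2],
#            [2, 5]]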
Note that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. It can be a bit challenging to understand how this works.
Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 9 # There are 9 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 9 # There are 9 of these filters.
# Convolutional Layer 3 (currently not used).
#filter_size3 = 5 # Convolution filters are 5 x 5 pixels.
#num_filters3 = 25 # There are 25 of these filters.
# Fully-connected layer.
fc_size = 32 # Number of neurons in fully-connected layer.
Explanation: Configuration of Neural Network
The configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.cls = np.argmax(data.test.labels, axis=1)
Explanation: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
End of explanation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used for inputting data to the graph.
Variables that are going to be optimized so as to make the convolutional network perform better.
The mathematical formulas for the convolutional network.
A cost measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
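As a tiny illustration of this 'define now, run later' idea, consider the following lines; the names a, b and c are not part of the network built below:

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b   # only adds a multiplication node to the graph - nothing is computed yet
# the multiplication is only executed when the graph is run in a session,
# e.g. tf.Session().run(c) returns 6.0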
Helper-functions for creating new variables
Functions for creating new TensorFlow variables in the given shape and initializing them with random values. Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.
End of explanation
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
Explanation: Helper-function for creating a new Convolutional Layer
This function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 4-dim tensor with the following dimensions:
Image number.
Y-axis of each image.
X-axis of each image.
Channels of each image.
Note that the input channels may either be colour-channels, or it may be filter-channels if the input is produced from a previous convolutional layer.
The output is another 4-dim tensor with the following dimensions:
Image number, same as input.
Y-axis of each image. If 2x2 pooling is used, then the height and width of the input images is divided by 2.
X-axis of each image. Ditto.
Channels produced by the convolutional filters.
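As a concrete illustration of these shapes, using the tutorial's original configuration of 16 filters of size 5x5 (the exact channel count depends on the settings chosen above):

# illustrative shapes only, for a batch of 64 MNIST images:
#   input:   [64, 28, 28, 1]   (28x28 gray-scale images)
#   weights: [5, 5, 1, 16]     (created inside new_conv_layer)
#   output:  [64, 14, 14, 16]  (2x2 max-pooling halves 28x28 to 14x14)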
End of explanation
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
Explanation: Helper-function for flattening a layer
A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.
End of explanation
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
Explanation: Helper-function for creating a new Fully-Connected Layer
This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tensor of shape [num_images, num_outputs].
End of explanation
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
Explanation: Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.argmax(y_true, dimension=1)
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
keep_prob = tf.placeholder(tf.float32)
Explanation: Create placeholder var for dropout
End of explanation
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
Explanation: Convolutional Layer 1
Create the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.
End of explanation
layer_conv1
Explanation: Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
End of explanation
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
Explanation: Convolutional Layer 2
Create the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.
End of explanation
layer_conv2
#layer_conv3, weights_conv3 = \
# new_conv_layer(input=layer_conv2,
# num_input_channels=num_filters2,
# filter_size=filter_size3,
# num_filters=num_filters3,
# use_pooling=True)
#layer_conv3
Explanation: Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
End of explanation
layer_flat, num_features = flatten_layer(layer_conv2)
Explanation: Flatten Layer
The convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.
End of explanation
layer_flat
num_features
Explanation: Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
End of explanation
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
Explanation: Fully-Connected Layer 1
Add a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.
End of explanation
layer_fc1
layer_dropout1 = tf.nn.dropout(layer_fc1, keep_prob)
#layer_fc2 = new_fc_layer(input=layer_dropout1,
# num_inputs=fc_size,
# num_outputs=fc_size,
# use_relu=True)
#layer_dropout2 = tf.nn.dropout(layer_fc2, keep_prob)
Explanation: Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
End of explanation
layer_fc3 = new_fc_layer(input=layer_dropout1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc3
Explanation: Fully-Connected Layer 2
Add another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.
End of explanation
y_pred = tf.nn.softmax(layer_fc3)
Explanation: Predicted Class
The second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.
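The normalization itself is easy to illustrate with numpy; the numbers below are arbitrary and not taken from the model:

import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # un-normalized outputs of the last layer
softmax = np.exp(logits) / np.sum(np.exp(logits))
# softmax is approximately [0.66, 0.24, 0.10]:
# every element lies between zero and one and the elements sum to one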
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: The class-number is the index of the largest element.
End of explanation
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc3,
labels=y_true)
Explanation: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.
TensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally, so we must use the output of the last fully-connected layer (layer_fc3 in this notebook) directly rather than y_pred, which has already had the softmax applied.
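As a small numerical illustration of the cross-entropy itself (again with made-up numbers, not taken from the model above):

import numpy as np

y_true_example = np.array([1.0, 0.0, 0.0])     # one-hot encoded true class
y_pred_example = np.array([0.66, 0.24, 0.10])  # softmax output of some model
cross_entropy_example = -np.sum(y_true_example * np.log(y_pred_example))
# roughly 0.42; it approaches zero as the predicted probability
# of the true class approaches one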
End of explanation
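For intuition only, here is a hand-rolled sketch of the cross-entropy for a single one-hot label with invented numbers; it is not the TensorFlow implementation, which also applies the softmax internally:
import numpy as np
y_true_example = np.array([0., 0., 1.])       # one-hot label for class 2
y_pred_example = np.array([0.1, 0.2, 0.7])    # hypothetical softmax output
print(-np.sum(y_true_example * np.log(y_pred_example)))   # -log(0.7) ~ 0.36; approaches 0 as the prediction improves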
cost = tf.reduce_mean(cross_entropy)
Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
Explanation: Optimization Method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the AdamOptimizer which is an advanced form of Gradient Descent.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Performance Measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
End of explanation
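A tiny NumPy example of the cast-then-average trick (added here only for illustration):
import numpy as np
correct_example = np.array([True, False, True, True])
print(correct_example.astype(np.float32).mean())   # 0.75, i.e. 3 out of 4 predictions correct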
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
session.run(tf.initialize_all_variables())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
train_batch_size = 64
Explanation: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
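As a rough back-of-the-envelope sketch (assuming the 55,000-image training set mentioned above), one pass over the training data corresponds to roughly this many iterations:
num_train_images = 55000                # assumed size of the MNIST training split
iterations_per_epoch = num_train_images // train_batch_size
print(iterations_per_epoch)             # ~859 iterations per epoch with a batch size of 64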
print(data.train)
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch,
keep_prob: 0.5}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Explanation: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
End of explanation
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function to plot confusion matrix
End of explanation
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels,
keep_prob:1}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
End of explanation
print_test_accuracy()
Explanation: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
End of explanation
optimize(num_iterations=1)
print_test_accuracy()
Explanation: Performance after 1 optimization iteration
The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
End of explanation
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
Explanation: Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
End of explanation
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
Explanation: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
End of explanation
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
End of explanation
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Visualization of Weights and Layers
In trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and the resulting output images.
Helper-function for plotting convolutional weights
End of explanation
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image],
keep_prob: 1}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting the output of a convolutional layer
End of explanation
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
Explanation: Input Images
Helper-function for plotting an image.
End of explanation
image1 = data.test.images[0]
plot_image(image1)
Explanation: Plot an image from the test-set which will be used as an example below.
End of explanation
image2 = data.test.images[13]
plot_image(image2)
Explanation: Plot another example image from the test-set.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Note that positive weights are red and negative weights are blue.
End of explanation
plot_conv_layer(layer=layer_conv1, image=image1)
Explanation: Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
End of explanation
plot_conv_layer(layer=layer_conv1, image=image2)
Explanation: The following images are the results of applying the convolutional filters to the second image.
End of explanation
plot_conv_weights(weights=weights_conv2, input_channel=0)
Explanation: It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weigths for the first channel.
Note again that positive weights are red and negative weights are blue.
End of explanation
plot_conv_weights(weights=weights_conv2, input_channel=1)
Explanation: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
End of explanation
plot_conv_layer(layer=layer_conv2, image=image1)
Explanation: It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Applying these convolutional filters to the images that were ouput from the first conv-layer gives the following images.
Note that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.
End of explanation
plot_conv_layer(layer=layer_conv2, image=image2)
#plot_conv_layer(layer=layer_conv3, image=image1)
#plot_conv_layer(layer=layer_conv3, image=image2)
Explanation: And these are the results of applying the filter-weights to the second image.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
These images are then flattened and input to the fully-connected layer, but that is not shown here.
Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
15,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression and Core Python Libraries for Data Analysis and Scientific Computing
This assignment is devoted to linear regression. Using the example of predicting a person's height from their weight, you will see the mathematics behind it and, along the way, get acquainted with the core Python libraries needed for the rest of the course.
Materials
Lectures of this course on linear models and gradient descent
Documentation for the NumPy and SciPy libraries
Documentation for the Matplotlib library
Documentation for the Pandas library
Pandas Cheat Sheet
Documentation for the Seaborn library
Task 1. Exploratory data analysis with Pandas
In this task we will use the SOCR data on the height and weight of 25,000 adolescents.
[1]. If you do not have the Seaborn library installed, run the command conda install seaborn in the terminal. (Seaborn is not part of the Anaconda distribution, but it provides convenient high-level functionality for data visualization.)
Step1: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame
Step2: Most often, the first thing to do after reading in the data is to look at the first few records. This lets you catch data-reading errors (for example, when instead of 10 columns you end up with one whose name contains 9 semicolons). It also lets you get acquainted with the data and, at a minimum, look at the features and their nature (quantitative, categorical, etc.).
After that it is worth plotting histograms of the feature distributions — again, this helps to understand the nature of a feature (whether its distribution is power-law, normal, or something else). A histogram can also reveal values that differ strongly from the rest — "outliers" in the data.
Histograms are conveniently plotted with the plot method of a Pandas DataFrame with the argument kind='hist'.
Example. Let's plot a histogram of the height distribution of the adolescents in the sample data. We use the plot method of the DataFrame data with the argument y='Height' (the feature whose distribution we are plotting)
Step3: Arguments
Step4: One effective method of exploratory data analysis is plotting pairwise relationships between features. This creates an $m \times m$ grid of plots (m is the number of features), where histograms of the feature distributions are drawn on the diagonal and scatter plots of pairs of features are drawn off the diagonal. This can be done with the $scatter_matrix$ method of a Pandas Data Frame or the pairplot method of the Seaborn library.
To illustrate this method, it is more interesting to add a third feature. Let's create the Body Mass Index (BMI) feature. To do so, we use the convenient combination of the apply method of a Pandas DataFrame and Python lambda functions.
Step5: [3]. Create a figure that shows the pairwise relationships between the features 'Height', 'Weight' and 'BMI'. Use the pairplot method of the Seaborn library.
Step6: Often during exploratory data analysis you need to study how a quantitative feature depends on a categorical one (say, salary on an employee's gender). "Box-and-whisker" plots — boxplots from the Seaborn library — help with this. A box plot is a compact way to show statistics of a real-valued feature (mean and quartiles) for different values of a categorical feature. It also helps to spot "outliers" — observations whose value of the real-valued feature differs strongly from the others.
[4]. Create a new feature weight_category in the DataFrame data that takes 3 values
Step7: [5]. Plot a scatter plot of height versus weight using the plot method of a Pandas DataFrame with the argument kind='scatter'. Give the figure a title.
Step8: Task 2. Minimizing the squared error
In the simplest setting, the task of predicting the value of a real-valued target from other features (the regression problem) is solved by minimizing a quadratic error function.
[6]. Write a function that, given two parameters $w_0$ and $w_1$, computes the squared error of approximating the dependence of height $y$ on weight $x$ with the straight line $y = w_0 + w_1 * x$
Step9: So, we are solving the following problem
Step10: Minimizing a quadratic error function is a relatively simple problem because the function is convex. Many optimization methods exist for such a problem. Let's look at how the error function depends on one parameter (the slope of the line) if the other parameter (the intercept) is fixed.
[8]. Plot the dependence of the error function computed in item 6 on the parameter $w_1$ with $w_0$ = 50. Label the axes and the plot.
Step11: Now, using an optimization method, let's find the "optimal" slope of the line approximating the dependence of height on weight, with the coefficient fixed at $w_0 = 50$.
[9]. Using the minimize_scalar method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_1$ in the range [-5,5]. On the plot from item 5 of Task 1, draw the line corresponding to the parameter values ($w_0$, $w_1$) = (50, $w_1_opt$), where $w_1_opt$ is the optimal value of the parameter $w_1$ found in item 8.
Step12: When analyzing multidimensional data, one often wants to get an intuitive feel for the nature of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, for visualizing data in 2D and 3D, 2 or, respectively, 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data are displayed on a plane or in a volume.
Let's look at how to draw 3D pictures in Python, using as an example the function $z(x,y) = sin(\sqrt{x^2+y^2})$ for values of $x$ and $y$ in the interval [-5,5] with a step of 0.25.
Step13: Create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
Step14: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis «Intercept», the $y$ axis «Slope», and the $z$ axis «Error».
Step15: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the optimal parameter values $w_0$ and $w_1$ that were found. Label the axes and the plot. | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Linear Regression and Core Python Libraries for Data Analysis and Scientific Computing
This assignment is devoted to linear regression. Using the example of predicting a person's height from their weight, you will see the mathematics behind it and, along the way, get acquainted with the core Python libraries needed for the rest of the course.
Materials
Lectures of this course on linear models and gradient descent
Documentation for the NumPy and SciPy libraries
Documentation for the Matplotlib library
Documentation for the Pandas library
Pandas Cheat Sheet
Documentation for the Seaborn library
Task 1. Exploratory data analysis with Pandas
In this task we will use the SOCR data on the height and weight of 25,000 adolescents.
[1]. If you do not have the Seaborn library installed, run the command conda install seaborn in the terminal. (Seaborn is not part of the Anaconda distribution, but it provides convenient high-level functionality for data visualization.)
End of explanation
data = pd.read_csv('weights_heights.csv', index_col='Index')
Explanation: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame:
End of explanation
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
Explanation: Most often, the first thing to do after reading in the data is to look at the first few records. This lets you catch data-reading errors (for example, when instead of 10 columns you end up with one whose name contains 9 semicolons). It also lets you get acquainted with the data and, at a minimum, look at the features and their nature (quantitative, categorical, etc.).
After that it is worth plotting histograms of the feature distributions — again, this helps to understand the nature of a feature (whether its distribution is power-law, normal, or something else). A histogram can also reveal values that differ strongly from the rest — "outliers" in the data.
Histograms are conveniently plotted with the plot method of a Pandas DataFrame with the argument kind='hist'.
Example. Let's plot a histogram of the height distribution of the adolescents in the sample data. We use the plot method of the DataFrame data with the argument y='Height' (the feature whose distribution we are plotting)
End of explanation
data.head(5)
data.plot(y='Weight', kind='hist',
color='green', title='Weight (lb.) distribution')
Explanation: Arguments:
y='Height' - the feature whose distribution we are plotting
kind='hist' - means that a histogram is plotted
color='red' - the color
[2]. Look at the first 5 records using the head method of the Pandas DataFrame. Draw a histogram of the weight distribution using the plot method of the Pandas DataFrame. Make the histogram green and give the figure a title.
End of explanation
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
Explanation: One effective method of exploratory data analysis is plotting pairwise relationships between features. This creates an $m \times m$ grid of plots (m is the number of features), where histograms of the feature distributions are drawn on the diagonal and scatter plots of pairs of features are drawn off the diagonal. This can be done with the $scatter_matrix$ method of a Pandas Data Frame or the pairplot method of the Seaborn library.
To illustrate this method, it is more interesting to add a third feature. Let's create the Body Mass Index (BMI) feature. To do so, we use the convenient combination of the apply method of a Pandas DataFrame and Python lambda functions.
End of explanation
sns.pairplot(data)
Explanation: [3]. Create a figure that shows the pairwise relationships between the features 'Height', 'Weight' and 'BMI'. Use the pairplot method of the Seaborn library.
End of explanation
def weight_category(weight):
if weight < 120:
return 1
elif weight >= 150:
return 3
else:
return 2
data['weight_cat'] = data['Weight'].apply(weight_category)
bxp = sns.boxplot(x="weight_cat", y="Height", data=data)
bxp.set_xlabel(u'Весовая категория')
bxp.set_ylabel(u'Рост')
Explanation: Often during exploratory data analysis you need to study how a quantitative feature depends on a categorical one (say, salary on an employee's gender). "Box-and-whisker" plots — boxplots from the Seaborn library — help with this. A box plot is a compact way to show statistics of a real-valued feature (mean and quartiles) for different values of a categorical feature. It also helps to spot "outliers" — observations whose value of the real-valued feature differs strongly from the others.
[4]. Create a new feature weight_category in the DataFrame data that takes 3 values: 1 if the weight is less than 120 pounds (~54 kg), 3 if the weight is greater than or equal to 150 pounds (~68 kg), and 2 otherwise. Build a box-and-whisker plot (boxplot) showing how height depends on the weight category. Use the boxplot method of the Seaborn library and the apply method of the Pandas DataFrame. Label the y axis «Рост» (Height) and the x axis «Весовая категория» (Weight category), matching the labels used in the code.
End of explanation
data.plot(y='Height', x='Weight', kind='scatter', title=u'Зависимость роста от веса')
Explanation: [5]. Plot a scatter plot of height versus weight using the plot method of a Pandas DataFrame with the argument kind='scatter'. Give the figure a title.
End of explanation
def error(w):
return np.sum((data['Height'] - (w[0] + w[1] * data['Weight'])) ** 2)
Explanation: Task 2. Minimizing the squared error
In the simplest setting, the task of predicting the value of a real-valued target from other features (the regression problem) is solved by minimizing a quadratic error function.
[6]. Write a function that, given two parameters $w_0$ and $w_1$, computes the squared error of approximating the dependence of height $y$ on weight $x$ with the straight line $y = w_0 + w_1 * x$:
$$error(w_0, w_1) = \sum_{i=1}^n {(y_i - (w_0 + w_1 * x_i))}^2 $$
Here $n$ is the number of observations in the dataset, and $y_i$ and $x_i$ are the height and weight of the $i$-th person in the dataset.
End of explanation
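As a quick, purely illustrative sanity check (not part of the original assignment), the error function can be evaluated for a couple of candidate parameter settings:
# Compare the squared error of two candidate lines; the better line gives the smaller value.
print(error([60, 0.05]), error([50, 0.16]))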
x = np.linspace(60,180)
y1 = 60 + 0.05 * x
y2 = 50 + 0.16 * x
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title=u'Зависимость роста от веса')
plt.xlabel(u'Вес')
plt.ylabel(u'Рост')
plt.plot(x,y1)
plt.plot(x,y2)
Explanation: So, we are solving the following problem: how to draw a straight line through the cloud of points corresponding to the observations in our dataset, in the feature space of "Height" and "Weight", so as to minimize the functional from item 6. To begin with, let's draw a couple of arbitrary lines and convince ourselves that they capture the dependence of height on weight poorly.
[7]. On the plot from item 5 of Task 1, draw two lines corresponding to the parameter values ($w_0, w_1) = (60, 0.05)$ and ($w_0, w_1) = (50, 0.16)$. Use the plot method from matplotlib.pyplot and the linspace method of the NumPy library. Label the axes and the plot.
End of explanation
w1 = np.linspace(-10, 10)
w0 = [50] * len(w1)
w = zip(w0, w1)
e = []
for weight in w:
e.append(error(weight))
plt.plot(w1, e)
plt.xlabel('w1')
plt.ylabel('error')
plt.title(u'Зависимость ошибки от w1 при w0 = 50')
Explanation: Minimizing a quadratic error function is a relatively simple problem because the function is convex. Many optimization methods exist for such a problem. Let's look at how the error function depends on one parameter (the slope of the line) if the other parameter (the intercept) is fixed.
[8]. Plot the dependence of the error function computed in item 6 on the parameter $w_1$ with $w_0$ = 50. Label the axes and the plot.
End of explanation
from scipy.optimize import minimize_scalar
def error50(w1):
return np.sum((data['Height']-(50+w1*data['Weight']))**2)
w1_opt = minimize_scalar(error50, bounds=(-5,5), method='bounded')
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title=u'Оптимальный наклон прямой при w0=50')
plt.plot(x, 50 + w1_opt.x * x)
Explanation: Now, using an optimization method, let's find the "optimal" slope of the line approximating the dependence of height on weight, with the coefficient fixed at $w_0 = 50$.
[9]. Using the minimize_scalar method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_1$ in the range [-5,5]. On the plot from item 5 of Task 1, draw the line corresponding to the parameter values ($w_0$, $w_1$) = (50, $w_1_opt$), where $w_1_opt$ is the optimal value of the parameter $w_1$ found in item 8.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
Explanation: When analyzing multidimensional data, one often wants to get an intuitive feel for the nature of the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, for visualizing data in 2D and 3D, 2 or, respectively, 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data are displayed on a plane or in a volume.
Let's look at how to draw 3D pictures in Python, using as an example the function $z(x,y) = sin(\sqrt{x^2+y^2})$ for values of $x$ and $y$ in the interval [-5,5] with a step of 0.25.
End of explanation
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# We use the meshgrid method, which builds coordinate matrices
# from the coordinate vectors. Define the required function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, we use the *plot_surface* method of the
# Axes3DSubplot object. We also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Explanation: Create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
End of explanation
w0 = np.arange(-100, 100.25)
w1 = np.arange(-5, 5, 0.25)
w0,w1 = np.meshgrid(w0, w1)
def error_arr(w0,w1):
a=w0.shape[0]
b=w0.shape[1]
Z=np.zeros((a,b))
for i in range(a):
for j in range(b):
Z[i,j]=error((w0[i,j],w1[i,j]))
return Z
z = error_arr(w0,w1)
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(w0, w1, z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
Explanation: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis «Intercept», the $y$ axis «Slope», and the $z$ axis «Error».
End of explanation
from scipy.optimize import minimize
res = minimize(error, [0,0], method='L-BFGS-B', bounds=[(-100,100),(-5,5)])
w_opt = res.x
print "w0 = %s\nw1 = %s" % (w_opt[0],w_opt[1])
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title=u'Оптимальная прямая')
plt.plot(x,w_opt[0]+w_opt[1]*x)
Explanation: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_0$ in the range [-100,100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the line corresponding to the optimal parameter values $w_0$ and $w_1$ that were found. Label the axes and the plot.
End of explanation |
15,068 | Given the following text description, write Python code to implement the functionality described.
Description:
You're a hungry rabbit, and you already have eaten a certain number of carrots,
but now you need to eat more carrots to complete the day's meals.
you should return an array of [ total number of eaten carrots after your meals,
the number of carrots left after your meals ]
if there are not enough remaining carrots, you will eat all remaining carrots, but will still be hungry.
Example:
* eat(5, 6, 10) -> [11, 4]
* eat(4, 8, 9) -> [12, 1]
* eat(1, 10, 10) -> [11, 0]
* eat(2, 11, 5) -> [7, 0]
Variables:
@number : integer
the number of carrots that you have eaten.
@need : integer
the number of carrots that you need to eat.
@remaining : integer
the number of remaining carrots that exist in stock
Constrain:
* 0 <= number <= 1000
* 0 <= need <= 1000
* 0 <= remaining <= 1000
Have fun :)
| Python Code:
def eat(number, need, remaining):
    if need <= remaining:
        return [number + need, remaining - need]
    else:
        return [number + remaining, 0]
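A quick sanity check against the examples from the description (added for illustration only):
assert eat(5, 6, 10) == [11, 4]
assert eat(4, 8, 9) == [12, 1]
assert eat(1, 10, 10) == [11, 0]
assert eat(2, 11, 5) == [7, 0]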
15,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Regression using Statsmodels
This tutorial comes from datarobot's blog post on multi-regression using statsmodel. I only fixed the broken links to the data.
This is part of a series of blog posts showing how to do common statistical learning techniques with Python. We provide only a small amount of background on the concepts and techniques we cover, so if you’d like a more thorough explanation check out Introduction to Statistical Learning or sign up for the free online course run by the book's authors here.
Earlier we covered Ordinary Least Squares regression with a single variable. In this posting we will build upon that by extending Linear Regression to multiple input variables giving rise to Multiple Regression, the workhorse of statistical learning.
We first describe Multiple Regression in an intuitive way by moving from a straight line in a single predictor case to a 2d plane in the case of two predictors. Next we explain how to deal with categorical variables in the context of linear regression. The final section of the post investigates basic extensions. This includes interaction terms and fitting non-linear relationships using polynomial regression.
Understanding Multiple Regression
In Ordinary Least Squares Regression with a single variable we described the relationship between the predictor and the response with a straight line. In the case of multiple regression we extend this idea by fitting a $p$-dimensional hyperplane to our $p$ predictors.
We can show this for two predictor variables in a three dimensional plot. In the following example we will use the advertising dataset which consists of the sales of products and their advertising budget in three different media TV, radio, newspaper.
Step1: The multiple regression model describes the response as a weighted sum of the predictors
Step2: You can also use the formulaic interface of statsmodels to compute regression with multiple predictors. You just need to append the predictors to the formula via a '+' symbol.
Step3: Handling Categorical Variables
Often in statistical learning and data analysis we encounter variables that are not quantitative. A common example is gender or geographic region. We would like to be able to handle them naturally. Here is a sample dataset investigating chronic heart disease.
Step4: The variable famhist indicates whether the patient has a family history of coronary artery disease. The percentage of the response chd (chronic heart disease) for patients with absent/present family history of coronary artery disease is
Step5: These two levels (absent/present) have a natural ordering to them, so we can perform linear regression on them, after we convert them to numeric. This can be done using pd.Categorical.
Step6: There are several possible approaches to encode categorical values, and statsmodels has built-in support for many of them. In general these work by splitting a categorical variable into many different binary variables. The simplest way to encode categoricals is "dummy-encoding" which encodes a k-level categorical variable into k-1 binary variables. In statsmodels this is done easily using the C() function.
After we performed dummy encoding the equation for the fit is now
Step7: Because hlthp is a binary variable we can visualize the linear regression model by plotting two lines
Step8: Notice that the two lines are parallel. This is because the categorical variable affects only the intercept and not the slope (which is a function of logincome).
We can then include an interaction term to explore the effect of an interaction between the two -- i.e. we let the slope be different for the two categories.
Step9: The * in the formula means that we want the interaction term in addition each term separately (called main-effects). If you want to include just an interaction, use | Python Code:
import pandas as pd
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df_adv = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
X = df_adv[['TV', 'Radio']]
y = df_adv['Sales']
df_adv.head()
Explanation: Multiple Regression using Statsmodels
This tutorial comes from datarobot's blog post on multi-regression using statsmodel. I only fixed the broken links to the data.
This is part of a series of blog posts showing how to do common statistical learning techniques with Python. We provide only a small amount of background on the concepts and techniques we cover, so if you’d like a more thorough explanation check out Introduction to Statistical Learning or sign up for the free online course run by the book's authors here.
Earlier we covered Ordinary Least Squares regression with a single variable. In this posting we will build upon that by extending Linear Regression to multiple input variables giving rise to Multiple Regression, the workhorse of statistical learning.
We first describe Multiple Regression in an intuitive way by moving from a straight line in a single predictor case to a 2d plane in the case of two predictors. Next we explain how to deal with categorical variables in the context of linear regression. The final section of the post investigates basic extensions. This includes interaction terms and fitting non-linear relationships using polynomial regression.
Understanding Multiple Regression
In Ordinary Least Squares Regression with a single variable we described the relationship between the predictor and the response with a straight line. In the case of multiple regression we extend this idea by fitting a $p$-dimensional hyperplane to our $p$ predictors.
We can show this for two predictor variables in a three dimensional plot. In the following example we will use the advertising dataset which consists of the sales of products and their advertising budget in three different media TV, radio, newspaper.
End of explanation
X = df_adv[['TV', 'Radio']]
y = df_adv['Sales']
## fit a OLS model with intercept on TV and Radio
X = sm.add_constant(X)
est = sm.OLS(y, X).fit()
est.summary()
Explanation: The multiple regression model describes the response as a weighted sum of the predictors:
$Sales = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio$
This model can be visualized as a 2-d plane in 3-d space:
The plot above shows data points above the hyperplane in white and points below the hyperplane in black. The color of the plane is determined by the corresonding predicted Sales values (blue = low, red = high). The Python code to generate the 3-d plot can be found in the appendix.
Just as with the single variable case, calling est.summary will give us detailed information about the model fit. You can find a description of each of the fields in the tables below in the previous blog post here.
End of explanation
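The appendix with the 3-d plotting code is not reproduced here, so the following is only a minimal sketch of how such a plane could be drawn from the fitted est above; the grid resolution and styling are arbitrary choices.
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection
tv_grid, radio_grid = np.meshgrid(
    np.linspace(df_adv.TV.min(), df_adv.TV.max(), 30),
    np.linspace(df_adv.Radio.min(), df_adv.Radio.max(), 30))
# Evaluate the fitted plane Sales = b0 + b1*TV + b2*Radio on the grid.
plane = (est.params['const'] + est.params['TV'] * tv_grid
         + est.params['Radio'] * radio_grid)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(tv_grid, radio_grid, plane, alpha=0.4)
ax.scatter(df_adv.TV, df_adv.Radio, df_adv.Sales, c='k', s=5)
ax.set_xlabel('TV'); ax.set_ylabel('Radio'); ax.set_zlabel('Sales')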
# import formula api as alias smf
import statsmodels.formula.api as smf
# formula: response ~ predictor + predictor
est = smf.ols(formula='Sales ~ TV + Radio', data=df_adv).fit()
Explanation: You can also use the formulaic interface of statsmodels to compute regression with multiple predictors. You just need to append the predictors to the formula via a '+' symbol.
End of explanation
import pandas as pd
df = pd.read_csv('http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/SAheart.data', index_col=0)
# copy data and separate predictors and response
X = df.copy()
y = X.pop('chd')
df.head()
Explanation: Handling Categorical Variables
Often in statistical learning and data analysis we encounter variables that are not quantitative. A common example is gender or geographic region. We would like to be able to handle them naturally. Here is a sample dataset investigating chronic heart disease.
End of explanation
# compute percentage of chronic heart disease for famhist
y.groupby(X.famhist).mean()
Explanation: The variable famhist indicates whether the patient has a family history of coronary artery disease. The percentage of the response chd (chronic heart disease) for patients with absent/present family history of coronary artery disease is:
End of explanation
import statsmodels.formula.api as smf
# encode df.famhist as a numeric categorical (0 = Absent, 1 = Present)
df['famhist_ord'] = pd.Categorical(df.famhist).codes
est = smf.ols(formula="chd ~ famhist_ord", data=df).fit()
Explanation: These two levels (absent/present) have a natural ordering to them, so we can perform linear regression on them, after we convert them to numeric. This can be done using pd.Categorical.
End of explanation
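The C() helper mentioned above is not exercised in code, so here is a minimal supplementary sketch that lets statsmodels dummy-encode famhist directly inside the formula (est_c is a name introduced just for this example):
# Equivalent fit with patsy creating the dummy column for famhist == 'Present'.
est_c = smf.ols(formula="chd ~ C(famhist)", data=df).fit()
print(est_c.params)   # roughly Intercept ~ 0.237 and C(famhist)[T.Present] ~ 0.263, as quoted below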
df = pd.read_csv('https://raw.githubusercontent.com/statsmodels/statsmodels/master/statsmodels/datasets/randhie/src/randhie.csv')
df["logincome"] = np.log1p(df.income)
df[['mdvis', 'logincome', 'hlthp']].tail()
Explanation: There are several possible approaches to encode categorical values, and statsmodels has built-in support for many of them. In general these work by splitting a categorical variable into many different binary variables. The simplest way to encode categoricals is "dummy-encoding" which encodes a k-level categorical variable into k-1 binary variables. In statsmodels this is done easily using the C() function.
After we performed dummy encoding the equation for the fit is now:
$ \hat{y} = \text{Intercept} + C(famhist)[T.Present] \times I(\text{famhist} = \text{Present})$
where $I$ is the indicator function that is 1 if the argument is true and 0 otherwise.
Hence the estimated percentage with chronic heart disease when famhist == present is 0.2370 + 0.2630 = 0.5000 and the estimated percentage with chronic heart disease when famhist == absent is 0.2370.
This same approach generalizes well to cases with more than two levels. For example, if there were entries in our dataset with famhist equal to 'Missing' we could create two 'dummy' variables, one to check if famhis equals present, and another to check if famhist equals 'Missing'.
Interactions
Now that we have covered categorical variables, interaction terms are easier to explain.
We might be interested in studying the relationship between doctor visits (mdvis) and both log income and the binary variable health status (hlthp).
End of explanation
plt.scatter(df.logincome, df.mdvis, alpha=0.3)
plt.xlabel('Log income')
plt.ylabel('Number of visits')
income_linspace = np.linspace(df.logincome.min(), df.logincome.max(), 100)
est = smf.ols(formula='mdvis ~ logincome + hlthp', data=df).fit()
plt.plot(income_linspace, est.params[0] + est.params[1] * income_linspace + est.params[2] * 0, 'r')
plt.plot(income_linspace, est.params[0] + est.params[1] * income_linspace + est.params[2] * 1, 'g')
short_summary(est)
Explanation: Because hlthp is a binary variable we can visualize the linear regression model by plotting two lines: one for hlthp == 0 and one for hlthp == 1.
End of explanation
plt.scatter(df.logincome, df.mdvis, alpha=0.3)
plt.xlabel('Log income')
plt.ylabel('Number of visits')
est = smf.ols(formula='mdvis ~ hlthp * logincome', data=df).fit()
plt.plot(income_linspace, est.params[0] + est.params[1] * 0 + est.params[2] * income_linspace +
est.params[3] * 0 * income_linspace, 'r')
plt.plot(income_linspace, est.params[0] + est.params[1] * 1 + est.params[2] * income_linspace +
est.params[3] * 1 * income_linspace, 'g')
short_summary(est)
Explanation: Notice that the two lines are parallel. This is because the categorical variable affects only the intercept and not the slope (which is a function of logincome).
We can then include an interaction term to explore the effect of an interaction between the two -- i.e. we let the slope be different for the two categories.
End of explanation
# load the boston housing dataset - median house values in the Boston area
df = pd.read_csv('http://vincentarelbundock.github.io/Rdatasets/csv/MASS/Boston.csv')
# plot lstat (% lower status of the population) against median value
plt.figure(figsize=(6 * 1.618, 6))
plt.scatter(df.lstat, df.medv, s=10, alpha=0.3)
plt.xlabel('lstat')
plt.ylabel('medv')
# points linearlyd space on lstats
x = pd.DataFrame({'lstat': np.linspace(df.lstat.min(), df.lstat.max(), 100)})
# 1-st order polynomial
poly_1 = smf.ols(formula='medv ~ 1 + lstat', data=df).fit()
plt.plot(x.lstat, poly_1.predict(x), 'b-', label='Poly n=1 $R^2$=%.2f' % poly_1.rsquared,
alpha=0.9)
# 2-nd order polynomial
poly_2 = smf.ols(formula='medv ~ 1 + lstat + I(lstat ** 2.0)', data=df).fit()
plt.plot(x.lstat, poly_2.predict(x), 'g-', label='Poly n=2 $R^2$=%.2f' % poly_2.rsquared,
alpha=0.9)
# 3-rd order polynomial
poly_3 = smf.ols(formula='medv ~ 1 + lstat + I(lstat ** 2.0) + I(lstat ** 3.0)', data=df).fit()
plt.plot(x.lstat, poly_3.predict(x), 'r-', alpha=0.9,
label='Poly n=3 $R^2$=%.2f' % poly_3.rsquared)
plt.legend()
Explanation: The * in the formula means that we want the interaction term in addition each term separately (called main-effects). If you want to include just an interaction, use : instead. This is generally avoided in analysis because it is almost always the case that, if a variable is important due to an interaction, it should have an effect by itself.
To summarize what is happening here:
If we include the category variables without interactions we have two lines, one for hlthp == 1 and one for hlthp == 0, with all having the same slope but different intercepts.
If we include the interactions, now each of the lines can have a different slope. This captures the effect that variation with income may be different for people who are in poor health than for people who are in better health.
For more information on the supported formulas see the documentation of patsy, used by statsmodels to parse the formula.
Polynomial regression
Despite its name, linear regression can be used to fit non-linear functions. A linear regression model is linear in the model parameters, not necessarily in the predictors. If you add non-linear transformations of your predictors to the linear regression model, the model will be non-linear in the predictors.
A very popular non-linear regression technique is Polynomial Regression, a technique which models the relationship between the response and the predictors as an n-th order polynomial. The higher the order of the polynomial, the more "wiggly" the functions you can fit. Using higher-order polynomials comes at a price, however. First, the computational complexity of model fitting grows as the number of adaptable parameters grows. Second, more complex models have a higher risk of overfitting. Overfitting refers to a situation in which the model fits the idiosyncrasies of the training data and loses the ability to generalize from the seen to predict the unseen.
To illustrate polynomial regression we will consider the Boston housing dataset. We'll look into the task of predicting median house values in the Boston area using the predictor lstat, defined as the "proportion of the adults without some high school education and proportion of male workers classified as laborers" (see Hedonic House Prices and the Demand for Clean Air, Harrison & Rubinfeld, 1978).
We can clearly see that the relationship between medv and lstat is non-linear: the blue (straight) line is a poor fit; a better fit can be obtained by including higher order terms.
End of explanation |
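As a small aside (not in the original post), the quadratic fit above could in principle also be obtained with NumPy's polyfit, if only the coefficients are needed:
# Quadratic least-squares fit of medv on lstat using NumPy directly.
coeffs = np.polyfit(df.lstat, df.medv, deg=2)     # highest-order coefficient first
print(coeffs)
poly2_preds = np.polyval(coeffs, x.lstat)         # same curve as poly_2.predict(x) up to rounding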
15,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Churn Prediction - Predicting when your customers will churn
1 - Introduction
A software as a service (SaaS) company provides a suite of products for Small-to-Medium enterprises, such as data storage, Accounting, Travel and Expenses management as well as Payroll management.
So as to help the CFO forecast the acquisition and marketing costs for the next fiscal year, the Data Science team wants to build a churn model to predict when customers are likely to stop their monthly subscription. Thus, once customers have been flagged as likely to churn within a certain time window, the company could take the necessary retention actions.
2 - Dataset
2.1 - Description and Overview
Step1: 2.2 - From categorical to numerical
There are several categorical features that need to be encoded into one-hot vectors
Step2: 3 - Exploratory Data Analysis
As this tutorial is mainly designed to provide an example of how to use Pysurvival, we will not perform a thorough exploratory data analysis but we greatly encourage the reader to do so by taking a look at the predictive maintenance tutorial that provides a very detailed study.
Here, we will just check if the dataset contains Null values or duplicated rows, and have a look at feature correlations.
3.1 - Null values and duplicates
The first thing to do is to check whether the raw_dataset contains null values or duplicated rows.
Step3: As it turns out the raw_dataset doesn't have any Null values or duplicates.
3.2 - Correlations
Let's compute and visualize the correlation between the features
Step4: 4 - Modeling
4.1 - Building the model
So as to perform cross-validation later on and assess the performance of the model, let's split the dataset into training and testing sets.
Step5: Let's now fit an Extra Survival Trees model to the training set.
Note
Step6: 4.2 - Variables importance
Having built a Survival Forest model allows us to compute the features importance
Step7: Thanks to the feature importance, we get a better understanding of what drives retention or churn. Here, the Accounting and Payroll Management products, the score on the satisfaction survey, and the amount of time spent on the phone with customer support play a primary role.
Note
Step8: 5.2 - Brier Score
The Brier score measures the average discrepancies between the status and the estimated probabilities at a given time. Thus, the lower the score (usually below 0.25), the better the predictive performance. To assess the overall error measure across multiple time points, the Integrated Brier Score (IBS) is usually computed as well.
Step9: The IBS is equal to 0.1 on the entire model time axis. This indicates that the model will have good predictive abilities.
6 - Predictions
6.1 - Overall predictions
Now that we have built a model that seems to provide great performances, let's compare the time series of the actual and predicted number of customers who stop doing business with the SaaS company, for each time t.
Step10: The model provides very good results overall as on an entire 12 months window, it only makes an average absolute error of ~7 customers.
6.2 - Individual predictions
Now that we know that we can provide reliable predictions for an entire cohort, let's compute the probability of remaining a customer for all times t.
First, we can construct the risk groups based on the distribution of risk scores. The helper function create_risk_groups, which can be found in pysurvival.utils.display, will help us do that
Step11: Here, it is possible to distinguish 3 main groups
Step12: Here, we can see that the model manages to provide a great prediction of the event time.
7 - Conclusion
We can now save our model so as to put it in production and score future customers. | Python Code:
# Importing modules
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from pysurvival.datasets import Dataset
%pylab inline
# Reading the dataset
raw_dataset = Dataset('churn').load()
print("The raw_dataset has the following shape: {}.".format(raw_dataset.shape))
raw_dataset.head(2)
Explanation: Churn Prediction - Predicting when your customers will churn
1 - Introduction
A software as a service (SaaS) company provides a suite of products for Small-to-Medium enterprises, such as data storage, Accounting, Travel and Expenses management as well as Payroll management.
So as to help the CFO forecast the acquisition and marketing costs for the next fiscal year, the Data Science team wants to build a churn model to predict when customers are likely to stop their monthly subscription. Thus, once customers have been flagged as likely to churn within a certain time window, the company could take the necessary retention actions.
2 - Dataset
2.1 - Description and Overview
End of explanation
# Creating one-hot vectors
categories = ['product_travel_expense', 'product_payroll',
'product_accounting', 'us_region', 'company_size']
dataset = pd.get_dummies(raw_dataset, columns=categories, drop_first=True)
# Creating the time and event columns
time_column = 'months_active'
event_column = 'churned'
# Extracting the features
features = np.setdiff1d(dataset.columns, [time_column, event_column] ).tolist()
Explanation: 2.2 - From categorical to numerical
There are several categorical features that need to be encoded into one-hot vectors:
* product_travel_expense
* product_payroll
* product_accounting
* us_region
* company_size
End of explanation
# Checking for null values
N_null = sum(dataset[features].isnull().sum())
print("The raw_dataset contains {} null values".format(N_null)) #0 null values
# Removing duplicates if there exist
N_dupli = sum(dataset.duplicated(keep='first'))
dataset = dataset.drop_duplicates(keep='first').reset_index(drop=True)
print("The raw_dataset contains {} duplicates".format(N_dupli))
# Number of samples in the dataset
N = dataset.shape[0]
Explanation: 3 - Exploratory Data Analysis
As this tutorial is mainly designed to provide an example of how to use Pysurvival, we will not perform a thorough exploratory data analysis but we greatly encourage the reader to do so by taking a look at the predictive maintenance tutorial that provides a very detailed study.
Here, we will just check if the dataset contains Null values or duplicated rows, and have a look at feature correlations.
3.1 - Null values and duplicates
The first thing to do is checking if the raw_dataset contains Null values and has duplicated rows.
End of explanation
from pysurvival.utils.display import correlation_matrix
correlation_matrix(dataset[features], figure_size=(30,15), text_fontsize=10)
Explanation: As it turns out the raw_dataset doesn't have any Null values or duplicates.
3.2 - Correlations
Let's compute and visualize the correlation between the features
End of explanation
# Building training and testing sets
from sklearn.model_selection import train_test_split
index_train, index_test = train_test_split( range(N), test_size = 0.35)
data_train = dataset.loc[index_train].reset_index( drop = True )
data_test = dataset.loc[index_test].reset_index( drop = True )
# Creating the X, T and E inputs
X_train, X_test = data_train[features], data_test[features]
T_train, T_test = data_train[time_column], data_test[time_column]
E_train, E_test = data_train[event_column], data_test[event_column]
Explanation: 4 - Modeling
4.1 - Building the model
So as to perform cross-validation later on and assess the performance of the model, let's split the dataset into training and testing sets.
End of explanation
from pysurvival.models.survival_forest import ExtraSurvivalTreesModel
# Fitting the model
xst = ExtraSurvivalTreesModel(num_trees=200)
xst.fit(X_train, T_train, E_train, max_features="sqrt",
max_depth=5, min_node_size=20, num_random_splits= 200 )
Explanation: Let's now fit an Extra Survival Trees model to the training set.
Note: The choice of the model and hyperparameters was obtained using grid-search selection, not displayed in this tutorial.
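As a rough illustration of how such a selection could be done (this is an addition, not the original tuning code; the grid values and the use of the test split for scoring are simplifications for brevity):
```python
from itertools import product
from pysurvival.models.survival_forest import ExtraSurvivalTreesModel
from pysurvival.utils.metrics import concordance_index

# Illustrative grid only; not the values used to select the final model
grid = {'max_depth': [3, 5, 8], 'min_node_size': [10, 20, 50]}

best = None
for max_depth, min_node_size in product(grid['max_depth'], grid['min_node_size']):
    candidate = ExtraSurvivalTreesModel(num_trees=200)
    candidate.fit(X_train, T_train, E_train, max_features="sqrt",
                  max_depth=max_depth, min_node_size=min_node_size,
                  num_random_splits=200)
    # Ideally score on a separate validation split or with k-fold CV
    score = concordance_index(candidate, X_test, T_test, E_test)
    if best is None or score > best[0]:
        best = (score, max_depth, min_node_size)

print('best (c-index, max_depth, min_node_size):', best)
```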
End of explanation
# Computing variables importance
xst.variable_importance_table.head(5)
Explanation: 4.2 - Variables importance
Having built a Survival Forest model allows us to compute the features importance:
End of explanation
from pysurvival.utils.metrics import concordance_index
c_index = concordance_index(xst, X_test, T_test, E_test)
print('C-index: {:.2f}'.format(c_index))
Explanation: Thanks to the feature importance, we get a better understanding of what drives retention or churn. Here, the Accounting and Payroll Management products, the score on the satisfaction survey, and the amount of time spent on the phone with customer support play a key role.
Note: The importance of a feature is the difference between the prediction error rates obtained with and without randomly perturbing that feature, as described by Breiman et al.
5 - Cross Validation
In order to assess the model performance, we previously split the original dataset into training and testing sets, so that we can now compute its performance metrics on the testing set:
5.1 - C-index
The C-index represents the global assessment of the model discrimination power: this is the model’s ability to correctly provide a reliable ranking of the survival times based on the individual risk scores. In general, when the C-index is close to 1, the model has an almost perfect discriminatory power; but if it is close to 0.5, it has no ability to discriminate between low and high risk subjects.
End of explanation
from pysurvival.utils.display import integrated_brier_score
ibs = integrated_brier_score(xst, X_test, T_test, E_test, t_max=12, figure_size=(15,5))
print('IBS: {:.2f}'.format(ibs))
Explanation: 5.2 - Brier Score
The Brier score measures the average discrepancy between the observed event status and the estimated probabilities at a given time. Thus, the lower the score (usually below 0.25), the better the predictive performance. To assess the overall error across multiple time points, the Integrated Brier Score (IBS) is usually computed as well.
End of explanation
from pysurvival.utils.display import compare_to_actual
results = compare_to_actual(xst, X_test, T_test, E_test,
is_at_risk = False, figure_size=(16, 6),
metrics = ['rmse', 'mean', 'median'])
Explanation: The IBS is equal to 0.1 on the entire model time axis. This indicates that the model will have good predictive abilities.
6 - Predictions
6.1 - Overall predictions
Now that we have built a model that seems to provide great performances, let's compare the time series of the actual and predicted number of customers who stop doing business with the SaaS company, for each time t.
End of explanation
from pysurvival.utils.display import create_risk_groups
risk_groups = create_risk_groups(model=xst, X=X_test,
use_log = True, num_bins=30, figure_size=(20, 4),
low={'lower_bound':0, 'upper_bound':1.65, 'color':'red'},
medium={'lower_bound':1.65, 'upper_bound':2.2,'color':'green'},
high={'lower_bound':2.2, 'upper_bound':3, 'color':'blue'}
)
Explanation: The model provides very good results overall: over the entire 12-month window, it only makes an average absolute error of ~7 customers.
6.2 - Individual predictions
Now that we know that we can provide reliable predictions for an entire cohort, let's compute the probability of remaining a customer for all times t.
First, we can construct the risk groups based on risk scores distribution. The helper function create_risk_groups, which can be found in pysurvival.utils.display, will help us do that:
End of explanation
# Initializing the figure
fig, ax = plt.subplots(figsize=(15, 5))
# Selecting a random individual that experienced an event from each group
groups = []
for i, (label, (color, indexes)) in enumerate(risk_groups.items()) :
# Selecting the individuals that belong to this group
if len(indexes) == 0 :
continue
X = X_test.values[indexes, :]
T = T_test.values[indexes]
E = E_test.values[indexes]
# Randomly extracting an individual that experienced an event
choices = np.argwhere((E==1.)).flatten()
if len(choices) == 0 :
continue
k = np.random.choice( choices, 1)[0]
# Saving the time of event
t = T[k]
# Computing the Survival function for all times t
survival = xst.predict_survival(X[k, :]).flatten()
# Displaying the functions
label_ = '{} risk'.format(label)
plt.plot(xst.times, survival, color = color, label=label_, lw=2)
groups.append(label)
# Actual time
plt.axvline(x=t, color=color, ls ='--')
ax.annotate('T={:.1f}'.format(t), xy=(t, 0.5*(1.+0.2*i)),
xytext=(t, 0.5*(1.+0.2*i)), fontsize=12)
# Show everything
groups_str = ', '.join(groups)
title = "Comparing Survival functions between {} risk grades".format(groups_str)
plt.legend(fontsize=12)
plt.title(title, fontsize=15)
plt.ylim(0, 1.05)
plt.show()
Explanation: Here, it is possible to distinguish 3 main groups: low, medium and high risk groups. Because the C-index is high, the model will be able to rank the survival times of a random unit of each group, such that $t_{high} \leq t_{medium} \leq t_{low}$.
Let's randomly select one individual unit from each group and compare their likelihoods of remaining a customer. To demonstrate the point, we will purposely select units that experienced an event, so that the actual time of the event can be visualized.
End of explanation
# Let's now save our model
from pysurvival.utils import save_model
save_model(xst, '/Users/xxx/Desktop/churn_csf.zip')
Explanation: Here, we can see that the model manages to provide a great prediction of the event time.
7 - Conclusion
We can now save our model so as to put it in production and score future customers.
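As a hedged sketch of that scoring step (load_model is the counterpart of save_model in pysurvival.utils; the path and the new_customers DataFrame are hypothetical placeholders):
```python
from pysurvival.utils import load_model

loaded_model = load_model('/Users/xxx/Desktop/churn_csf.zip')

# `new_customers` would be a DataFrame with the same feature columns used in training
risk_scores = loaded_model.predict_risk(new_customers[features])
survival_curves = loaded_model.predict_survival(new_customers[features])
```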
End of explanation |
15,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following notebook is a tutorial on machine learning to detect change using geospatial_learn
Documentation on the lib can be found here
Step1: Before we begin!
In jupyter, to see the docstring, which explains any function (provided someone has written it!) enter the function as you normaly would, but put a question mark at the end and press shift and enter
Step2: change directory to where you have saved the files
Paths to the 2 images and model - please alter as appropriate in your own dir
Step3: First thing to do is stack our 'before' and 'after' images
Step4: Next classify the temporal S2 stack
Step5: A note on model creation with k-fold cross validated grid search
If you wish to create your own model with training samples train the model with the above data.
Please note this will take time with a large training set
We first define the parameters we wish to grid search over. The parameters below are just an example, It is of course possible for these to be more numerous at the cost of processing time. The time is a function of the number of possibilities per parameter. There are defaults in geospatial-learn, but it is recommended you define your own.
```python
params = {'n_estimators'
Step6: Lastly, polygonise the thematic raster for visualisation in QGIS
There is a style file available for this in the zip called 'Change_style.qml'.
For those not familiar with python, the line below uses some string concatenation out of lazyness for renaming files.
Step7: As well as a thematic map, we can produce a multiband map of class probabilities with the following function
python
learning.prob_pixel_bloc(rfModel, stkRas, 8, probMap, 8, blocksize=256)
The input variables are the same as the classify function except we also input the number of classes (7 in this case)
Step8: Check the results in QGIS! | Python Code:
%matplotlib inline
Explanation: The following notebook is a tutorial on machine learning to detect change using geospatial_learn
Documentation on the lib can be found here:
http://geospatial-learn.readthedocs.io/en/latest/
Please use QGIS to visualise results as this is quicker than plotting them in the notebook.
Two Sentinel 2 subsets have been provided along with a pre-made model for detecting change. It is possible you could create your own model - the code is supplied to do so, but this would involve a bit of processing time!
The change detection method used here classifies the change directly rather than differencing two maps. The training data was collected over 1.5 years' worth of S2 data over some areas in Kenya.
End of explanation
import matplotlib.pyplot as plt
from geospatial_learn import geodata, learning, shape  # geodata provides the raster helpers (stack_ras, polygonize) used below
cd S2_change
Explanation: Before we begin!
In Jupyter, to see the docstring, which explains any function (provided someone has written it!), enter the function as you normally would, but put a question mark at the end and press Shift+Enter:
python
geodata.stack_ras?
A scrollable text will appear with an explanation
End of explanation
im1 = ('S2_mau_clip_dec2015.tif')
im2 = ('S2_mau_clip_dec2016.tif')
rfModel = 'Ch_MYE_cv5_rf.gz'
Explanation: change directory to where you have saved the files
Paths to the 2 images and model - please alter as appropriate in your own dir
End of explanation
stkRas = 'S2_ch_stk.tif'
geodata.stack_ras([im1,im2], stkRas)
Explanation: First thing to do is stack our 'before' and 'after' images
End of explanation
outMap = 'S2_ch_map'
Explanation: Next classify the temporal S2 stack
End of explanation
learning.classify_pixel_bloc(rfModel, stkRas, 8, outMap, blocksize=256)
Explanation: A note on model creation with k-fold cross validated grid search
If you wish to create your own model with training samples, train the model with the above data.
Please note this will take time with a large training set
We first define the parameters we wish to grid search over. The parameters below are just an example, It is of course possible for these to be more numerous at the cost of processing time. The time is a function of the number of possibilities per parameter. There are defaults in geospatial-learn, but it is recommended you define your own.
```python
params = {'n_estimators': [500], 'max_features': ['sqrt', 'log2'],
'min_samples_split':[5,10,20,50], 'min_samples_leaf': [5,10,20,50]}
```
When we execute the create_model function we get a summary of the no of model fits
'Fitting 5 folds for each of 18 candidates, totalling 90 fits'
I have fixed the n_estimators (trees) at 500 below but this could be varied also.
For a full list of params see:
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
We also need a model path to save to:
python
outModel = 'path/to/mymodel.gz'
Then finally run the model calibration:
python
learning.create_model(trainPix, outPixmodel, clf='rf', cv=3, params=params)
End of explanation
geodata.polygonize(outMap+'.tif', outMap)
Explanation: Lastly, polygonise the thematic raster for visualisation in QGIS
There is a style file available for this in the zip called 'Change_style.qml'.
For those not familiar with Python, the line below uses some string concatenation, out of laziness, to build the file names.
End of explanation
learning.prob_pixel_bloc(rfModel, stkRas, 8, outMap+'prob',7, blocksize=256)
Explanation: As well as a thematic map, we can produce a multiband map of class probabilities with the following function
python
learning.prob_pixel_bloc(rfModel, stkRas, 8, probMap, 8, blocksize=256)
The input variables are the same as the classify function except we also input the number of classes (7 in this case)
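As a follow-up sketch (an addition to the tutorial; it assumes the probability raster was written as outMap + 'prob.tif' and that rasterio is available), the multiband probabilities can be collapsed into a single per-pixel confidence layer:
```python
import numpy as np
import rasterio

with rasterio.open(outMap + 'prob.tif') as src:
    probs = src.read()          # shape: (n_classes, rows, cols)
    profile = src.profile

confidence = probs.max(axis=0)  # probability of the winning class per pixel
profile.update(count=1, dtype='float32')

with rasterio.open(outMap + '_confidence.tif', 'w', **profile) as dst:
    dst.write(confidence.astype('float32'), 1)
```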
End of explanation
geodata.temporal_comp?
learning.plot_feature_importances(rfModel, ['b','g', 'r', 'nir','b1','g1', 'r1', 'nir1'])
learning.plot_feature_importances?
Explanation: Check the results in QGIS!
End of explanation |
15,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sealevel monitor
This document is used to monitor the current sea level along the Dutch coast. The sea level is measured using a number of tide gauges. Six long running tide gauges are considered "main stations". The mean of these stations is used to estimate the "current sea-level rise". The measurements since 1890 are taken into account. Measurements before that are considered less valid because the Amsterdam Ordnance Datum was not yet normalized.
Step1: The global collection of tide gauge records at the PSMSL is used to access the data. The other way to access the data is to ask the service desk data at Rijkswaterstaat. There are two types of datasets the "Revised Local Reference" and "Metric". For the Netherlands the difference is that the "Revised Local Reference" undoes the corrections from the NAP correction in 2014, to get a consistent dataset.
Step5: Now that we have defined which tide gauges we are monitoring we can start downloading the relevant data.
Step6: Now that we have all data downloaded we can compute the mean.
Step7: Methods
Now we can define the statistical model. The "current sea-level rise" is defined by the following formula. Please note that the selected epoch of 1970 is arbitrary.
$
H(t) = a + b_{trend}(t-1970) + b_u\cos(2\pi\frac{t - 1970}{18.613}) + b_v\sin(2\pi\frac{t - 1970}{18.613})
$
The terms are refered to as Constant ($a$), Trend ($b_{trend}$), Nodal U ($b_u$) and Nodal V ($b_v$).
Alternative models are used to detect if sea-level rise is increasing. These models include the broken linear model, defined by a possible change in trend starting at 1993. This timespan is the start of the "satellite era" (start of TOPEX/Poseidon measurements), it is also often referred to as the start of acceleration because the satellite measurements tend to show a higher rate of sea level than the "tide-gauge era" (1900-2000). If this model fits better than the linear model, one could say that there is a "increase in sea-level rise".
$
H(t) = a + b_{trend}(t-1970) + b_{broken}(t > 1993)*(t-1993) + b_{u}\cos(2\pi\frac{t - 1970}{18.613}) + b_{v}\sin(2\pi\frac{t - 1970}{18.613})
$
Another way to look at increased sea-level rise is to look at sea-level acceleration. To detect sea-level acceleration one can use a quadratic model.
$
H(t) = a + b_{trend}(t-1970) + b_{quadratic}(t - 1970)*(t-1970) + b_{u}\cos(2\pi\frac{t - 1970}{18.613}) + b_{v}\sin(2\pi\frac{t - 1970}{18.613})
$
Step8: Is there a sea-level acceleration?
The following section computes two common models to detect sea-level acceleration. The broken linear model expects that sea level has been rising faster since 1990. The quadratic model assumes that the sea-level is accelerating continuously. Both models are compared to the linear model. The extra terms are tested for significance and the AIC is computed to see which model is "better".
Step9: Conclusions
Below are some statements that depend on the output calculated above. | Python Code:
# this is a list of packages that are used in this notebook
# these come with python
import io
import zipfile
import functools
# you can install these packages using pip or anaconda
# (requests numpy pandas bokeh pyproj statsmodels)
# for downloading
import requests
# computation libraries
import numpy as np
import pandas
# coordinate systems
import pyproj
# statistics
import statsmodels.api as sm
# plotting
import bokeh.charts
import bokeh.io
import bokeh.plotting
import bokeh.tile_providers
import bokeh.palettes
# displaying things
from ipywidgets import Image
import IPython.display
# Some coordinate systems
WEBMERCATOR = pyproj.Proj(init='epsg:3857')
WGS84 = pyproj.Proj(init='epsg:4326')
# If this notebook is not showing up with figures, you can use the following url:
# https://nbviewer.ipython.org/github/openearth/notebooks/blob/master/sealevelmonitor.ipynb
bokeh.io.output_notebook()
Explanation: Sealevel monitor
This document is used to monitor the current sea level along the Dutch coast. The sea level is measured using a number of tide gauges. Six long running tide gauges are considered "main stations". The mean of these stations is used to estimate the "current sea-level rise". The measurements since 1890 are taken into account. Measurements before that are considered less valid because the Amsterdam Ordnance Datum was not yet normalized.
End of explanation
urls = {
'metric_monthly': 'http://www.psmsl.org/data/obtaining/met.monthly.data/met_monthly.zip',
'rlr_monthly': 'http://www.psmsl.org/data/obtaining/rlr.annual.data/rlr_monthly.zip',
'rlr_annual': 'http://www.psmsl.org/data/obtaining/rlr.annual.data/rlr_annual.zip'
}
dataset_name = 'rlr_annual'
# these compute the rlr back to NAP (ignoring the undoing of the NAP correction)
main_stations = {
20: {
'name': 'Vlissingen',
'rlr2nap': lambda x: x - (6976-46)
},
22: {
'name': 'Hoek van Holland',
'rlr2nap': lambda x:x - (6994 - 121)
},
23: {
'name': 'Den Helder',
'rlr2nap': lambda x: x - (6988-42)
},
24: {
'name': 'Delfzijl',
'rlr2nap': lambda x: x - (6978-155)
},
25: {
'name': 'Harlingen',
'rlr2nap': lambda x: x - (7036-122)
},
32: {
'name': 'IJmuiden',
'rlr2nap': lambda x: x - (7033-83)
}
}
# the main stations are defined by their ids
main_stations_idx = list(main_stations.keys())
main_stations_idx
# download the zipfile
resp = requests.get(urls[dataset_name])
# we can read the zipfile
stream = io.BytesIO(resp.content)
zf = zipfile.ZipFile(stream)
# this list contains a table of
# station ID, latitude, longitude, station name, coastline code, station code, and quality flag
csvtext = zf.read('{}/filelist.txt'.format(dataset_name))
stations = pandas.read_csv(
io.BytesIO(csvtext),
sep=';',
names=('id', 'lat', 'lon', 'name', 'coastline_code', 'station_code', 'quality'),
converters={
'name': str.strip,
'quality': str.strip
}
)
stations = stations.set_index('id')
# the dutch stations in the PSMSL database, make a copy
# or use stations.coastline_code == 150 for all dutch stations
selected_stations = stations.loc[main_stations_idx].copy()
# set the main stations, this should be a list of 6 stations
selected_stations
# show all the stations on a map
# compute the bounds of the plot
sw = (50, -5)
ne = (55, 10)
# transform to web mercator
sw_wm = pyproj.transform(WGS84, WEBMERCATOR, sw[1], sw[0])
ne_wm = pyproj.transform(WGS84, WEBMERCATOR, ne[1], ne[0])
# create a plot
fig = bokeh.plotting.figure(tools='pan, wheel_zoom', plot_width=600, plot_height=200, x_range=(sw_wm[0], ne_wm[0]), y_range=(sw_wm[1], ne_wm[1]))
fig.axis.visible = False
# add some background tiles
fig.add_tile(bokeh.tile_providers.STAMEN_TERRAIN)
# add the stations
x, y = pyproj.transform(WGS84, WEBMERCATOR, np.array(stations.lon), np.array(stations.lat))
fig.circle(x, y)
x, y = pyproj.transform(WGS84, WEBMERCATOR, np.array(selected_stations.lon), np.array(selected_stations.lat))
_ = fig.circle(x, y, color='red')
# show the plot
bokeh.io.show(fig)
Explanation: The global collection of tide gauge records at the PSMSL is used to access the data. The other way to access the data is to ask the data service desk at Rijkswaterstaat. There are two types of datasets: the "Revised Local Reference" and the "Metric" data. For the Netherlands the difference is that the "Revised Local Reference" undoes the NAP correction of 2014, to get a consistent dataset.
End of explanation
# each station has a number of files that you can look at.
# here we define a template for each filename
# stations that we are using for our computation
# define the name formats for the relevant files
names = {
'datum': '{dataset}/RLR_info/{id}.txt',
'diagram': '{dataset}/RLR_info/{id}.png',
'url': 'http://www.psmsl.org/data/obtaining/rlr.diagrams/{id}.php',
'data': '{dataset}/data/{id}.rlrdata',
'doc': '{dataset}/docu/{id}.txt',
'contact': '{dataset}/docu/{id}_auth.txt'
}
def get_url(station, dataset):
    """return the url of the station information (diagram and datum)"""
info = dict(
dataset=dataset,
id=station.name
)
url = names['url'].format(**info)
return url
# fill in the dataset parameter using the global dataset_name
f = functools.partial(get_url, dataset=dataset_name)
# compute the url for each station
selected_stations['url'] = selected_stations.apply(f, axis=1)
selected_stations
def missing2nan(value, missing=-99999):
convert the value to nan if the float of value equals the missing value
value = float(value)
if value == missing:
return np.nan
return value
def get_data(station, dataset):
    """get data for the station (pandas record) from the dataset (url)"""
info = dict(
dataset=dataset,
id=station.name
)
bytes = zf.read(names['data'].format(**info))
df = pandas.read_csv(
io.BytesIO(bytes),
sep=';',
names=('year', 'height', 'interpolated', 'flags'),
converters={
"height": lambda x: main_stations[station.name]['rlr2nap'](missing2nan(x)),
"interpolated": str.strip,
}
)
df['station'] = station.name
return df
# get data for all stations
f = functools.partial(get_data, dataset=dataset_name)
# look up the data for each station
selected_stations['data'] = [f(station) for _, station in selected_stations.iterrows()]
# we now have data for each station
selected_stations[['name', 'data']]
Explanation: Now that we have defined which tide gauges we are monitoring we can start downloading the relevant data.
End of explanation
# compute the mean
grouped = pandas.concat(selected_stations['data'].tolist())[['year', 'height']].groupby('year')
mean_df = grouped.mean().reset_index()
# filter out non-trusted part (before NAP)
mean_df = mean_df[mean_df['year'] >= 1890].copy()
# these are the mean waterlevels
mean_df.tail()
# show all the stations, including the mean
title = 'Sea-surface height for Dutch tide gauges [{year_min} - {year_max}]'.format(
year_min=mean_df.year.min(),
year_max=mean_df.year.max()
)
fig = bokeh.plotting.figure(title=title, x_range=(1860, 2020), plot_width=900, plot_height=400)
colors = bokeh.palettes.Accent6
for color, (id_, station) in zip(colors, selected_stations.iterrows()):
data = station['data']
fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.5)
fig.line(mean_df.year, mean_df.height, line_width=3, alpha=0.7, color='black', legend='Mean')
fig.legend.location = "bottom_right"
fig.yaxis.axis_label = 'waterlevel [mm] above NAP'
fig.xaxis.axis_label = 'year'
bokeh.io.show(fig)
Explanation: Now that we have all data downloaded we can compute the mean.
End of explanation
# define the statistical model
y = mean_df['height']
X = np.c_[
mean_df['year']-1970,
np.cos(2*np.pi*(mean_df['year']-1970)/18.613),
np.sin(2*np.pi*(mean_df['year']-1970)/18.613)
]
X = sm.add_constant(X)
model = sm.OLS(y, X)
fit = model.fit()
fit.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Nodal U', 'Nodal V'])
# things to check:
# Durbin Watson should be >1 for no worries, >2 for no autocorrelation
# JB should be non-significant for normal residuals
# abs(x2.t) + abs(x3.t) should be > 3, otherwise adding nodal is not useful
fig = bokeh.plotting.figure(x_range=(1860, 2020), plot_width=900, plot_height=400)
for color, (id_, station) in zip(colors, selected_stations.iterrows()):
data = station['data']
fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.8)
fig.circle(mean_df.year, mean_df.height, line_width=3, legend='Mean', color='black', alpha=0.5)
fig.line(mean_df.year, fit.predict(), line_width=3, legend='Current')
fig.legend.location = "bottom_right"
fig.yaxis.axis_label = 'waterlevel [mm] above N.A.P.'
fig.xaxis.axis_label = 'year'
bokeh.io.show(fig)
Explanation: Methods
Now we can define the statistical model. The "current sea-level rise" is defined by the following formula. Please note that the selected epoch of 1970 is arbitrary.
$
H(t) = a + b_{trend}(t-1970) + b_u\cos(2\pi\frac{t - 1970}{18.613}) + b_v\sin(2\pi\frac{t - 1970}{18.613})
$
The terms are referred to as Constant ($a$), Trend ($b_{trend}$), Nodal U ($b_u$) and Nodal V ($b_v$).
Alternative models are used to detect if sea-level rise is increasing. These models include the broken linear model, defined by a possible change in trend starting at 1993. This timespan is the start of the "satellite era" (start of TOPEX/Poseidon measurements); it is also often referred to as the start of acceleration because the satellite measurements tend to show a higher rate of sea-level rise than the "tide-gauge era" (1900-2000). If this model fits better than the linear model, one could say that there is an "increase in sea-level rise".
$
H(t) = a + b_{trend}(t-1970) + b_{broken}(t > 1993)*(t-1993) + b_{u}\cos(2\pi\frac{t - 1970}{18.613}) + b_{v}\sin(2\pi\frac{t - 1970}{18.613})
$
Another way to look at increased sea-level rise is to look at sea-level acceleration. To detect sea-level acceleration one can use a quadratic model.
$
H(t) = a + b_{trend}(t-1970) + b_{quadratic}(t - 1970)*(t-1970) + b_{u}\cos(2\pi\frac{t - 1970}{18.613}) + b_{v}\sin(2\pi\frac{t - 1970}{18.613})
$
End of explanation
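Since all three models below share the same linear plus nodal-cycle terms, a small helper (an addition, not part of the original notebook) can build that common part of the design matrix:
```python
def nodal_design_matrix(years, epoch=1970, period=18.613):
    """Constant + trend + 18.6-year nodal cycle terms shared by all models."""
    t = years - epoch
    X = np.c_[t,
              np.cos(2 * np.pi * t / period),
              np.sin(2 * np.pi * t / period)]
    return sm.add_constant(X)
```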
# define the statistical model
y = mean_df['height']
X = np.c_[
mean_df['year']-1970,
(mean_df['year'] > 1993) * (mean_df['year'] - 1993),
np.cos(2*np.pi*(mean_df['year']-1970)/18.613),
np.sin(2*np.pi*(mean_df['year']-1970)/18.613)
]
X = sm.add_constant(X)
model_broken_linear = sm.OLS(y, X)
fit_broken_linear = model_broken_linear.fit()
# define the statistical model
y = mean_df['height']
X = np.c_[
mean_df['year']-1970,
(mean_df['year'] - 1970) * (mean_df['year'] - 1970),
np.cos(2*np.pi*(mean_df['year']-1970)/18.613),
np.sin(2*np.pi*(mean_df['year']-1970)/18.613)
]
X = sm.add_constant(X)
model_quadratic = sm.OLS(y, X)
fit_quadratic = model_quadratic.fit()
fit_broken_linear.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Trend(year > 1993)', 'Nodal U', 'Nodal V'])
fit_quadratic.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Trend**2', 'Nodal U', 'Nodal V'])
fig = bokeh.plotting.figure(x_range=(1860, 2020), plot_width=900, plot_height=400)
for color, (id_, station) in zip(colors, selected_stations.iterrows()):
data = station['data']
fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.8)
fig.circle(mean_df.year, mean_df.height, line_width=3, legend='Mean', color='black', alpha=0.5)
fig.line(mean_df.year, fit.predict(), line_width=3, legend='Current')
fig.line(mean_df.year, fit_broken_linear.predict(), line_width=3, color='#33bb33', legend='Broken')
fig.line(mean_df.year, fit_quadratic.predict(), line_width=3, color='#3333bb', legend='Quadratic')
fig.legend.location = "top_left"
fig.yaxis.axis_label = 'waterlevel [mm] above N.A.P.'
fig.xaxis.axis_label = 'year'
bokeh.io.show(fig)
Explanation: Is there a sea-level acceleration?
The following section computes two common models to detect sea-level acceleration. The broken linear model expects that sea level has been rising faster since 1993. The quadratic model assumes that sea level is accelerating continuously. Both models are compared to the linear model. The extra terms are tested for significance and the AIC is computed to see which model is "better".
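A compact way to put those comparisons side by side (an illustrative addition; it reuses the fits computed above):
```python
comparison = pandas.DataFrame({
    'aic': [fit.aic, fit_broken_linear.aic, fit_quadratic.aic],
    'extra term p-value': [np.nan,
                           fit_broken_linear.pvalues['x2'],
                           fit_quadratic.pvalues['x2']],
}, index=['linear', 'broken linear', 'quadratic'])
comparison
```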
End of explanation
msg = '''The current average waterlevel above NAP (in cm),
based on the 6 main tide gauges for the year {year} is {height:.1f} cm.
The current sea-level rise is {rate:.0f} cm/century'''
print(msg.format(year=mean_df['year'].iloc[-1], height=fit.predict()[-1]/10.0, rate=fit.params.x1*100.0/10))
if (fit.aic < fit_broken_linear.aic):
print('The linear model is a higher quality model (smaller AIC) than the broken linear model.')
else:
print('The broken linear model is a higher quality model (smaller AIC) than the linear model.')
if (fit_broken_linear.pvalues['x2'] < 0.05):
print('The trend break is bigger than we would have expected under the assumption that there was no trend break.')
else:
print('Under the assumption that there is no trend break, we would have expected a trend break as big as we have seen.')
if (fit.aic < fit_quadratic.aic):
print('The linear model is a higher quality model (smaller AIC) than the quadratic model.')
else:
print('The quadratic model is a higher quality model (smaller AIC) than the linear model.')
if (fit_quadratic.pvalues['x2'] < 0.05):
print('The quadratic term is bigger than we would have expected under the assumption that there was no quadraticness.')
else:
print('Under the assumption that there is no quadraticness, we would have expected a quadratic term as big as we have seen.')
Explanation: Conclusions
Below are some statements that depend on the output calculated above.
End of explanation |
15,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
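For example, a one-line normalisation step (hypothetical here, since this dataset is already lower case) could be:
```python
reviews[0] = reviews[0].str.lower()
```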
End of explanation
from collections import Counter
total_counts = Counter() # bag of words here
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}## create the word-to-index dictionary here
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
if word in word2idx:
idx = word2idx[word]
word_vector[idx] += 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since not all words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)
Y.values[[0,1]]
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
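A quick sanity check on the resulting shapes (an addition; the counts shown are what a 90/10 split of 25,000 reviews gives):
```python
print(trainX.shape, trainY.shape)   # (22500, 10000) (22500, 2)
print(testX.shape, testY.shape)     # (2500, 10000) (2500, 2)
```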
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000])
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
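If you want to keep a trained network around between sessions, the tflearn DNN wrapper can persist it (the file name here is just an example):
```python
model.save('sentiment_model.tfl')   # writes TensorFlow checkpoint files
model.load('sentiment_model.tfl')   # restores the weights into a compatible model
```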
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
sentence = 'terrible ugly'
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
15,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Use-case
Step1: Storing of video data
Step2: Tracking data
Step3: Addtional Information | Python Code:
import nixio as nix
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from utils.notebook import print_stats
from utils.video_player import Playback
nix_file = nix.File.open('data/tracking_data.h5', nix.FileMode.ReadOnly)
print_stats(nix_file.blocks)
b = nix_file.blocks[0]
print_stats(b.data_arrays)
print_stats(b.multi_tags)
Explanation: Use-case: Fish tracking
Classical conditioning experiments of weakly electric fish Apteronotus albifrons
-- Benda Lab, University of Tübingen, Germany --
Context:
Fish are trained to choose one electrical stimulus.
Trials are videotaped @25Hz using an IR camera.
Fish are tracked, position and orientation extracted.
End of explanation
video = [a for a in b.data_arrays if a.name == "video"][0]
fig = plt.figure(facecolor='white', figsize=(1024 / 90, 768 / 90), dpi=90)
pb = Playback(fig,video)
pb.start()
Explanation: Storing of video data:
Movies are stored as 3D or 4D DataArrays, depending on the number of color channels.
End of explanation
# get the tag linking tracking and video data
tag = [t for t in b.multi_tags if t.name == "tracking"][0]
fig = plt.figure(facecolor='white', figsize=(1024 / 90, 768 / 90), dpi=90)
pb = Playback(fig, video, tracking_tag=tag)
pb.start()
Explanation: Tracking data:
Tracking data is stored as positions in the 4D matrix; the fourth dimension specifies the time (frame) at which an object was tracked. The link between the video data and the position data is established using a MultiTag entity.
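As a small illustrative addition (the exact column layout of the positions array and the index of the orientation Feature are assumptions about this particular file), the raw tracking results can be read directly from the MultiTag:
```python
positions = tag.positions[:]             # tracked positions, one row per tracked frame
orientations = tag.features[0].data[:]   # orientation values stored as a Feature
print(positions.shape, orientations.shape)
```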
End of explanation
fig = plt.figure(facecolor='white', figsize=(1024 / 90, 768 / 90), dpi=90)
pb = Playback(fig, video, tracking_tag=tag, show_orientation=True)
pb.start()
nix_file.close()
Explanation: Additional Information:
During tracking, additional information, i.e. the fish's orientation, is gathered: for each position there is also an orientation. This information is stored as a Feature of the tracking.
End of explanation |
15,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seismic petrophysics, part 2
In part 1 we loaded some logs and used a data framework called Pandas pandas to manage them. We made a lithology–fluid class (LFC) log, and used it to color a crossplot. This time, we take the workflow further with fluid replacement modeling based on Gassmann’s equation. This is just an introduction; see Wang (2001) and Smith et al. (2003) for comprehensive overviews.
Fluid replacement modeling
Fluid replacement modeling (FRM in short), based on Gassmann's equation, is one of the key activities for any kind of quantitative work (see Wang, 2001 and Smith et al., 2003, for an exhaustive overview).
It is used to model the response of elastic logs (i.e., Vp, Vs and density) in different fluid conditions, thus allowing to study how much a rock would change in terms of velocity (or impedances) if it was filled with gas instead of brine for example; but what it also does is to bring all elastic logs to a common fluid denominator to focus only on lithological variations, disregarding fluid effects.
The inputs to FRM are $k_s$ and $\mu_s$ (saturated bulk and shear moduli which we can get from recorded Vps, Vs and density logs), $k_d$ and $\mu_d$ (dry-rock bulk and shear moduli), $k_0$ (mineral bulk modulus), $k_f$ (fluid bulk modulus) and porosity, $\varphi$. Reasonable estimates of mineral and fluid bulk moduli and porosity are easily computed and are shown below. The real unknowns, what is arguably the core issue of rock physics, are the dry-rock moduli.
And here we come to Gassmann's equation; I can use it in its inverse form to calculate $k_d$
Step1: What this function does is to get the relevant inputs which are
Step2: I can use the same function to also compute the fluid bulk modulus log which is usually done via Reuss average (the lower bound k_l in the vrh function above)
Step3: Then I calculate the original (insitu) fluid density rho_fl and bulk modulus k_fl, and the average mineral bulk modulus k0
Step4: ...and put it all together using the frm function defined above
Step5: Now I create 3 sets of copies of the original elastic logs stored in my DataFrame logs (logs.VP, logs.VSB, logs.RHO) for the three fluid scenarios investigated (and I will append an appropriate suffix to identify these 3 cases, i.e. _FRMB for brine, _FRMG for gas and _FRMO for oil, respectively). These three sets will be placeholders to store the values of the actual fluid-replaced logs (vpb, vsb, rhob, etc.).
To do that I need once more to define the flag logs brine_sand, oil_sand and shale as discussed in Part 1
Step6: The syntax I use to do this is
Step7: Finally, I will add three more LFC logs that will be companions to the new fluid-replaced logs.
The LFC log for brine-replaced logs will be always 1 whenever there's sand, because fluid replacement will have acted on all sand points and replaced whatever fluid we had originally with brine. Same thing for LFC_O and LFC_G (LFC for oil and gas-replaced logs)
Step8: And this is the same summary plot that I have used above, updated to show the fluid changes in the elastic logs Ip and Vp/Vs. It is also zoomed into the reservoir between 2150 and 2200 m, and the LFC log is the original one, i.e. it reflects the insitu case.
Step9: Let's have a look at the results in the Ip versus Vp/Vs crossplot domain; I will now plot 4 different plots to compare the initial situation to the results of the 4 fluid replacements
Step10: statistical analysis
generalities
After FRM I have created an augmented dataset. What I need to do now is to do a further abstraction, i.e. moving away from the intricacies and local irregularities of the real data with the final goal of creating a fully synthetic dataset representing an idealized version of a reservoir complex.
To do that I will do a statistical analysis to describe tendency, dispersion and correlation between certain elastic properties for each litho-fluid class.
Central tendency is simply described by calculating the mean values of some desired elastic property for all the existing classes; dispersion and correlation are summarised with the covariance matrix, which can be written like this (for two generic variables X and Y)
Step11: What I have done here is to first define 3 lists containing the names of the logs we want to extract (lines 1-4). Then I extract into 4 separate temporary DataFrames (lines 5-8) different sets of logs, e.g. ww0 will contain only the logs LFC,IP,VPVS, and ww1 will hold only LFC_B,IP_FRMB, VPVS_FRMB. I will also rename the fluid-replaced logs to have the same name as my insitu logs using ww1.columns=[lognames0]. In this way, when I merge all these 3 DataFrame together (line 9) I will have created a megalog (ww) that includes all values of Ip and Vp/Vs that are both measured for a certain facies, and synthetically created through fluid substitution.
In other words, I have now a superpowered, data-augmented megalog.
Now, on to step 2
Step12: With the code above I simply build the headers for a Pandas DataFrame to store mean and covariances for each class.
I will now create a DataFrame which is dynamically dimensioned and made of nlfc rows, i.e. one row for each facies, and 1+n+m+1 columns, where n is the number of mean columns and m is the length of the linearized covariance matrix; for our sample case where we have only two properties this means
Step13: This is what the stat DataFrame looks like now
Step14: So it's like an empty box, made of four rows (because we have 4 classes
Step15: Now let's look back at stat and see how it has been filled up with all the information I need
Step16: I can also interrogate stat to know for example the average Ip for the litho-fluid class 2 (oil sands)
Step17: Obviously I need to remember that the mean columns are named after the logs they summarise, so the average Ip is found by querying the column IP_mean (VPVS_mean holds the average values for the second property, in this case Vp/Vs).
If I were working with 3 properties, e.g. Ip, Vp/Vs and density, then the average density value for a hypothetical class 5 would be
Step18: creation of synthetic datasets
I can now use all this information to create a brand new synthetic dataset that will replicate the average behaviour of the reservoir complex and at the same time overcome typical problems when using real data like undersampling of a certain class, presence of outliers, spurious occurrence of anomalies.
To create the synthetic datasets I use a Monte Carlo simulation relying on a multivariate normal distribution to draw samples that are random but correlated in the elastic domain of choice (Ip and Vp/Vs).
Step19: First I define how many samples per class I want (line 1), then I create an empty Pandas DataFrame (lines 3-5) dimensioned like this
Step20: And these are the results, comparing the original, augmented dataset (i.e. the results of fluid replacement merged with the insitu log, all stored in the DataFrame ww defined earlier when calculating the statistics) with the newly created synthetic data | Python Code:
def frm(vp1, vs1, rho1, rho_f1, k_f1, rho_f2, k_f2, k0, phi):
vp1 = vp1 / 1000.
vs1 = vs1 / 1000.
mu1 = rho1 * vs1**2.
k_s1 = rho1 * vp1**2 - (4./3.)*mu1
# The dry rock bulk modulus
kdry = (k_s1 * ((phi*k0)/k_f1+1-phi)-k0) / ((phi*k0)/k_f1+(k_s1/k0)-1-phi)
# Now we can apply Gassmann to get the new values
k_s2 = kdry + (1- (kdry/k0))**2 / ( (phi/k_f2) + ((1-phi)/k0) - (kdry/k0**2) )
rho2 = rho1-phi * rho_f1+phi * rho_f2
mu2 = mu1
vp2 = np.sqrt(((k_s2+(4./3)*mu2))/rho2)
vs2 = np.sqrt((mu2/rho2))
return vp2*1000, vs2*1000, rho2, k_s2
Explanation: Seismic petrophysics, part 2
In part 1 we loaded some logs and used a data framework called pandas to manage them. We made a lithology–fluid class (LFC) log, and used it to color a crossplot. This time, we take the workflow further with fluid replacement modeling based on Gassmann's equation. This is just an introduction; see Wang (2001) and Smith et al. (2003) for comprehensive overviews.
Fluid replacement modeling
Fluid replacement modeling (FRM for short), based on Gassmann's equation, is one of the key activities for any kind of quantitative work (see Wang, 2001 and Smith et al., 2003, for an exhaustive overview).
It is used to model the response of elastic logs (i.e., Vp, Vs and density) in different fluid conditions, thus allowing us to study how much a rock would change in terms of velocity (or impedances) if it were filled with gas instead of brine, for example; but what it also does is bring all elastic logs to a common fluid denominator, to focus only on lithological variations, disregarding fluid effects.
The inputs to FRM are $k_s$ and $\mu_s$ (saturated bulk and shear moduli, which we can get from recorded Vp, Vs and density logs), $k_d$ and $\mu_d$ (dry-rock bulk and shear moduli), $k_0$ (mineral bulk modulus), $k_f$ (fluid bulk modulus) and porosity, $\varphi$. Reasonable estimates of mineral and fluid bulk moduli and porosity are easily computed and are shown below. The real unknowns, what is arguably the core issue of rock physics, are the dry-rock moduli.
And here we come to Gassmann's equation; I can use it in its inverse form to calculate $k_d$:
$$ k_d = \frac{k_s \cdot ( \frac{\varphi k_0}{k_f} +1-\varphi) -k_0}{\frac {\varphi k_0}{k_f} + \frac{k_s}{k_0} -1-\varphi} $$
Then I use Gassmann's again in its direct form to calculate the saturated bulk modulus with the new fluid:
$$k_s = k_d + \frac { (1-\frac{k_d}{k_0})^2} { \frac{\varphi}{k_f} + \frac{1-\varphi}{k_0} - \frac{k_d}{k_0^2}}$$
Shear modulus is not affected by pore fluid so that it stays unmodified throughout the fluid replacement process.
$$\mu_s = \mu_d$$
Bulk density is defined via the following equation:
$$\rho = (1-\varphi) \cdot \rho_0 + \varphi \cdot \rho_f $$
We can put all this together into a Python function, calculating the elastic parameters for fluid 2, given the parameters for fluid 1, along with some other data about the fluids:
End of explanation
def vrh(volumes,k,mu):
f = np.array(volumes).T
k = np.resize(np.array(k),np.shape(f))
mu = np.resize(np.array(mu),np.shape(f))
k_u = np.sum(f*k, axis=1)
k_l = 1. / np.sum(f/k, axis=1)
mu_u = np.sum(f*mu, axis=1)
mu_l = 1. / np.sum(f/mu, axis=1)
k0 = (k_u+k_l) / 2.
mu0 = (mu_u+mu_l) / 2.
return k_u, k_l, mu_u, mu_l, k0, mu0
Explanation: What this function does is to get the relevant inputs which are:
vp1, vs1, rho1: measured Vp, Vs, and density (saturated with fluid 1)
rho_f1, k_f1: density and bulk modulus of fluid 1
rho_f2, k_f2: density and bulk modulus of fluid 2
k0: mineral bulk modulus
phi: porosity
And returns vp2, vs2, rho2, k_s2, which are respectively Vp, Vs, density and bulk modulus of the rock with fluid 2. Velocities are in m/s and densities in g/cm3.
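As a quick sanity check, frm can also be fed single scalar values; the numbers below are made up but plausible (a brine-filled sand substituted to gas), not values taken from the well data used later:
vp_test, vs_test, rho_test, k_test = frm(3200., 1700., 2.25,   # insitu Vp (m/s), Vs (m/s), density (g/cc)
                                         1.09, 2.8,            # fluid 1: brine density and bulk modulus
                                         0.25, 0.06,           # fluid 2: gas density and bulk modulus
                                         37., 0.25)            # mineral bulk modulus (GPa) and porosity
print(vp_test, vs_test, rho_test, k_test)                      # expect Vp and density to drop, Vs to rise slightly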
I have mentioned above the possibility to get estimates of mineral bulk modulus $k_0$; the thing to know is that another assumption in Gassmann's equation is that it works only on monomineralic rocks; an actual rock is always a mixture of different minerals. A good approximation to get a mixed mineralogy bulk modulus k0 is to use Voigt-Reuss-Hill averaging (check these notes by Jack Dvorkin for a more rigorous discussion or this wikipedia entry).
So I define another function, vrh, to do that:
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
logs = pd.read_csv('qsiwell2_lfc.csv')
rho_qz=2.65; k_qz=37; mu_qz=44 # mineral properties, quartz (i.e., sands)
rho_sh=2.81; k_sh=15; mu_sh=5 # mineral properties, clay (i.e., shales)
rho_b=1.09; k_b=2.8 # fluid properties, brine
rho_o=0.78; k_o=0.94 # fluid properties, oil
rho_g=0.25; k_g=0.06 # fluid properties, gas
Explanation: I can use the same function to also compute the fluid bulk modulus log which is usually done via Reuss average (the lower bound k_l in the vrh function above):
First I will load the Well 2 logs with the litho-fluid class log created in Part 1, then define the various mineral and fluid elastic constants:
End of explanation
# mineral mixture bulk and shear moduli, k0 and mu0
shale = logs.VSH.values
sand = 1 - shale - logs.PHI.values
shaleN = shale / (shale+sand) # normalized shale and sand volumes
sandN = sand / (shale+sand)
k_u, k_l, mu_u, mu_l, k0, mu0 = vrh([shaleN, sandN], [k_sh, k_qz], [mu_sh, mu_qz])
# fluid mixture bulk modulus, using the same vrh function but capturing the Reuss average (second output)
water = logs.SW.values
hc = 1 - logs.SW.values
tmp, k_fl, tmp, tmp, tmp, tmp = vrh([water, hc], [k_b, k_o], [0, 0])
# fluid mixture density
rho_fl = water*rho_b + hc*rho_o
Explanation: Then I calculate the original (insitu) fluid density rho_fl and bulk modulus k_fl, and the average mineral bulk modulus k0:
End of explanation
vpb, vsb, rhob, kb = frm(logs.VP, logs.VS, logs.RHO, rho_fl, k_fl, rho_b, k_b, k0, logs.PHI)
vpo, vso, rhoo, ko = frm(logs.VP, logs.VS, logs.RHO, rho_fl, k_fl, rho_o, k_o, k0, logs.PHI)
vpg, vsg, rhog, kg = frm(logs.VP, logs.VS, logs.RHO, rho_fl, k_fl, rho_g, k_g, k0, logs.PHI)
Explanation: ...and put it all together using the frm function defined above:
End of explanation
sand_cutoff = 0.20
brine_sand = ((logs.VSH <= sand_cutoff) & (logs.SW >= 0.9))
oil_sand = ((logs.VSH <= sand_cutoff) & (logs.SW < 0.9))
shale = (logs.VSH > sand_cutoff)
Explanation: Now I create 3 sets of copies of the original elastic logs stored in my DataFrame logs (logs.VP, logs.VS, logs.RHO) for the three fluid scenarios investigated (and I will append an appropriate suffix to identify these 3 cases, i.e. _FRMB for brine, _FRMG for gas and _FRMO for oil, respectively). These three sets will be placeholders to store the values of the actual fluid-replaced logs (vpb, vsb, rhob, etc.).
To do that I need once more to define the flag logs brine_sand, oil_sand and shale as discussed in Part 1:
End of explanation
logs['VP_FRMB'] = logs.VP
logs['VS_FRMB'] = logs.VS
logs['RHO_FRMB'] = logs.RHO
logs['VP_FRMB'][brine_sand|oil_sand] = vpb[brine_sand|oil_sand]
logs['VS_FRMB'][brine_sand|oil_sand] = vsb[brine_sand|oil_sand]
logs['RHO_FRMB'][brine_sand|oil_sand] = rhob[brine_sand|oil_sand]
logs['IP_FRMB'] = logs.VP_FRMB*logs.RHO_FRMB
logs['IS_FRMB'] = logs.VS_FRMB*logs.RHO_FRMB
logs['VPVS_FRMB'] = logs.VP_FRMB/logs.VS_FRMB
logs['VP_FRMO'] = logs.VP
logs['VS_FRMO'] = logs.VS
logs['RHO_FRMO'] = logs.RHO
logs['VP_FRMO'][brine_sand|oil_sand] = vpo[brine_sand|oil_sand]
logs['VS_FRMO'][brine_sand|oil_sand] = vso[brine_sand|oil_sand]
logs['RHO_FRMO'][brine_sand|oil_sand] = rhoo[brine_sand|oil_sand]
logs['IP_FRMO'] = logs.VP_FRMO*logs.RHO_FRMO
logs['IS_FRMO'] = logs.VS_FRMO*logs.RHO_FRMO
logs['VPVS_FRMO'] = logs.VP_FRMO/logs.VS_FRMO
logs['VP_FRMG'] = logs.VP
logs['VS_FRMG'] = logs.VS
logs['RHO_FRMG'] = logs.RHO
logs['VP_FRMG'][brine_sand|oil_sand] = vpg[brine_sand|oil_sand]
logs['VS_FRMG'][brine_sand|oil_sand] = vsg[brine_sand|oil_sand]
logs['RHO_FRMG'][brine_sand|oil_sand] = rhog[brine_sand|oil_sand]
logs['IP_FRMG'] = logs.VP_FRMG*logs.RHO_FRMG
logs['IS_FRMG'] = logs.VS_FRMG*logs.RHO_FRMG
logs['VPVS_FRMG'] = logs.VP_FRMG/logs.VS_FRMG
Explanation: The syntax I use to do this is:
logs['VP_FRMB'][brine_sand|oil_sand]=vpb[brine_sand|oil_sand]
Which means: copy the values from the output of fluid replacement (vpb, vsb, rhob, etc.) only where there's sand (vpb[brine_sand|oil_sand]), i.e. only where either of the flag logs brine_sand or oil_sand is True.
I also compute the additional elastic logs (acoustic and shear impedances IP, Is, and Vp/Vs ratio, VPVS) in their fluid-replaced version.
End of explanation
temp_lfc_b = np.zeros(np.shape(logs.VSH))
temp_lfc_b[brine_sand.values | oil_sand.values] = 1 # LFC is 1 when either brine_sand (brine sand flag) or oil_sand (oil) is True
temp_lfc_b[shale.values] = 4 # LFC 4=shale
logs['LFC_B'] = temp_lfc_b
temp_lfc_o = np.zeros(np.shape(logs.VSH))
temp_lfc_o[brine_sand.values | oil_sand.values] = 2 # LFC is now 2 when there's sand (brine_sand or oil_sand is True)
temp_lfc_o[shale.values] = 4 # LFC 4=shale
logs['LFC_O'] = temp_lfc_o
temp_lfc_g = np.zeros(np.shape(logs.VSH))
temp_lfc_g[brine_sand.values | oil_sand.values] = 3 # LFC 3=gas sand
temp_lfc_g[shale.values] = 4 # LFC 4=shale
logs['LFC_G'] = temp_lfc_g
Explanation: Finally, I will add three more LFC logs that will be companions to the new fluid-replaced logs.
The LFC log for brine-replaced logs will always be 1 whenever there's sand, because fluid replacement will have acted on all sand points and replaced whatever fluid we had originally with brine. Same thing for LFC_O and LFC_G (LFC for oil and gas-replaced logs): they will always be equal to 2 (oil) or 3 (gas) for all the sand samples. That translates into Python as follows:
End of explanation
import matplotlib.colors as colors
# 0=undef 1=bri 2=oil 3=gas 4=shale
ccc = ['#B3B3B3','blue','green','red','#996633',]
cmap_facies = colors.ListedColormap(ccc[0:len(ccc)], 'indexed')
ztop = 2150; zbot = 2200
ll = logs.ix[(logs.DEPTH>=ztop) & (logs.DEPTH<=zbot)]
cluster=np.repeat(np.expand_dims(ll['LFC'].values,1),100,1)
f, ax = plt.subplots(nrows=1, ncols=4, figsize=(8, 12))
ax[0].plot(ll.VSH, ll.DEPTH, '-g', label='Vsh')
ax[0].plot(ll.SW, ll.DEPTH, '-b', label='Sw')
ax[0].plot(ll.PHI, ll.DEPTH, '-k', label='phi')
ax[1].plot(ll.IP_FRMG, ll.DEPTH, '-r')
ax[1].plot(ll.IP_FRMB, ll.DEPTH, '-b')
ax[1].plot(ll.IP, ll.DEPTH, '-', color='0.5')
ax[2].plot(ll.VPVS_FRMG, ll.DEPTH, '-r')
ax[2].plot(ll.VPVS_FRMB, ll.DEPTH, '-b')
ax[2].plot(ll.VPVS, ll.DEPTH, '-', color='0.5')
im=ax[3].imshow(cluster, interpolation='none', aspect='auto',cmap=cmap_facies,vmin=0,vmax=4)
cbar=plt.colorbar(im, ax=ax[3])
# cbar.set_label('0=undef,1=brine,2=oil,3=gas,4=shale')
# cbar.set_ticks(range(0,4+1));
cbar.set_label((12*' ').join(['undef', 'brine', 'oil', 'gas', 'shale']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in ax[:-1]:
i.set_ylim(ztop,zbot)
i.invert_yaxis()
i.grid()
i.locator_params(axis='x', nbins=4)
ax[0].legend(fontsize='small', loc='lower right')
ax[0].set_xlabel("Vcl/phi/Sw"), ax[0].set_xlim(-.1,1.1)
ax[1].set_xlabel("Ip [m/s*g/cc]"), ax[1].set_xlim(3000,9000)
ax[2].set_xlabel("Vp/Vs"), ax[2].set_xlim(1.5,3)
ax[3].set_xlabel('LFC')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]); ax[3].set_xticklabels([]);
Explanation: And this is the same summary plot that I have used above, updated to show the fluid changes in the elastic logs Ip and Vp/Vs. It is also zoomed into the reservoir between 2150 and 2200 m, and the LFC log is the original one, i.e. it reflects the insitu case.
End of explanation
f, ax = plt.subplots(nrows=1, ncols=4, sharey=True, sharex=True, figsize=(16, 4))
ax[0].scatter(logs.IP,logs.VPVS,20,logs.LFC,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[1].scatter(logs.IP_FRMB,logs.VPVS_FRMB,20,logs.LFC_B,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[2].scatter(logs.IP_FRMO,logs.VPVS_FRMO,20,logs.LFC_O,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[3].scatter(logs.IP_FRMG,logs.VPVS_FRMG,20,logs.LFC_G,marker='o',edgecolors='none',alpha=0.5,cmap=cmap_facies,vmin=0,vmax=4)
ax[0].set_xlim(3000,9000); ax[0].set_ylim(1.5,3);
ax[0].set_title('original data');
ax[1].set_title('FRM to brine');
ax[2].set_title('FRM to oil');
ax[3].set_title('FRM to gas');
for i in ax: i.grid()
Explanation: Let's have a look at the results in the Ip versus Vp/Vs crossplot domain; I will now plot 4 different plots to compare the initial situation to the results of the 3 fluid replacements:
End of explanation
lognames0 = ['LFC','IP','VPVS']
lognames1 = ['LFC_B','IP_FRMB', 'VPVS_FRMB']
lognames2 = ['LFC_O','IP_FRMO', 'VPVS_FRMO']
lognames3 = ['LFC_G','IP_FRMG', 'VPVS_FRMG']
ww0 = logs[pd.notnull(logs.LFC)].ix[:,lognames0];
ww1 = logs[pd.notnull(logs.LFC)].ix[:,lognames1]; ww1.columns=[lognames0]
ww2 = logs[pd.notnull(logs.LFC)].ix[:,lognames2]; ww2.columns=[lognames0]
ww3 = logs[pd.notnull(logs.LFC)].ix[:,lognames3]; ww3.columns=[lognames0]
ww = pd.concat([ww0, ww1, ww2, ww3])
import itertools
list(itertools.product(['a', 'b'], ['a', 'b']))
Explanation: statistical analysis
generalities
After FRM I have created an augmented dataset. What I need to do now is a further abstraction, i.e. moving away from the intricacies and local irregularities of the real data, with the final goal of creating a fully synthetic dataset representing an idealized version of a reservoir complex.
To do that I will do a statistical analysis to describe tendency, dispersion and correlation between certain elastic properties for each litho-fluid class.
Central tendency is simply described by calculating the mean values of some desired elastic property for all the existing classes; dispersion and correlation are summarised with the covariance matrix, which can be written like this (for two generic variables X and Y):
$$\begin{bmatrix} \mathrm{var}_X & \mathrm{cov}_{XY} \\ \mathrm{cov}_{XY} & \mathrm{var}_Y \end{bmatrix}$$
if I had three variables instead:
$$\begin{bmatrix} \mathrm{var}_X & \mathrm{cov}_{XY} & \mathrm{cov}_{XZ} \\ \mathrm{cov}_{XY} & \mathrm{var}_Y & \mathrm{cov}_{YZ} \\ \mathrm{cov}_{XZ} & \mathrm{cov}_{YZ} & \mathrm{var}_Z \end{bmatrix}$$
Where var_X is the variance of property X, i.e. a measure of dispersion about the mean, while the covariance cov_XY is a measure of similarity between two properties X and Y. A detailed description of the covariance matrix can be found at this wikipedia entry.
Python allows me to easily perform these calculations, but what I need is a way to store this matrix so that it is easily accessible (which will be another Pandas DataFrame). To do this I first linearize (flatten) the matrices row by row, to get something like this (example for the 2-variable scenario):
var_X, cov_XY, cov_XY, var_Y
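A tiny numeric illustration of what gets stored (purely synthetic numbers, unrelated to the well data):
import numpy as np
ip   = np.array([6000., 6200., 6400., 6100.])   # a few fake Ip values
vpvs = np.array([2.10, 2.05, 2.00, 2.08])       # the matching fake Vp/Vs values
cov = np.cov(np.vstack([ip, vpvs]))             # the 2x2 covariance matrix
print(cov)                                      # [[var_Ip, cov], [cov, var_VpVs]]
print(cov.flatten())                            # the linearized version stored in one row of stat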
implementation in python
For the rest of this exercise I will work with the two variables used so far, i.e. Ip and Vp/Vs.
First I need to prepare a few things to make the procedure easily extendable to other situations (e.g., using more than two variables):
collect all the insitu and fluid-replaced logs together to create a megalog;
create a Pandas DataFrame to hold statistical information for all the litho-fluid classes.
Step 1 works like this:
End of explanation
nlfc = int(ww.LFC.max())
nlogs = len(ww.columns) - 1 # my merged data always contain a facies log...
# ...that needs to be excluded from the statistical analysis
means, covs = [], []
for col in ww.columns[1:]:
means.append(col + '_mean')
import itertools
covariances = list(itertools.product(ww.columns[1:], ww.columns[1:]))
print(covariances)
for element in covariances:
if element[0] == element[1]:
covs.append(element[0] + '_var')
else:
covs.append(element[0] + '-' + element[1] + '_cov')
covs
Explanation: What I have done here is to first define 4 lists containing the names of the logs we want to extract (lines 1-4). Then I extract into 4 separate temporary DataFrames (lines 5-8) different sets of logs, e.g. ww0 will contain only the logs LFC,IP,VPVS, and ww1 will hold only LFC_B,IP_FRMB, VPVS_FRMB. I will also rename the fluid-replaced logs to have the same name as my insitu logs using ww1.columns=[lognames0]. In this way, when I merge all these 4 DataFrames together (line 9) I will have created a megalog (ww) that includes all values of Ip and Vp/Vs that are both measured for a certain facies, and synthetically created through fluid substitution.
In other words, I now have a superpowered, data-augmented megalog.
Now, on to step 2:
End of explanation
stat = pd.DataFrame(data=None,
columns=['LFC']+means+covs+['SAMPLES'],
index=np.arange(nlfc))
stat['LFC'] = range(1, nlfc+1)
Explanation: With the code above I simply build the headers for a Pandas DataFrame to store mean and covariances for each class.
I will now create a DataFrame which is dynamically dimensioned and made of nlfc rows, i.e. one row for each facies, and 1+n+m+1 columns, where n is the number of mean columns and m is the length of the linearized covariance matrix; for our sample case where we have only two properties this means:
1 column to store the facies number;
2 columns to store the mean values of the two properties (Ip and Vp/Vs);
4 columns to store the flattened 2x2 covariance matrix (variances and covariances);
1 column to store the number of samples belonging to each facies as a way to control the robustness of our statistical analysis (i.e., undersampled classes could be taken out of the study).
End of explanation
stat
np.math.factorial(3)
Explanation: This is what the stat DataFrame looks like now:
End of explanation
for i in range(1, 1+nlfc):
temp = ww[ww.LFC==i].drop('LFC',1)
stat.ix[(stat.LFC==i),'SAMPLES'] = temp.count()[0]
stat.ix[stat.LFC==i,means[0]:means[-1]] = np.mean(temp.values,0)
stat.ix[stat.LFC==i,covs[0]:covs[-1]] = np.cov(temp,rowvar=0).flatten()
print (temp.describe().ix['mean':'std'])
print ("LFC=%d, number of samples=%d" % (i, temp.count()[0]))
Explanation: So it's like an empty box, made of four rows (because we have 4 classes: shale, brine, oil and gas sands), and for each row we have space to store the mean values of property n.1 (Ip) and n.2 (Vp/Vs), plus their covariances and the number of samples for each class (which will inform me on the robustness of the analysis, i.e. if I have too few samples then my statistical analysis may not be reliable).
The following snippet shows how to populate stat and print a few summaries to check that everything makes sense; also note the use of .flatten() at line 5, which linearizes the covariance matrix as discussed above, so that I can save it in a series of contiguous cells along a row of stat:
End of explanation
stat
Explanation: Now let's look back at stat and see how it has been filled up with all the information I need:
End of explanation
stat.ix[stat.LFC==2, 'IP_mean']
Explanation: I can also interrogate stat to know for example the average Ip for the litho-fluid class 2 (oil sands):
End of explanation
i = 2
pd.scatter_matrix(ww[ww.LFC==i].drop('LFC',1),
color='black',
diagonal='kde',
alpha=0.1,
density_kwds={'color':'#000000','lw':2})
plt.suptitle('LFC=%d' % i)
Explanation: Obviously I need to remember that the mean columns are named after the logs they summarise, so the average Ip is found by querying the column IP_mean (VPVS_mean holds the average values for the second property, in this case Vp/Vs).
If I were working with 3 properties, e.g. Ip, Vp/Vs and density, then the average density value for a hypothetical class 5 would be: stat.ix[stat.LFC==5,'RHO_mean'] (assuming the density log is called RHO, so that its mean column is built as RHO_mean).
To display graphically the same information I use Pandas' scatter_matrix:
End of explanation
NN = 300
mc = pd.DataFrame(data=None,
columns=lognames0,
index=np.arange(nlfc*NN),
dtype='float')
for i in range(1, nlfc+1):
mc.loc[NN*i-NN:NN*i-1, 'LFC'] = i
from numpy.random import multivariate_normal
for i in range(1, nlfc+1):
mean = stat.loc[i-1,
means[0]:means[-1]].values
sigma = np.reshape(stat.loc[i-1,
covs[0]:covs[-1]].values,
(nlogs,nlogs))
m = multivariate_normal(mean,sigma,NN)
mc.ix[mc.LFC==i,1:] = m
Explanation: creation of synthetic datasets
I can now use all this information to create a brand new synthetic dataset that will replicate the average behaviour of the reservoir complex and at the same time overcome typical problems when using real data like undersampling of a certain class, presence of outliers, spurious occurrence of anomalies.
To create the synthetic datasets I use a Monte Carlo simulation relying on a multivariate normal distribution to draw samples that are random but correlated in the elastic domain of choice (Ip and Vp/Vs).
End of explanation
mc.describe()
Explanation: First I define how many samples per class I want (line 1), then I create an empty Pandas DataFrame (lines 3-5) dimensioned like this:
3 columns: the class log LFC plus the two elastic logs IP and VPVS (the names stored in lognames0, already used when building the megalog ww);
the total number of rows will be equal to the number of samples per class (NN = 300 here) multiplied by the number of classes (4).
In lines 7-8 I fill in the LFC column with the numbers assigned to each class. I use the nlfc variable that contains the total number of classes, introduced earlier when creating the stat DataFrame; then I loop over the range 1 to nlfc, and assign 1 to rows 0-299, 2 to rows 300-599, and so on.
Lines 10-13 are the core of the entire procedure. For each class (another loop over the range 1-nlfc) I extract the average value mean and the covariance matrix sigma from the stat DataFrame, then put them into the Numpy np.random.multivariate_normal method to draw randomly selected samples from the continuous and correlated distributions of the properties Ip and Vp/Vs.
So the mc DataFrame will be made of 3 columns (LFC, IP, VPVS) by 1200 rows (we can look at the dimensions with .shape and then have an overview of the matrix with .describe discussed earlier):
End of explanation
f, ax = plt.subplots(nrows=1, ncols=2, sharey=True, sharex=True, figsize=(8, 4))
scatt1 = ax[0].scatter(ww.IP, ww.VPVS,
s=20,
c=ww.LFC,
marker='o',
edgecolors='none',
alpha=0.2,
cmap=cmap_facies,
vmin=0,vmax=4)
scatt2 = ax[1].scatter(mc.IP, mc.VPVS,
s=20,
c=mc.LFC,
marker='o',
edgecolors='none',
alpha=0.5,
cmap=cmap_facies,
vmin=0,vmax=4)
ax[0].set_xlim(3000,9000); ax[0].set_ylim(1.5,3.0);
ax[0].set_title('augmented well data');
ax[1].set_title('synthetic data');
for i in ax: i.grid()
Explanation: And these are the results, comparing the original, augmented dataset (i.e. the results of fluid replacement merged with the insitu log, all stored in the DataFrame ww defined earlier when calculating the statistics) with the newly created synthetic data:
End of explanation |
15,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Data Wrangling
Data Wrangling is the concept of arranging your dataset into a workable format for analysis. When retrieving data from various sources, it is not always in a format that is ready to be analyzed. There are times when there are missing or incorrect values in the dataset, which will reduce the integrity of the analysis performed on it. The process of Data Wrangling has been known to take up as much as 70% of a data scientist's time. In this notebook, we will review a few different formats in which you may retrieve data and some of the initial techniques to ensure that the data is ready for analysis.
Data Retrieval from CSV files
One of the common formats for datasets is the comma-separated values or CSV format. This format works well with both Microsoft Excel and Google Sheets, as both platforms allow you to view these files in a spreadsheet layout. This format is also supported by the pandas DataFrame. In the code below, we read a CSV into a DataFrame, df, and display the first 5 entries.
Step1: From the output, you see that this dataset gives information about baseball players. Here are the last 10 entries of our DataFrame.
Step2: One of the first things that you can do is to analyze the DataFrame to see if there is anything that doesn't look correct, such as variation in the number of entries in each column, min and max values, etc. This can be easily assessed through describe()
Step3: As we can see from this dataset, there is a lot of variation in the count across each column. Some of this is due to the fact that there are baseball players in this dataset who are still alive today, thus there is no information on their death date. Some of it can just be due to incomplete information being provided. This is where Data Wrangling techniques must be applied so that we can have a good dataset to work with.
Retrieving Data using SQL queries
SQL queries can be used on DataFrames to extract the necessary data through the SQLite syntax. The pandasql library provides you with the necessary API for SQL queries. Below are a few examples. The first query retrieves the first 10 entries of the birthMonth and birthYear columns.
Step4: The query below retrieves data on players that are less than 200 lbs and organizes the entries by birthCountry
Step5: Using SQL queries makes it simple to retrieve the data that you would like to analyze and then wrangle that data specifically if necessary.
Data Retrieval from JSON formats
Another common format is the JSON format. Data is commonly in this format when you are working with web APIs and NoSQL databases. The example below shows how one can retrieve data from a web service using its REST API. Here we show how to load JSON data into a Python dictionary. We use the requests library and then load the data using the json library. Pprint is used to print the JSON in a more readable format.
Step6: Once the data is in the dictionary, we can access the information as follows
Step7: Here we can see that there are some entries that are not there. There is functiona called fillna that can be used to insert some value for the NaN entries. Here we insert a -1 for all NaN entries in the deathMonth column | Python Code:
import numpy as np
import pandas as pd
df = pd.read_csv('Master.csv')
df.head(5)
Explanation: Intro to Data Wrangling
Data Wrangling is the concept of arranging your dataset into a workable format for analysis. When retrieving data from various sources, it is not always in a format that is ready to be analyzed. There are times when there are missing or incorrect values in the dataset, which will reduce the integrity of the analysis performed on it. The process of Data Wrangling has been known to take up as much as 70% of a data scientist's time. In this notebook, we will review a few different formats in which you may retrieve data and some of the initial techniques to ensure that the data is ready for analysis.
Data Retrieval from CSV files
One of the common formats for datasets is the comma-separated values or CSV format. This format works well with both Microsoft Excel and Google Sheets, as both platforms allow you to view these files in a spreadsheet layout. This format is also supported by the pandas DataFrame. In the code below, we read a CSV into a DataFrame, df, and display the first 5 entries.
End of explanation
df.tail(10)
Explanation: From the output, you see that this dataset gives information about baseball players. Here are the last 10 entries of our DataFrame.
End of explanation
df.describe()
Explanation: One of the first things that you can do is to analyze the DataFrame to see if there is anything that doesn't look correct, such as variation in the number of entries in each column, min and max values, etc. This can be easily assessed through describe()
End of explanation
import pandasql
q = 'SELECT birthMonth, birthYear FROM df LIMIT 10'
sql_sol = pandasql.sqldf(q.lower(),globals())
sql_sol
Explanation: As we can see from this dataset, there is a lot of variation in the count across each column. Some of this is due to the fact that there are baseball players in this dataset who are still alive today, thus there is no information on their death date. Some of it can just be due to incomplete information being provided. This is where Data Wrangling techniques must be applied so that we can have a good dataset to work with.
Retrieving Data using SQL queries
SQL queries can be used on DataFrames to extract the necessary data through the SQLite syntax. The pandasql library provides you with the necessary API for SQL queries. Below are a few examples. The first query retrieves the first 10 entries of the birthMonth and birthYear columns.
End of explanation
q2 = 'SELECT playerID,birthCountry,bats,weight FROM df WHERE weight < 200 GROUP BY birthCountry LIMIT 10'
sql_sol2 = pandasql.sqldf(q2.lower(),globals())
sql_sol2
Explanation: The query below retrieves data on players that are less than 200 lbs and organizes the entries by birthCountry
End of explanation
import json
import requests
import pprint
url = 'http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=4beab33cc6d65b05800d51f5e83bde1b&artist=Cher&album=Believe&format=json'
data = requests.get(url).text
df3 = json.loads(data)
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(df3)
Explanation: Using SQL queries makes it simple to retrieve the data that you would like to analyze and then wrangle that data specifically if necessary.
Data Retrieval from JSON formats
Another common format is the JSON format. Data is commonly in this format when you are working with web APIs and NoSQL databases. The example below shows how one can retrieve data from a web service using its REST API. Here we show how to load JSON data into a Python dictionary. We use the requests library and then load the data using the json library. Pprint is used to print the JSON in a more readable format.
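As a side note, requests can decode JSON responses directly, so the two-step text/json.loads pattern above can be shortened (same url variable as in the code):
df3 = requests.get(url).json()   # equivalent shortcut to json.loads(requests.get(url).text)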
End of explanation
df3['album']['playcount']
Explanation: Once the data is in the dictionary, we can access the information as follows:
End of explanation
print(df['deathMonth'].head())
df['deathMonth'].fillna(-1).head()
Explanation: Here we can see that some entries are missing (NaN). There is a function called fillna that can be used to insert some value for the NaN entries. Here we insert a -1 for all NaN entries in the deathMonth column
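A couple of other common strategies, sketched on columns already used above:
df['weight'].fillna(df['weight'].mean()).head()   # replace NaN with the column mean
df.dropna(subset=['deathMonth']).head()           # or simply drop the rows that lack a value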
End of explanation |
15,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Displacement-based Earthquake Loss Assessment - Silva et al. 2013
In this fragility method, thousands of synthetic buildings can be produced considering probabilistic distributions for the variability in the geometrical and material properties. The nonlinear capacity can be estimated using the displacement-based earthquake loss assessment theory. The structures are subject to a large set of ground motion records and the performance is calculated. Global limit states are used to estimate the distribution of buildings in each damage state for different levels of ground motion, and a regression algorithm is applied to derive fragility curves for each limit state.
In the following figure, a fragility model developed using this method is presented
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Step4: Obtain the damage probability matrix
The parameter structure_type needs to be defined in the cell below in order to calculate the damage probability matrix. The valid options are "bare frame" and "infilled frame".
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Plot vulnerability function
Step10: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
import DBELA
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: Displacement-based Earthquake Loss Assessment - Silva et al. 2013
In this fragility method, thousands of synthetic buildings can be produced considering probabilistic distributions for the variability in the geometrical and material properties. The nonlinear capacity can be estimated using the displacement-based earthquake loss assessment theory. The structures are subject to a large set of ground motion records and the performance is calculated. Global limit states are used to estimate the distribution of buildings in each damage state for different levels of ground motion, and a regression algorithm is applied to derive fragility curves for each limit state.
In the following figure, a fragility model developed using this method is presented:
<img src="../../../../../figures/fragility_example.png" width="400" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_dbela.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = "../../../../../../rmtk_data/GMRs"
minT, maxT = 0.1, 2.0
gmrs = utils.read_gmrs(gmrs_folder)
utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model_dbela_low_code.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
End of explanation
structure_type = "bare frame"
PDM = DBELA.calculate_fragility(capacity_curves, gmrs, damage_model, structure_type)
Explanation: Obtain the damage probability matrix
The parameter structure_type needs to be defined in the cell below in order to calculate the damage probability matrix. The valid options are "bare frame" and "infilled frame".
End of explanation
IMT = "Sa"
period = 2.0
damping_ratio = 0.05
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
2. period: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. damping_ratio: This parameter defines the damping ratio for the structure
4. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
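For reference, each fitted curve has the usual lognormal CDF form shown below, where $\theta_{ds}$ is the median intensity and $\beta_{ds}$ the logarithmic standard deviation for damage state $ds$ (generic notation, not RMTK variable names):
$$ P(DS \geq ds \mid IM = x) = \Phi\!\left(\frac{\ln(x/\theta_{ds})}{\beta_{ds}}\right) $$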
End of explanation
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model_dbela.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
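In symbols (generic notation, not tied to the code), the mean loss ratio at an intensity level $im$ amounts to
$$ \mathrm{LR}(im) = \sum_{ds} P(DS = ds \mid im) \cdot \mathrm{DR}_{ds}, $$
where $\mathrm{DR}_{ds}$ is the damage ratio of damage state $ds$ taken from the consequence model.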
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "nrml"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
15,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hash Tables (Open Hashing)
This notebook provides a simple implementation of a hash table that uses open hashing.
The class ListMap from the notebook ListMap.ipynb implements a map as a linked list.
Step1: The function ord maps each character c to its ASCII code, i.e. c $\mapsto$ ord(c).
Step2: Given a string $w$ and the size $n$ of the hash table, the function $\texttt{hash_code}(w, n)$ calculates the hash code of $w$. For a string
$w = c_0c_1\cdots c_{m-1}$ of length $m$, this function is defined as follows
Step3: Let us test this function.
Step4: Hash tables work best if their size is a prime number. Therefore, the variable Primes stores a list of prime numbers.
These numbers are organized so that Primes[i+1] is roughly twice as big as Primes[i]. | Python Code:
%run ListMap.ipynb
Explanation: Hash Tables (Open Hashing)
This notebook provides a simple implementation of a hash table that uses open hashing.
The class ListMap from the notebook ListMap.ipynb implements a map as a linked list.
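The hash table below relies only on a small part of that class; the interface assumed here (inferred from the calls made further down, since ListMap.ipynb itself is not shown) is roughly:
# m = ListMap()          creates an empty map
# m.find(key)            -> the stored value if key is present, otherwise None
# m.insert(key, value)   -> 1 if a new key was added, 0 if an existing key was updated
# m.delete(key)          -> 1 if the key was present and removed, 0 otherwise
# for key, value in m:   iteration yields the stored (key, value) pairs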
End of explanation
import string
for c in string.ascii_letters:
print(f'{c} ↦ {ord(c)}')
Explanation: The function ord maps each character c to its ASCII code, i.e. c $\mapsto$ ord(c).
End of explanation
def hash_code(w, n):
m = len(w)
s = 0
for k in range(m-1, -1, -1):
s = (s * 128 + ord(w[k])) % n
return s
Explanation: Given a string $w$ and the size $n$ of the hash table, the function $\texttt{hash_code}(w, n)$ calculates the hash code of $w$. For a string
$w = c_0c_1\cdots c_{m-1}$ of length $m$, this function is defined as follows:
$$ \texttt{hash_code}(w, n) = \left(\sum\limits_{i=0}^{m-1} \texttt{ord}(c_i) \cdot 128^i\right) \;\texttt{%}\; n $$
In order to prevent overflows when computing the numbers $128^i$ we can define the partial sum $s_k$ for
$k=0,1,\cdots,m-1$ by induction:
- $s_{0} = \texttt{ord}(c_{m-1}) \;\texttt{%}\; n$,
- $s_{k+1} = \bigl(s_k \cdot 128 + \texttt{ord}(c_{m-2-k}) \bigr) \;\texttt{%}\; n$.
Then we have
$$ s_{m-1} = \left(\sum\limits_{i=0}^{m-1} \texttt{ord}(c_i) \cdot 128^i\right) \;\texttt{%}\; n. $$
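A small check, using only the definitions above, that the Horner-style loop really computes the same value as the direct sum:
w, n = 'George W. Bush', 6761
direct = sum(ord(c) * 128**i for i, c in enumerate(w)) % n
print(direct, hash_code(w, n))   # the two numbers should be equal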
End of explanation
hash_code('George W. Bush', 6761)
class HashTable:
def __init__(self, n, code=hash_code):
self.mSize = n
self.mEntries = 0 # number of entries
self.mArray = [ ListMap() for i in range(self.mSize) ] # array of empty ListMap objects
self.mAlpha = 2 # load factor
self.mCode = code
Explanation: Let us test this function.
End of explanation
HashTable.Primes = [ 3, 7, 13, 31, 61, 127, 251, 509, 1021, 2039, 4093,
8191, 16381, 32749, 65521, 131071, 262139, 524287,
1048573, 2097143, 4194301, 8388593, 16777213,
33554393, 67108859, 134217689, 268435399,
536870909, 1073741789, 2147483647
]
def find(self, key):
index = self.mCode(key, self.mSize)
aList = self.mArray[index]
return aList.find(key)
HashTable.find = find
del find
def insert(self, key, value):
if self.mEntries >= self.mSize * self.mAlpha:
self._rehash()
index = self.mCode(key, self.mSize)
aList = self.mArray[index]
self.mEntries += aList.insert(key, value)
HashTable.insert = insert
del insert
def _rehash(self):
for p in HashTable.Primes:
if p * self.mAlpha > self.mEntries:
prime = p
break
newTable = HashTable(prime, self.mCode)
for aList in self.mArray:
for k, v in aList:
newTable.insert(k, v)
self.mSize = prime
self.mArray = newTable.mArray
HashTable._rehash = _rehash
del _rehash
def delete(self, key):
if 2 * self.mEntries <= self.mSize * self.mAlpha:
self._rehash()
index = self.mCode(key, self.mSize)
aList = self.mArray[index]
self.mEntries -= aList.delete(key)
HashTable.delete = delete
del delete
def allKeys(self):
Result = []
for L in self.mArray:
for key, _ in L:
Result.append(key)
return Result
HashTable.allKeys = allKeys
del allKeys
def __repr__(self):
result = ''
for i, aList in enumerate(self.mArray):
result += f'{i}: {aList}\n'
return result
HashTable.__repr__ = __repr__
del __repr__
t = HashTable(3)
t.insert('Adrian', 8)
t
t.insert('Benjamin', 24)
t
t.insert('Bereket', 1)
t
t.insert('Christian', 13)
t
t.insert('Christian', 14)
t
t.find('Adrian'), t.find('Christian'), t.find('Benjamin')
t.insert('David', 22)
t
t.insert('Ephraim', 19)
t
t.insert('Erwin', 26)
t
t.insert('Felix', 4)
t
t.insert('Florian', 9)
t
t.insert('Giorgio', 20)
t
t.insert('Jan', 7)
t
t.insert('Janis', 16)
t
t.insert('Josia', 18)
t
t.insert('Kai', 3)
t
t.insert('Lars', 21)
t
t.insert('Lucas', 0)
t
t.insert('Marcel', 5)
t
t.insert('Marius', 6)
t
t.insert('Markus', 17)
t
t.insert('Matthias', 10)
t
t.insert('Nick', 11)
t
t.insert('Patrick', 23)
t
t.insert('Petra', 27)
t
t.insert('Rene', 15)
t
t.insert('Sebastian', 25)
t
t.insert('Stefan', 2)
t
t.find('Stefan')
t.delete('Adrian')
t
t.delete('Adrian')
t.delete('Benjamin')
t.delete('Bereket')
t.delete('Christian')
t.delete('Christian')
t.delete('David')
t.delete('Ephraim')
t.delete('Erwin')
t.delete('Felix')
t.delete('Florian')
t.delete('Giorgio')
t.delete('Jan')
t.delete('Janis')
t.delete('Josia')
t.delete('Kai')
t.delete('Lars')
t.delete('Lucas')
t.delete('Marcel')
t.delete('Marius')
t.delete('Markus')
t.delete('Matthias')
t.delete('Nick')
t.delete('Patrick')
t.delete('Petra')
t.delete('Rene')
t.delete('Sebastian')
t.delete('Stefan')
t
def primes(n=100):
S = HashTable(20, code=lambda x, n: x % n)
for i in range(2, n + 1):
S.insert(i, i)
for i in range(2, n // 2 + 1):
for j in range(i, n // i + 1):
S.delete(i * j)
return S.allKeys()
L = primes()
print(L)
print(sorted(L))
Explanation: Hash tables work best if their size is a prime number. Therefore, the variable Primes stores a list of prime numbers.
These numbers are organized so that Primes[i+1] is roughly twice as big as Primes[i].
End of explanation |
15,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MIMO Least Squares Detection
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).
This code illustrates
Step1: We want to transmit $x$ over a MIMO channel $H\in \mathbb{R}^{k \times n}$.
The receiver measures $y$, which is the result of
$y=Hx$. At the receiver side, we have channel state information (CSI) and therefore know $H$.
Specify the simulation parameters.
You can vary $k$ (number of receive antennas) but leave $n$ (number of transmit antennas) fixed
if you want to get a graphical output.
Step2: Now, we want to estimate $\boldsymbol{x}$ by using a Least-Square Detector
Step3: Plots
Step4: Now we use Newton's method. It reaches the minimum in one step,
because the objective function is quadratic (Least-Square).
Step5: A limitation of the transmit signal energy is known.
$\boldsymbol{x}^T\boldsymbol{x} \leq 1$.
We add this information as a constraint to the problem with the use
of a Lagrange multiplier.
Use gradient descent direction to find the optimal $\boldsymbol{x}$ of the new constrained
problem.
Step6: Plots | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: MIMO Least Squares Detection
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).
This code illustrates:
* Toy example of MIMO Detection with constrained least-squares
* Implementation of constrained least squares via gradient descent
End of explanation
n = 2 # Number of TX antennas. Leave n fixed to 2!
k = 3 # Number of RX antennas. You can play around with k.
x = np.random.rand(n) # Transmit data (random).
x = x/np.linalg.norm(x) * np.random.rand() # Normalize x to a transmit energy in [0,1].
H = np.random.randn(k, n) # MIMO channel (random).
y = np.dot(H, x) # Apply channel to data.
print("x =",x)
Explanation: We want to transmit $x$ over a MIMO channel $H\in \mathbb{R}^{k \times n}$.
The receiver measures $y$, which is the result of
$y=Hx$. At the receiver side, we have channel state information (CSI) and therefore know $H$.
Specify the simulation parameters.
You can vary $k$ (number of receive antennas) but leave $n$ (number of transmit antennas) fixed
if you want to get a graphical output.
End of explanation
delta = 1e-9 # Threshold for stopping criterion.
epsilon = 1e-4 # Step length.
max_iter = 100000
# Initial guess.
init_xg = np.random.rand(*x.shape)*1.4
xg = init_xg
# Gradient descent line search.
points = []
while len(points) < max_iter:
points.append(xg)
grad = 2*H.T.dot(H).dot(xg)-2*np.dot(H.T,y) # Calc gradient at current position.
if np.linalg.norm(grad) < delta:
break
xg = xg - 2*epsilon*grad
print("xg =",xg)
Explanation: Now, we want to estimate $\boldsymbol{x}$ by using a Least-Square Detector:
$\min\limits_{\boldsymbol{x}} ||\boldsymbol{H}\boldsymbol{x}-\boldsymbol{y}||_2^2$.
This is a minimization problem.
The first approach is a line search with gradient descent direction and fixed step length.
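For the quadratic objective $f(\boldsymbol{x}) = \Vert\boldsymbol{H}\boldsymbol{x}-\boldsymbol{y}\Vert_2^2$ the gradient computed at every iteration of the loop above is
$$ \nabla f(\boldsymbol{x}) = 2\boldsymbol{H}^T(\boldsymbol{H}\boldsymbol{x}-\boldsymbol{y}), \qquad \boldsymbol{x}^{(i+1)} = \boldsymbol{x}^{(i)} - \varepsilon\,\nabla f(\boldsymbol{x}^{(i)}), $$
where in the code the constant in front of the gradient is $2\epsilon$ rather than $\varepsilon$, which merely rescales the effective step length.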
End of explanation
def obj_func(mesh):
return np.linalg.norm(np.tensordot(H, mesh, axes=1)-y[:, np.newaxis, np.newaxis], axis=0)**2
# Least-Square function.doing a matrix multiplication for a mesh
x_grid = np.arange(-1.5, 1.5, 0.02)
y_grid = np.arange(-1.5, 1.5, 0.02)
X, Y = np.meshgrid(x_grid, y_grid)
fZ = obj_func([X, Y])
# Line search trajectory.
trajectory_x = [points[i][0] for i in range(len(points))]
trajectory_y = [points[i][1] for i in range(len(points))]
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
params= {'text.latex.preamble' : [r'\usepackage{amsmath}']}
plt.rcParams.update(params)
plt.figure(1,figsize=(15,6))
plt.rcParams.update({'font.size': 15})
plt.subplot(121)
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.plot(x[0], x[1], marker='x',color='r',markersize=12, markeredgewidth=2)
plt.plot(init_xg[0],init_xg[1], marker='x',color='g',markersize=12, markeredgewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),[np.linalg.norm(p-x) for p in points])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$\Vert f(\boldsymbol{x}^{(i)})-\boldsymbol{x}\Vert_2$")
plt.show()
Explanation: Plots:
* [left subplot]: The function and the trajectory of the line search.
The minimum at $x$ is marked with a red cross and
the first guess with a green cross.
* [right subplot]: The euclidean distance of the trajectory
to the minimum at each iteration.
End of explanation
xh = np.linalg.inv(H.T.dot(H)).dot(H.T).dot(y)
print('xh = ', xh)
Explanation: Now we use Newton's method. It reaches the minimum in one step,
because the objective function is quadratic (Least-Square).
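Why a single step suffices: the Hessian of this quadratic objective is constant, so one Newton step from any starting point $\boldsymbol{x}^{(0)}$ already lands on the minimizer,
$$ \nabla^2 f = 2\boldsymbol{H}^T\boldsymbol{H}, \qquad \boldsymbol{x}^{(0)} - (\nabla^2 f)^{-1}\nabla f(\boldsymbol{x}^{(0)}) = (\boldsymbol{H}^T\boldsymbol{H})^{-1}\boldsymbol{H}^T\boldsymbol{y}, $$
which is exactly the pseudo-inverse solution computed in the cell above.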
End of explanation
max_iter = 100000
lam = 5 # Init value for lambda.
init_xg = np.random.rand(*x.shape)*1.4 # Initial guess.
xg = init_xg
points = []
while len(points) < max_iter:
points.append(xg)
xg = np.linalg.inv(H.T.dot(H)+lam*np.identity(n)).dot(H.T).dot(y)
lam = lam - epsilon*(1-xg.T.dot(xg))
if np.abs(1-xg.T.dot(xg)) < delta or lam < delta:
break
print(xg)
Explanation: A limitation of the transmit signal energy is known.
$\boldsymbol{x}^T\boldsymbol{x} \leq 1$.
We add this information as a constraint to the problem with the use
of a Lagrange multiplier.
Use gradient descent direction to find the optimal $\boldsymbol{x}$ of the new constrained
problem.
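A short sketch of the math behind the loop above: with a multiplier $\lambda \geq 0$ the Lagrangian reads
$$ L(\boldsymbol{x},\lambda) = \Vert\boldsymbol{H}\boldsymbol{x}-\boldsymbol{y}\Vert_2^2 + \lambda\,(\boldsymbol{x}^T\boldsymbol{x} - 1). $$
Setting $\nabla_{\boldsymbol{x}} L = 0$ gives $\boldsymbol{x}(\lambda) = (\boldsymbol{H}^T\boldsymbol{H} + \lambda\boldsymbol{I})^{-1}\boldsymbol{H}^T\boldsymbol{y}$, which is the update used in the code; $\lambda$ is then adjusted along the gradient of $L$ with respect to $\lambda$, i.e. $(\boldsymbol{x}^T\boldsymbol{x}-1)$, until the energy constraint is (approximately) met or $\lambda$ shrinks towards zero.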
End of explanation
trajectory_x = [points[i][0] for i in range(len(points))]
trajectory_y = [points[i][1] for i in range(len(points))]
x_grid = np.arange(-1.5, 1.5, 0.02)
y_grid = np.arange(-1.5, 1.5, 0.02)
X, Y = np.meshgrid(x_grid, y_grid)
fZ = obj_func([X, Y])
plt.figure(1,figsize=(15,6))
plt.subplot(121)
fig = plt.gcf()
ax = fig.gca()
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
circle = plt.Circle((0,0),radius=1, fill=False, color='r')
ax.add_artist(circle)
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.plot(x[0],x[1], marker='x',color='r',markersize=12, markeredgewidth=2)
plt.plot(init_xg[0],init_xg[1], marker='x',color='g',markersize=12, markeredgewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),[np.linalg.norm(p-x) for p in points])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$\Vert f(\boldsymbol{x}^{(i)})-\boldsymbol{x}\Vert$")
plt.show()
Explanation: Plots:
* [left subplot]: The function and the trajectory of the line search.
The minimum at $x$ is marked with a red cross and
the first guess with a green cross. The constraint boundary ($\boldsymbol{x}^T\boldsymbol{x} = 1$) is displayed as a red circle.
* [right subplot]: The euclidean distance of the trajectory
to the minimum at each iteration.
End of explanation |
15,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from [1]; we contrast long vs. short words.
TFCE is described in [2].
References
.. [1] Dufau, S., Grainger, J., Midgley, KJ., Holcomb, PJ. A thousand
words are worth a picture
Step1: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the
Step2: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
Step3: The results of these mass univariate analyses can be visualised by plotting | Python Code:
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import numpy as np
import mne
from mne.channels import find_layout, find_ch_connectivity
from mne.stats import spatio_temporal_cluster_test
np.random.seed(0)
# Load the data
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
name = "NumberOfLetters"
# Split up the data by the median length in letters via the attached metadata
median_value = str(epochs.metadata[name].median())
long = epochs[name + " > " + median_value]
short = epochs[name + " < " + median_value]
Explanation: Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from [1]; we contrast long vs. short words.
TFCE is described in [2].
References
.. [1] Dufau, S., Grainger, J., Midgley, KJ., Holcomb, PJ. A thousand
words are worth a picture: Snapshots of printed-word processing in an
event-related potential megastudy. Psychological Science, 2015
.. [2] Smith and Nichols 2009, "Threshold-free cluster enhancement:
addressing problems of smoothing, threshold dependence, and
localisation in cluster inference", NeuroImage 44 (2009) 83-98.
End of explanation
time_windows = ((200, 250), (350, 450))
elecs = ["Fz", "Cz", "Pz"]
# display the EEG data in Pandas format (first 5 rows)
print(epochs.to_data_frame()[elecs].head())
report = "{elec}, time: {tmin}-{tmax} msec; t({df})={t_val:.3f}, p={p:.3f}"
print("\nTargeted statistical test results:")
for (tmin, tmax) in time_windows:
for elec in elecs:
# extract data
time_win = "{} < time < {}".format(tmin, tmax)
A = long.to_data_frame().query(time_win)[elec].groupby("condition")
B = short.to_data_frame().query(time_win)[elec].groupby("condition")
# conduct t test
t, p = ttest_ind(A.mean(), B.mean())
# display results
format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,
df=len(epochs.events) - 2, t_val=t, p=p)
print(report.format(**format_dict))
Explanation: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the :class:mne.Epochs object has a convenient
:meth:mne.Epochs.to_data_frame method, which returns a dataframe.
This dataframe can then be queried for specific time windows and sensors.
The extracted data can be submitted to standard statistical tests. Here,
we conduct t-tests on the difference between long and short words.
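For reference, the statistic computed by ttest_ind with its default pooled-variance setting is
$$t=\frac{\bar{x}_A-\bar{x}_B}{s_p\sqrt{\tfrac{1}{n_A}+\tfrac{1}{n_B}}},\qquad s_p^2=\frac{(n_A-1)s_A^2+(n_B-1)s_B^2}{n_A+n_B-2},$$
with $n_A+n_B-2$ degrees of freedom, where $A$ and $B$ are the long-word and short-word samples.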
End of explanation
# Calculate statistical thresholds
con = find_ch_connectivity(epochs.info, "eeg")
# Extract data: transpose because the cluster test requires channels to be last
# In this case, inference is done over items. In the same manner, we could
# also conduct the test over, e.g., subjects.
X = [long.get_data().transpose(0, 2, 1),
short.get_data().transpose(0, 2, 1)]
tfce = dict(start=.2, step=.2)
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
X, tfce, n_permutations=100)
significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
Explanation: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
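For orientation, the TFCE score of [2] assigns each point $p$ the value
$$\mathrm{TFCE}(p)=\int_{h_0}^{h_p} e(h)^{E}\,h^{H}\,dh,$$
where $e(h)$ is the extent of the suprathreshold cluster containing $p$ at height $h$; Smith and Nichols suggest $E=0.5$ and $H=2$. The dict(start=.2, step=.2) passed to spatio_temporal_cluster_test defines the discrete heights at which this integral is approximated.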
End of explanation
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long.average(), -short.average()],
weights='equal') # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
topomap_args=time_unit) # show difference wave
# Create ROIs by checking channel labels
pos = find_layout(epochs.info).pos
rois = dict()
for pick, channel in enumerate(epochs.ch_names):
last_char = channel[-1] # for 10/20, last letter codes the hemisphere
roi = ("Midline" if last_char in "z12" else
("Left" if int(last_char) % 2 else "Right"))
rois[roi] = rois.get(roi, list()) + [pick]
# sort channels from front to center
# (y-coordinate of the position info in the layout)
rois = {roi: np.array(picks)[pos[picks, 1].argsort()]
for roi, picks in rois.items()}
# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
vmax = np.abs(evoked.data).max() * 1e6
# Iterate over ROIs and axes
axes = axes.ravel().tolist()
for roi_name, ax in zip(sorted(rois.keys()), axes):
picks = rois[roi_name]
evoked.plot_image(picks=picks, axes=ax, colorbar=False, show=False,
clim=dict(eeg=(-vmax, vmax)), mask=significant_points,
**time_unit)
evoked.nave = None
ax.set_yticks((np.arange(len(picks))) + .5)
ax.set_yticklabels([evoked.ch_names[idx] for idx in picks])
if not ax.is_last_row(): # remove xticklabels for all but bottom axis
ax.set(xlabel='', xticklabels=[])
ax.set(ylabel='', title=roi_name)
fig.colorbar(ax.images[-1], ax=axes, fraction=.1, aspect=20,
pad=.05, shrink=2 / 3, label="uV", orientation="vertical")
plt.show()
Explanation: The results of these mass univariate analyses can be visualised by plotting
:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)
and masking points for significance.
Here, we group channels by Regions of Interest to facilitate localising
effects on the head.
End of explanation |
15,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Two-Level, Six-Factor Full Factorial Design
<br />
<br />
<br />
Table of Contents
Introduction
Factorial Experimental Design
Step1: <a name="fullfactorial"></a>
Two-Level Six-Factor Full Factorial Design
Let's start with our six-factor factorial design example. Six factors means there are six input variables; this is still a two-level experiment, so this is now a $2^6$-factorial experiment.
Additionally, there are now three response variables, $(y_1, y_2, y_3)$.
To generate a table of the 64 experiments to be run at each factor level, we will use the itertools.product function below. This is all put into a DataFrame.
This example generates some random response data, by multiplying a vector of random numbers by the vector of input variable values. (Nothing too complicated.)
Step2: <a name="varlablels"></a>
Defining Variables and Variable Labels
Next we'll define some containers for input variable labels, output variable labels, and any interaction terms that we'll be computing
Step3: Now that we have variable labels for each main effect and interaction effect, we can actually compute those effects.
<a name="computing_effects"></a>
Computing Main and Interaction Effects
We'll start by finding the constant effect, which is the mean of each response
Step4: Next, compute the main effect of each variable, which quantifies the amount the response changes by when the input variable is changed from the -1 to +1 level. That is, it computes the average effect of an input variable $x_i$ on each of the three response variables $y_1, y_2, y_3$.
Step5: Our next step is to crank through each variable interaction level
Step6: We've computed the main and interaction effects for every variable combination (whew!), but now we're at a point where we want to start doing things with these quantities.
<a name="analyzing_effects"></a>
Analyzing Effects
The first and most important question is, what variable, or combination of variables, has the strongest effect on the three responses $y_1$? $y_2$? $y_3$?
To figure this out, we'll need to use the data we computed above. Python makes it easy to slice and dice data. In this case, we've constructed a nested dictionary, with the outer keys mapping to the number of variables and inner keys mapping to particular combinations of input variables. Its pretty easy to convert this to a flat data structure that we can use to sort by variable effects. We've got six "levels" of variable combinations, so we'll flatten effects by looping through all six dictionaries of variable combinations (from main effects to six-variable interaction effects), and adding each entry to a master dictionary.
The master dictionary will be a flat dictionary, and once we've populated it, we can use it to make a DataFrame for easier sorting, printing, manipulating, aggregating, and so on.
Step7: If we were only to look at the list of rankings of each variable, we would see that each response is affected by different input variables, listed below in order of descending importance
Step8: Normally, we would use the main effects that were computed, and their rankings, to eliminate any variables that don't have a strong effect on any of our variables. However, this analysis shows that sometimes we can't eliminate any variables.
All six input variables are depicted as the effects that fall far from the red line - indicating all have a statistically meaningful (i.e., not normally distributed) effect on all three response variables. This means we should keep all six factors in our analysis.
There is also a point on the $y_3$ graph that appears significant on the bottom. Examining the output of the lists above, this point represents the effect for the six-way interaction of all input variables. High-order interactions are highly unlikely (and in this case it is a numerical artifact of the way the responses were generated), so we'll keep things simple and stick to a linear model.
Let's continue our analysis without eliminating any of the six factors, since they are important to all of our responses.
<a name="dof"></a>
Utilizing Degrees of Freedom
Our very expensive, 64-experiment full factorial design (the data for which maps $(x_1,x_2,\dots,x_6)$ to $(y_1,y_2,y_3)$) gives us 64 data points, and 64 degrees of freedom. What we do with those 64 degrees of freedom is up to us.
We could fit an empirical model, or response surface, that has 64 independent parameters, and account for many of the high-order interaction terms - all the way up to six-variable interaction effects. However, high-order effects are rarely important, and are a waste of our degrees of freedom.
Alternatively, we can fit an empirical model with fewer coefficients, using up fewer degrees of freedom, and use the remaining degrees of freedom to characterize the error introduced by our approximate model.
To describe a model with the 6 variables listed above and no other variable interaction effects would use only 6 degrees of freedom, plus 1 degree of freedom for the constant term, leaving 57 degrees of freedom available to quantify error, attribute variance, etc.
Our goal is to use least squares to compute model equations for $(y_1,y_2,y_3)$ as functions of $(x_1,x_2,x_3,x_4,x_5,x_6)$.
Step9: The first ordinary least squares linear model is created to predict values of the first variable, $y_1$, as a function of each of our input variables, the list of which are contained in the xlabs variable. When we perform the linear regression fitting, we see much of the same information that we found in the prior two-level three-factor full factorial design, but here, everything is done automatically.
The model is linear, meaning it's fitting the coefficients of the function
Step10: The StatsModel OLS object prints out quite a bit of useful information, in a nicely-formatted table. Starting at the top, we see a couple of important pieces of information
Step11: <a name="goodness_of_fit"></a>
Quantifying Model Goodness-of-Fit
We can now use these linear models to evaluate each set of inputs and compare the model response $\hat{y}$ to the actual observed response $y$. What we would expect to see, if our model does an adequate job of representing the underlying behavior of the model, is that in each of the 64 experiments, the difference between the model prediction $M$ and the measured data $d$, defined as the residual $r$,
$$
r = \left| d - M \right|
$$
should be comparable across all experiments. If the residuals appear to have functional dependence on the input variables, it is an indication that our model is missing important effects and needs more or different terms. The way we determine this, mathematically, is by looking at a quantile-quantile plot of our errors (that is, a ranked plot of our error magnitudes).
If the residuals are normally distributed, they will follow a straight line; if the plot shows the data have significant wiggle and do not follow a line, it is an indication that the errors are not normally distributed, and are therefore skewed (indicating terms missing from our OLS model).
Step12: Determining whether significant trends are being missed by the model depends on how many points deviate from the red line, and how significantly. If there is a single point that deviates, it does not necessarily indicate a problem; but if there is significant wiggle and most points deviate significantly from the red line, it means that there is something about the relationship between the inputs and the outputs that our model is missing.
There are only a few points deviating from the red line. We saw from the effect quantile for $y_3$ that there was an interaction variable that was important to modeling the response $y_3$, and it is likely this interaction that is leading to noise at the tail end of these residuals. This indicates residual errors (deviations of the model from data) that do not follow a natural, normal distribution, which indicates there is a pattern in the deviations - namely, the interaction effect.
The conclusion about the error from the quantile plots above is that there are only a few points deviation from the line, and no particularly significant outliers. Our model can use some improvement, but it's a pretty good first-pass model.
<a name="distribution_of_error"></a>
Distribution of Error
Another thing we can look at is the normalized error
Step13: Note that in these figures, the bumps at extreme value are caused by the fact that the interval containing the responses includes 0 and values close to 0, so the normalization factor is very tiny, leading to large values.
<a name="aggregating"></a>
Aggregating Results
Let's next aggregate experimental results, by taking the mean over various variables to compute the mean effect for regressed varables. For example, we may want to look at the effects of variables 2, 3, and 4, and take the mean over the other three variables.
This is simple to do with Pandas, by grouping the data by each variable, and applying the mean function on all of the results. The code looks like this
Step14: This functionality can also be used to determine the variance in all of the experimental observations being aggregated. For example, here we aggregate over $x_3 \dots x_6$ and show the variance broken down by $x_1, x_2$ vs $y_1, y_2, y_3$.
Step15: Or even the number of experimental observations being aggregated!
Step16: <a name="dist_variance"></a>
Distributions of Variance
We can convert these dataframes of averages, variances, and counts into data for plotting. For example, if we want to make a histogram of every value in the groupby dataframe, we can use the .values method, so that this
Step17: The distribution of variance looks mostly normal, with some outliers. These are the same outliers that showed up in our quantile-quantile plot, and they'll show up in the plots below as well.
<a name="residual"></a>
Residual vs. Response Plots
Another thing we can do, to look for uncaptured effects, is to look at our residuals vs. $\hat{y}$. This is a further effort to look for underlying functional relationships between $\hat{y}$ and the residuals, which would indicate that our system exhibits behavior not captured by our linear model. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from numpy.random import rand, seed
import seaborn as sns
import scipy.stats as stats
from matplotlib.pyplot import *
seed(10)
Explanation: A Two-Level, Six-Factor Full Factorial Design
<br />
<br />
<br />
Table of Contents
Introduction
Factorial Experimental Design:
Two-Level Six-Factor Full Factorial Design
Variables and Variable Labels
Computing Main and Interaction Effects
Analysis of results:
Analyzing Effects
Quantile-Quantile Effects Plot
Utilizing Degrees of Freedom
Ordinary Least Squares Regression Model
Goodness of Fit
Distribution of Error
Aggregating Results
Distribution of Variance
Residual vs. Response Plots
<br />
<br />
<br />
<a name="intro"></a>
Introduction
This notebook roughly follows content from Box and Draper's Empirical Model-Building and Response Surfaces (Wiley, 1984). This content is covered by Chapter 4 of Box and Draper.
In this notebook, we'll carry out an analysis of a full factorial design, and show how we can obtain information about a system and its responses, and a quantifiable range of certainty about those values. This is the fundamental idea behind empirical model-building and allows us to construct cheap and simple models to represent complex, nonlinear systems.
End of explanation
import itertools
# Create the inputs:
encoded_inputs = list( itertools.product([-1,1],[-1,1],[-1,1],[-1,1],[-1,1],[-1,1]) )
# Create the experiment design table:
doe = pd.DataFrame(encoded_inputs,columns=['x%d'%(i+1) for i in range(6)])
# "Manufacture" observed data y
doe['y1'] = doe.apply( lambda z : sum([ rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
doe['y2'] = doe.apply( lambda z : sum([ 5*rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
doe['y3'] = doe.apply( lambda z : sum([ 100*rand()*z["x%d"%(i)]+0.01*(0.5-rand()) for i in range(1,7) ]), axis=1)
print(doe[['y1','y2','y3']])
Explanation: <a name="fullfactorial"></a>
Two-Level Six-Factor Full Factorial Design
Let's start with our six-factor factorial design example. Six factors means there are six input variables; this is still a two-level experiment, so this is now a $2^6$-factorial experiment.
Additionally, there are now three response variables, $(y_1, y_2, y_3)$.
To generate a table of the 64 experiments to be run at each factor level, we will use the itertools.product function below. This is all put into a DataFrame.
This example generates some random response data, by multiplying a vector of random numbers by the vector of input variable values. (Nothing too complicated.)
End of explanation
labels = {}
labels[1] = ['x1','x2','x3','x4','x5','x6']
for i in [2,3,4,5,6]:
labels[i] = list(itertools.combinations(labels[1], i))
obs_list = ['y1','y2','y3']
for k in labels.keys():
print(str(k) + " : " + str(labels[k]))
Explanation: <a name="varlablels"></a>
Defining Variables and Variable Labels
Next we'll define some containers for input variable labels, output variable labels, and any interaction terms that we'll be computing:
End of explanation
effects = {}
# Start with the constant effect: this is $\overline{y}$
effects[0] = {'x0' : [doe['y1'].mean(),doe['y2'].mean(),doe['y3'].mean()]}
print(effects[0])
Explanation: Now that we have variable labels for each main effect and interaction effect, we can actually compute those effects.
<a name="computing_effects"></a>
Computing Main and Interaction Effects
We'll start by finding the constant effect, which is the mean of each response:
End of explanation
effects[1] = {}
for key in labels[1]:
effects_result = []
for obs in obs_list:
effects_df = doe.groupby(key)[obs].mean()
result = sum([ zz*effects_df.ix[zz] for zz in effects_df.index ])
effects_result.append(result)
effects[1][key] = effects_result
effects[1]
Explanation: Next, compute the main effect of each variable, which quantifies the amount the response changes by when the input variable is changed from the -1 to +1 level. That is, it computes the average effect of an input variable $x_i$ on each of the three response variables $y_1, y_2, y_3$.
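In symbols, the quantity computed for each two-level factor is simply the difference between the mean response at the high and low levels,
$$\mathrm{ME}(x_i)=\left.\bar{y}\right|_{x_i=+1}-\left.\bar{y}\right|_{x_i=-1},$$
evaluated separately for each of $y_1$, $y_2$, $y_3$; the sum over the grouped means weighted by the level $\pm 1$ is exactly this difference.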
End of explanation
for c in [2,3,4,5,6]:
effects[c] = {}
for key in labels[c]:
effects_result = []
for obs in obs_list:
effects_df = doe.groupby(key)[obs].mean()
result = sum([ np.prod(zz)*effects_df.ix[zz]/(2**(len(zz)-1)) for zz in effects_df.index ])
effects_result.append(result)
effects[c][key] = effects_result
def printd(d):
for k in d.keys():
print("%25s : %s"%(k,d[k]))
for i in range(1,7):
printd(effects[i])
Explanation: Our next step is to crank through each variable interaction level: two-variable, three-variable, and on up to six-variable interaction effects. We compute interaction effects for each two-variable combination, three-variable combination, etc.
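In symbols, the quantity computed for a $k$-variable combination is
$$\mathrm{IE}(x_{i_1},\dots,x_{i_k})=\frac{1}{2^{\,k-1}}\sum_{\boldsymbol{z}\in\{-1,+1\}^k}\Big(\prod_{j=1}^{k}z_j\Big)\,\bar{y}(\boldsymbol{z}),$$
where $\bar{y}(\boldsymbol{z})$ is the mean response over all runs at that combination of levels; for $k=1$ this reduces to the main-effect difference given above.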
End of explanation
print(len(effects))
master_dict = {}
for nvars in effects.keys():
effect = effects[nvars]
for k in effect.keys():
v = effect[k]
master_dict[k] = v
master_df = pd.DataFrame(master_dict).T
master_df.columns = obs_list
y1 = master_df['y1'].copy()
y1.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y1:")
print(y1[:10])
y2 = master_df['y2'].copy()
y2.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y2:")
print(y2[:10])
y3 = master_df['y3'].copy()
y3.sort_values(inplace=True,ascending=False)
print("Top 10 effects for observable y3:")
print(y3[:10])
Explanation: We've computed the main and interaction effects for every variable combination (whew!), but now we're at a point where we want to start doing things with these quantities.
<a name="analyzing_effects"></a>
Analyzing Effects
The first and most important question is, what variable, or combination of variables, has the strongest effect on the three responses $y_1$? $y_2$? $y_3$?
To figure this out, we'll need to use the data we computed above. Python makes it easy to slice and dice data. In this case, we've constructed a nested dictionary, with the outer keys mapping to the number of variables and inner keys mapping to particular combinations of input variables. It's pretty easy to convert this to a flat data structure that we can use to sort by variable effects. We've got six "levels" of variable combinations, so we'll flatten effects by looping through all six dictionaries of variable combinations (from main effects to six-variable interaction effects), and adding each entry to a master dictionary.
The master dictionary will be a flat dictionary, and once we've populated it, we can use it to make a DataFrame for easier sorting, printing, manipulating, aggregating, and so on.
End of explanation
# Quantify which effects are not normally distributed,
# to assist in identifying important variables
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
stats.probplot(y1, dist="norm", plot=ax1)
ax1.set_title('y1')
stats.probplot(y2, dist="norm", plot=ax2)
ax2.set_title('y2')
stats.probplot(y3, dist="norm", plot=ax3)
ax3.set_title('y3')
Explanation: If we were only to look at the list of rankings of each variable, we would see that each response is affected by different input variables, listed below in order of descending importance:
* $y_1$: 136254
* $y_2$: 561234
* $y_3$: 453216
This is a somewhat mixed message that's hard to interpret - can we get rid of variable 2? We can't eliminate 1, 4, or 5, and probably not 3 or 6 either.
However, looking at the quantile-quantile plot of the effects answers the question in a more visual way.
<a name="quantile_effects"></a>
Quantile-Quantile Effects Plot
We can examine the distribution of the various input variable effects using a quantile-quantile plot of the effects. Quantile-quantile plots arrange the effects in order from least to greatest, and can be applied in several contexts (as we'll see below, when assessing model fits). If the quantities plotted on a quantile-quantile plot are normally distributed, they will fall on a straight line; data that do not fall on the straight line indicate significant deviations from normal behavior.
In the case of a quantile-quantile plot of effects, non-normal behavior means the effect is particularly strong. By identifying the outlier points on these quantile-quantile plots (they're ranked in order, so they correspond to the lists printed above), we can identify the input variables most likely to have a strong impact on the responses.
We need to look both at the top (the variables that have the largest overall positive effect) and the bottom (the variables that have the largest overall negative effect) for significant outliers. When we find outliers, we can add them to a list of variables that we have decided are important and will keep in our analysis.
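(For reference, a normal probability plot of this kind is built by sorting the $n$ effect estimates and plotting the $i$-th ordered value against a standard normal quantile, approximately $\Phi^{-1}\big((i-0.5)/n\big)$; scipy's probplot uses a slightly different plotting-position formula, but the interpretation is the same.)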
End of explanation
xlabs = ['x1','x2','x3','x4','x5','x6']
ylabs = ['y1','y2','y3']
ls_data = doe[xlabs+ylabs]
import statsmodels.api as sm
import numpy as np
x = ls_data[xlabs]
x = sm.add_constant(x)
Explanation: Normally, we would use the main effects that were computed, and their rankings, to eliminate any variables that don't have a strong effect on any of our variables. However, this analysis shows that sometimes we can't eliminate any variables.
All six input variables are depicted as the effects that fall far from the red line - indicating all have a statistically meaningful (i.e., not normally distributed) effect on all three response variables. This means we should keep all six factors in our analysis.
There is also a point on the $y_3$ graph that appears significant on the bottom. Examining the output of the lists above, this point represents the effect for the six-way interaction of all input variables. High-order interactions are highly unlikely (and in this case it is a numerical artifact of the way the responses were generated), so we'll keep things simple and stick to a linear model.
Let's continue our analysis without eliminating any of the six factors, since they are important to all of our responses.
<a name="dof"></a>
Utilizing Degrees of Freedom
Our very expensive, 64-experiment full factorial design (the data for which maps $(x_1,x_2,\dots,x_6)$ to $(y_1,y_2,y_3)$) gives us 64 data points, and 64 degrees of freedom. What we do with those 64 degrees of freedom is up to us.
We could fit an empirical model, or response surface, that has 64 independent parameters, and account for many of the high-order interaction terms - all the way up to six-variable interaction effects. However, high-order effects are rarely important, and are a waste of our degrees of freedom.
Alternatively, we can fit an empirical model with fewer coefficients, using up fewer degrees of freedom, and use the remaining degrees of freedom to characterize the error introduced by our approximate model.
To describe a model with the 6 variables listed above and no other variable interaction effects would use only 6 degrees of freedom, plus 1 degree of freedom for the constant term, leaving 57 degrees of freedom available to quantify error, attribute variance, etc.
Our goal is to use least squares to compute model equations for $(y_1,y_2,y_3)$ as functions of $(x_1,x_2,x_3,x_4,x_5,x_6)$.
End of explanation
y1 = ls_data['y1']
est1 = sm.OLS(y1,x).fit()
print(est1.summary())
Explanation: The first ordinary least squares linear model is created to predict values of the first variable, $y_1$, as a function of each of our input variables, the list of which are contained in the xlabs variable. When we perform the linear regression fitting, we see much of the same information that we found in the prior two-level three-factor full factorial design, but here, everything is done automatically.
The model is linear, meaning it's fitting the coefficients of the function:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4 + a_5 x_5 + a_6 x_6
$$
(here, the variables $y$ and $x$ are vectors, with one component for each response; in our case, they are three-dimensional vectors.)
Because there are 64 observations and 7 coefficients, the 57 extra observations give us extra degrees of freedom with which to assess how good the model is. That analysis can be done with an ordinary least squares (OLS) model, available through the statsmodel library in Python.
<a name="ols"></a>
Ordinary Least Squares Regression Model
This built-in OLS model will fit an input vector $(x_1,x_2,x_3,x_4,x_5,x_6)$ to an output vector $(y_1,y_2,y_3)$ using a linear model; the OLS model is designed to fit the model with more observations than coefficients, and utilize the remaining data to quantify the fit of the model.
Let's run through one of these, and analyze the results:
End of explanation
y2 = ls_data['y2']
est2 = sm.OLS(y2,x).fit()
print(est2.summary())
y3 = ls_data['y3']
est3 = sm.OLS(y3,x).fit()
print(est3.summary())
Explanation: The StatsModel OLS object prints out quite a bit of useful information, in a nicely-formatted table. Starting at the top, we see a couple of important pieces of information: specifically, the name of the dependent variable (the response) that we're looking at, the number of observations, and the number of degrees of freedom.
We can see an $R^2$ statistic, which indicates how well this data is fit with our linear model, and an adjusted $R^2$ statistic, which accounts for the large number of degrees of freedom. While an adjusted $R^2$ of 0.73 is not great, we have to remember that this linear model is trying to capture a wealth of complexity in six coefficients. Furthermore, the adjusted $R^2$ value is too broad to sum up how good our model actually is.
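For reference, the usual form of the adjustment is
$$R^2_{\mathrm{adj}}=1-(1-R^2)\,\frac{n-1}{n-p-1},$$
with $n=64$ observations and $p=6$ predictors here, so it penalizes adding terms that do not actually improve the fit.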
The table in the middle is where the most useful information is located. The coef column shows the coefficients $a_0, a_1, a_2, \dots$ for the model equation:
$$
\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4 + a_5 x_5 + a_6 x_6
$$
Using the extra degrees of freedom, an estimate $s^2$ of the variance in the regression coefficients is also computed, and reported in the std err column. Each linear term is attributed the same amount of variance, $\pm 0.082$.
End of explanation
%matplotlib inline
import seaborn as sns
import scipy.stats as stats
from matplotlib.pyplot import *
# Quantify goodness of fit
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
r1 = y1 - est1.predict(x)
r2 = y2 - est2.predict(x)
r3 = y3 - est3.predict(x)
stats.probplot(r1, dist="norm", plot=ax1)
ax1.set_title('Residuals, y1')
stats.probplot(r2, dist="norm", plot=ax2)
ax2.set_title('Residuals, y2')
stats.probplot(r3, dist="norm", plot=ax3)
ax3.set_title('Residuals, y3')
Explanation: <a name="goodness_of_fit"></a>
Quantifying Model Goodness-of-Fit
We can now use these linear models to evaluate each set of inputs and compare the model response $\hat{y}$ to the actual observed response $y$. What we would expect to see, if our model does an adequate job of representing the underlying behavior of the model, is that in each of the 64 experiments, the difference between the model prediction $M$ and the measured data $d$, defined as the residual $r$,
$$
r = \left| d - M \right|
$$
should be comparable across all experiments. If the residuals appear to have functional dependence on the input variables, it is an indication that our model is missing important effects and needs more or different terms. The way we determine this, mathematically, is by looking at a quantile-quantile plot of our errors (that is, a ranked plot of our error magnitudes).
If the residuals are normally distributed, they will follow a straight line; if the plot shows the data have significant wiggle and do not follow a line, it is an indication that the errors are not normally distributed, and are therefore skewed (indicating terms missing from our OLS model).
End of explanation
fig = figure(figsize=(10,12))
ax1 = fig.add_subplot(311)
ax2 = fig.add_subplot(312)
ax3 = fig.add_subplot(313)
axes = [ax1,ax2,ax3]
colors = sns.xkcd_palette(["windows blue", "amber", "faded green", "dusty purple","aqua blue"])
#resids = [r1, r2, r3]
normed_resids = [r1/y1, r2/y2, r3/y3]
for (dataa, axx, colorr) in zip(normed_resids,axes,colors):
sns.kdeplot(dataa, bw=1.0, ax=axx, color=colorr, shade=True, alpha=0.5);
ax1.set_title('Probability Distribution: Normalized Residual Error, y1')
ax2.set_title('Normalized Residual Error, y2')
ax3.set_title('Normalized Residual Error, y3')
Explanation: Determining whether significant trends are being missed by the model depends on how many points deviate from the red line, and how significantly. If there is a single point that deviates, it does not necessarily indicate a problem; but if there is significant wiggle and most points deviate significantly from the red line, it means that there is something about the relationship between the inputs and the outputs that our model is missing.
There are only a few points deviating from the red line. We saw from the effect quantile for $y_3$ that there was an interaction variable that was important to modeling the response $y_3$, and it is likely this interaction that is leading to noise at the tail end of these residuals. This indicates residual errors (deviations of the model from data) that do not follow a natural, normal distribution, which indicates there is a pattern in the deviations - namely, the interaction effect.
The conclusion about the error from the quantile plots above is that there are only a few points deviating from the line, and no particularly significant outliers. Our model can use some improvement, but it's a pretty good first-pass model.
<a name="distribution_of_error"></a>
Distribution of Error
Another thing we can look at is the normalized error: what are the residual errors (differences between our model prediction and our data)? How are their values distributed?
A kernel density estimate (KDE) plot, which is a smoothed histogram, shows the probability distribution of the normalized residual errors. As expected, they're bunched pretty close to zero. There are some bumps far from zero, corresponding to the outliers on the quantile-quantile plot of the errors above. However, they're pretty close to randomly distributed, and therefore it doesn't look like there is any systemic bias there.
End of explanation
# Our original regression variables
xlabs = ['x2','x3','x4']
doe.groupby(xlabs)[ylabs].mean()
# If we decided to go for a different variable set
xlabs = ['x2','x3','x4','x6']
doe.groupby(xlabs)[ylabs].mean()
Explanation: Note that in these figures, the bumps at extreme values are caused by the fact that the interval containing the responses includes 0 and values close to 0, so the normalization factor is very small, leading to large normalized values.
<a name="aggregating"></a>
Aggregating Results
Let's next aggregate experimental results, by taking the mean over various variables to compute the mean effect for regressed variables. For example, we may want to look at the effects of variables 2, 3, and 4, and take the mean over the other three variables.
This is simple to do with Pandas, by grouping the data by each variable, and applying the mean function on all of the results. The code looks like this:
End of explanation
xlabs = ['x1','x2']
doe.groupby(xlabs)[ylabs].var()
Explanation: This functionality can also be used to determine the variance in all of the experimental observations being aggregated. For example, here we aggregate over $x_3 \dots x_6$ and show the variance broken down by $x_1, x_2$ vs $y_1, y_2, y_3$.
End of explanation
doe.groupby(xlabs)[ylabs].count()
Explanation: Or even the number of experimental observations being aggregated!
End of explanation
# Histogram of means of response values, grouped by xlabs
xlabs = ['x1','x2','x3','x4']
print("Grouping responses by %s"%( "-".join(xlabs) ))
dat = np.ravel(doe.groupby(xlabs)[ylabs].mean().values) / np.ravel(doe.groupby(xlabs)[ylabs].var().values)
hist(dat, 10, normed=False, color=colors[3]);
xlabel(r'Relative Variance ($\mu$/$\sigma^2$)')
show()
# Histogram of variances of response values, grouped by xlabs
print("Grouping responses by %s"%( "-".join(xlabs) ))
dat = np.ravel(doe.groupby(xlabs)['y1'].var().values)
hist(dat, normed=True, color=colors[4])
xlabel(r'Variance in $y_{1}$ Response')
ylabel(r'Frequency')
show()
Explanation: <a name="dist_variance"></a>
Distributions of Variance
We can convert these dataframes of averages, variances, and counts into data for plotting. For example, if we want to make a histogram of every value in the groupby dataframe, we can use the .values method, so that this:
doe.groupby(xlabs)[ylabs].mean()
becomes this:
doe.groupby(xlabs)[ylabs].mean().values
This $M \times N$ array can then be flattened into a vector using the ravel() method from numpy:
np.ravel( doe.groupby(xlabs)[ylabs].mean().values )
The resulting data can be used to generate histograms, as shown below:
End of explanation
# normal plot of residuals
fig = figure(figsize=(14,4))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
ax1.plot(y1,r1,'o',color=colors[0])
ax1.set_xlabel('Response value $y_1$')
ax1.set_ylabel('Residual $r_1$')
ax2.plot(y2,r2,'o',color=colors[1])
ax2.set_xlabel('Response value $y_2$')
ax2.set_ylabel('Residual $r_2$')
ax2.set_title('Response vs. Residual Plots')
ax3.plot(y3,r3,'o',color=colors[2])
ax3.set_xlabel('Response value $y_3$')
ax3.set_ylabel('Residual $r_3$')
show()
Explanation: The distribution of variance looks mostly normal, with some outliers. These are the same outliers that showed up in our quantile-quantile plot, and they'll show up in the plots below as well.
<a name="residual"></a>
Residual vs. Response Plots
Another thing we can do, to look for uncaptured effects, is to look at our residuals vs. $\hat{y}$. This is a further effort to look for underlying functional relationships between $\hat{y}$ and the residuals, which would indicate that our system exhibits behavior not captured by our linear model.
End of explanation |
15,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spelling Bee
This notebook starts our deep dive (no pun intended) into NLP by introducing sequence-to-sequence learning on Spelling Bee.
Data Stuff
We take our data set from The CMU pronouncing dictionary
Step1: The CMU pronouncing dictionary consists of sounds/words and their corresponding phonetic description (American pronunciation).
The phonetic descriptions are a sequence of phonemes. Note that the vowels end with integers; these indicate where the stress is.
Our goal is to learn how to spell these words given the sequence of phonemes.
The preparation of this data set follows the same pattern we've seen before for NLP tasks.
Here we iterate through each line of the file and grab each word/phoneme pair that starts with an uppercase letter.
Step2: Next we're going to get a list of the unique phonemes in our vocabulary, as well as add a null "_" for zero-padding.
Step3: Then we create mappings of phonemes and letters to respective indices.
Our letters include the padding element "_", but also "*" which we'll explain later.
Step4: Let's create a dictionary mapping words to the sequence of indices corresponding to its phonemes, and let's do it only for words between 5 and 15 characters long.
Step5: Aside on various approaches to python's list comprehension
Step6: Split lines into words, phonemes, convert to indexes (with padding), split into training, validation, test sets. Note we also find the max phoneme sequence length for padding.
Step7: Sklearn's <tt>train_test_split</tt> is an easy way to split data into training and testing sets.
Step8: Next we proceed to build our model.
Keras code
Step9: Without attention
Step10: The model has three parts
Step11: We can refer to the parts of the model before and after <tt>get_rnn(False)</tt> returns a vector as the encoder and decoder. The encoder has taken a sequence of embeddings and encoded it into a numerical vector that completely describes it's input, while the decoder transforms that vector into a new sequence.
Now we can fit our model
Step12: To evaluate, we don't want to know what percentage of letters are correct but what percentage of words are.
Step13: The accuracy isn't great.
Step14: We can see that sometimes the mistakes are completely reasonable, occasionally they're totally off. This tends to happen with the longer words that have large phoneme sequences.
That's understandable; we'd expect larger sequences to lose more information in an encoding.
Step15: Attention model
This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in less accuracy.
<img src="https://smerity.com/media/images/articles/2016/bahdanau_attn.png" width="600">
Step16: The attentional model doesn't encode into a single vector, but rather a sequence of vectors. The decoder then at every point is passing through this sequence. For example, after the bi-directional RNN we have 16 vectors corresponding to each phoneme's output state. Each output state describes how each phoneme relates between the other phonemes before and after it. After going through more RNN's, our goal is to transform this sequence into a vector of length 15 so we can classify into characters.
A smart way to take a weighted average of the 16 vectors for each of the 15 outputs, where each set of weights is unique to the output. For example, if character 1 only needs information from the first phoneme vector, that weight might be 1 and the others 0; if it needed information from the 1st and 2nd equally, those two might be 0.5 each.
The weights for combining all the input states to produce specific outputs can be learned using an attentional model; we update the weights using SGD, and train it jointly with the encoder/decoder. Once we have the outputs, we can classify the character using softmax as usual.
Notice below we do not have an RNN that returns a flat vector as we did before; we have a sequence of vectors as desired. We can then pass a sequence of encoded states into the our custom <tt>Attention</tt> model.
This attention model also uses a technique called teacher forcing; in addition to passing the encoded hidden state, we also pass the correct answer for the previous time period. We give this information to the model because it makes it easier to train. In the beginning of training, the model will get most things wrong, and if your earlier character predictions are wrong then your later ones will likely be as well. Teacher forcing allows the model to still learn how to predict later characters, even if the earlier characters were all wrong.
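As a toy illustration of what the teacher-forced decoder input looks like (mirroring the go-token construction used in the code, with "*" as the go token; the word "cat" is just an example):
```python
# Hypothetical example: spelling the word "cat".
target    = ["c", "a", "t"]        # what the decoder should produce
dec_input = ["*"] + target[:-1]    # ["*", "c", "a"]: the previous *correct* letters
# At step i the decoder sees dec_input[i] and must predict target[i],
# so a wrong early prediction does not corrupt the inputs for later steps.
```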
Step17: We can now train, passing in the decoder inputs as well for teacher forcing.
Step18: Better accuracy!
Step19: This model is certainly performing better with longer words. The mistakes it's making are reasonable, and it even successfully formed the word "partisanship".
Step20: Test code for the attention layer | Python Code:
%matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
np.set_printoptions(4)
PATH = 'data/spellbee/'
limit_mem()
from sklearn.model_selection import train_test_split
Explanation: Spelling Bee
This notebook starts our deep dive (no pun intended) into NLP by introducing sequence-to-sequence learning on Spelling Bee.
Data Stuff
We take our data set from The CMU pronouncing dictionary
End of explanation
lines = [l.strip().split(" ") for l in open(PATH+"cmudict-0.7b", encoding='latin1')
if re.match('^[A-Z]', l)]
lines = [(w, ps.split()) for w, ps in lines]
lines[0], lines[-1]
Explanation: The CMU pronouncing dictionary consists of sounds/words and their corresponding phonetic description (American pronunciation).
The phonetic descriptions are a sequence of phonemes. Note that the vowels end with integers; these indicate where the stress is.
Our goal is to learn how to spell these words given the sequence of phonemes.
The preparation of this data set follows the same pattern we've seen before for NLP tasks.
Here we iterate through each line of the file and grab each word/phoneme pair that starts with an uppercase letter.
End of explanation
phonemes = ["_"] + sorted(set(p for w, ps in lines for p in ps))
phonemes[:5]
len(phonemes)
Explanation: Next we're going to get a list of the unique phonemes in our vocabulary, as well as add a null "_" for zero-padding.
End of explanation
p2i = dict((v, k) for k,v in enumerate(phonemes))
letters = "_abcdefghijklmnopqrstuvwxyz*"
l2i = dict((v, k) for k,v in enumerate(letters))
Explanation: Then we create mappings of phonemes and letters to respective indices.
Our letters include the padding element "_", but also "*" which we'll explain later.
End of explanation
maxlen=15
pronounce_dict = {w.lower(): [p2i[p] for p in ps] for w, ps in lines
if (5<=len(w)<=maxlen) and re.match("^[A-Z]+$", w)}
len(pronounce_dict)
Explanation: Let's create a dictionary mapping words to the sequence of indices corresponding to its phonemes, and let's do it only for words between 5 and 15 characters long.
End of explanation
a=['xyz','abc']
[o.upper() for o in a if o[0]=='x'], [[p for p in o] for o in a], [p for o in a for p in o]
Explanation: Aside on various approaches to python's list comprehension:
* the first list is a typical example of a list comprehension subject to a conditional
* the second is a list comprehension inside a list comprehension, which returns a list of lists
* the third is similar to the second, but is read and behaves like a nested loop
* Since there is no inner bracket, there are no lists wrapping the inner loop
End of explanation
maxlen_p = max([len(v) for k,v in pronounce_dict.items()])
pairs = np.random.permutation(list(pronounce_dict.keys()))
n = len(pairs)
input_ = np.zeros((n, maxlen_p), np.int32)
labels_ = np.zeros((n, maxlen), np.int32)
for i, k in enumerate(pairs):
for j, p in enumerate(pronounce_dict[k]): input_[i][j] = p
for j, letter in enumerate(k): labels_[i][j] = l2i[letter]
go_token = l2i["*"]
dec_input_ = np.concatenate([np.ones((n,1)) * go_token, labels_[:,:-1]], axis=1)
Explanation: Split lines into words, phonemes, convert to indexes (with padding), split into training, validation, test sets. Note we also find the max phoneme sequence length for padding.
End of explanation
(input_train, input_test, labels_train, labels_test, dec_input_train, dec_input_test
) = train_test_split(input_, labels_, dec_input_, test_size=0.1)
input_train.shape
labels_train.shape
input_vocab_size, output_vocab_size = len(phonemes), len(letters)
input_vocab_size, output_vocab_size
Explanation: Sklearn's <tt>train_test_split</tt> is an easy way to split data into training and testing sets.
End of explanation
parms = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}
lstm_params = {}
dim = 240
Explanation: Next we proceed to build our model.
Keras code
End of explanation
def get_rnn(return_sequences= True):
return LSTM(dim, dropout_U= 0.1, dropout_W= 0.1,
consume_less= 'gpu', return_sequences=return_sequences)
Explanation: Without attention
End of explanation
inp = Input((maxlen_p,))
x = Embedding(input_vocab_size, 120)(inp)
x = Bidirectional(get_rnn())(x)
x = get_rnn(False)(x)
x = RepeatVector(maxlen)(x)
x = get_rnn()(x)
x = get_rnn()(x)
x = TimeDistributed(Dense(output_vocab_size, activation='softmax'))(x)
Explanation: The model has three parts:
* We first pass list of phonemes through an embedding function to get a list of phoneme embeddings. Our goal is to turn this sequence of embeddings into a single distributed representation that captures what our phonemes say.
* Turning a sequence into a representation can be done using an RNN. This approach is useful because RNN's are able to keep track of state and memory, which is obviously important in forming a complete understanding of a pronunciation.
* <tt>BiDirectional</tt> passes the original sequence through an RNN, and the reversed sequence through a different RNN and concatenates the results. This allows us to look forward and backwards.
* We do this because in language things that happen later often influence what came before (i.e. in Spanish, "el chico, la chica" means the boy, the girl; the word for "the" is determined by the gender of the subject, which comes after).
* Finally, we arrive at a vector representation of the sequence which captures everything we need to spell it. We feed this vector into more RNN's, which are trying to generate the labels. After this, we make a classification for what each letter is in the output sequence.
* We use <tt>RepeatVector</tt> to help our RNN remember at each point what the original word is that it's trying to translate.
End of explanation
model = Model(inp, x)
model.compile(Adam(), 'sparse_categorical_crossentropy', metrics=['acc'])
hist=model.fit(input_train, np.expand_dims(labels_train,-1),
validation_data=[input_test, np.expand_dims(labels_test,-1)],
batch_size=64, **parms, nb_epoch=3)
hist.history['val_loss']
Explanation: We can refer to the parts of the model before and after <tt>get_rnn(False)</tt> returns a vector as the encoder and decoder. The encoder has taken a sequence of embeddings and encoded it into a numerical vector that completely describes its input, while the decoder transforms that vector into a new sequence.
Now we can fit our model
End of explanation
def eval_keras(input):
preds = model.predict(input, batch_size=128)
predict = np.argmax(preds, axis = 2)
return (np.mean([all(real==p) for real, p in zip(labels_test, predict)]), predict)
Explanation: To evaluate, we don't want to know what percentage of letters are correct but what percentage of words are.
End of explanation
acc, preds = eval_keras(input_test); acc
def print_examples(preds):
print("pronunciation".ljust(40), "real spelling".ljust(17),
"model spelling".ljust(17), "is correct")
for index in range(20):
ps = "-".join([phonemes[p] for p in input_test[index]])
real = [letters[l] for l in labels_test[index]]
predict = [letters[l] for l in preds[index]]
print (ps.split("-_")[0].ljust(40), "".join(real).split("_")[0].ljust(17),
"".join(predict).split("_")[0].ljust(17), str(real == predict))
Explanation: The accuracy isn't great.
End of explanation
print_examples(preds)
Explanation: We can see that sometimes the mistakes are completely reasonable, occasionally they're totally off. This tends to happen with the longer words that have large phoneme sequences.
That's understandable; we'd expect larger sequences to lose more information in an encoding.
End of explanation
import attention_wrapper; importlib.reload(attention_wrapper)
from attention_wrapper import Attention
Explanation: Attention model
This graph demonstrates the accuracy decay for a neural translation task. With an encoding/decoding technique, larger input sequences result in less accuracy.
<img src="https://smerity.com/media/images/articles/2016/bahdanau_attn.png" width="600">
This can be mitigated using an attentional model.
End of explanation
inp = Input((maxlen_p,))
inp_dec = Input((maxlen,))
emb_dec = Embedding(output_vocab_size, 120)(inp_dec)
emb_dec = Dense(dim)(emb_dec)
x = Embedding(input_vocab_size, 120)(inp)
x = Bidirectional(get_rnn())(x)
x = get_rnn()(x)
x = get_rnn()(x)
x = Attention(get_rnn, 3)([x, emb_dec])
x = TimeDistributed(Dense(output_vocab_size, activation='softmax'))(x)
Explanation: The attentional model doesn't encode into a single vector, but rather a sequence of vectors. The decoder then at every point is passing through this sequence. For example, after the bi-directional RNN we have 16 vectors corresponding to each phoneme's output state. Each output state describes how each phoneme relates between the other phonemes before and after it. After going through more RNN's, our goal is to transform this sequence into a vector of length 15 so we can classify into characters.
A smart way to do this is to take a weighted average of the 16 vectors for each of the 15 outputs, where each set of weights is unique to the output. For example, if character 1 only needs information from the first phoneme vector, that weight might be 1 and the others 0; if it needed information from the 1st and 2nd equally, those two might be 0.5 each.
The weights for combining all the input states to produce specific outputs can be learned using an attentional model; we update the weights using SGD, and train it jointly with the encoder/decoder. Once we have the outputs, we can classify the character using softmax as usual.
Notice below we do not have an RNN that returns a flat vector as we did before; we have a sequence of vectors as desired. We can then pass a sequence of encoded states into our custom <tt>Attention</tt> model.
This attention model also uses a technique called teacher forcing; in addition to passing the encoded hidden state, we also pass the correct answer for the previous time period. We give this information to the model because it makes it easier to train. In the beginning of training, the model will get most things wrong, and if your earlier character predictions are wrong then your later ones will likely be as well. Teacher forcing allows the model to still learn how to predict later characters, even if the earlier characters were all wrong.
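Schematically, at output step $t$ the decoder scores each of the 16 encoder state vectors $\boldsymbol{h}_i$ against its own state $\boldsymbol{s}_{t-1}$, turns the scores into weights with a softmax, and uses the weighted average as the context for predicting character $t$:
$$e_{t,i} = a(\boldsymbol{s}_{t-1}, \boldsymbol{h}_i), \qquad \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_j \exp(e_{t,j})}, \qquad \boldsymbol{c}_t = \sum_i \alpha_{t,i}\,\boldsymbol{h}_i.$$
This is the standard additive-attention recipe; the exact scoring function $a(\cdot)$ inside the custom <tt>Attention</tt> layer used here may differ in its details. The context $\boldsymbol{c}_t$, together with the teacher-forced previous character, feeds the softmax over output letters.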
End of explanation
model = Model([inp, inp_dec], x)
model.compile(Adam(), 'sparse_categorical_crossentropy', metrics=['acc'])
hist=model.fit([input_train, dec_input_train], np.expand_dims(labels_train,-1),
validation_data=[[input_test, dec_input_test], np.expand_dims(labels_test,-1)],
batch_size=64, **parms, nb_epoch=3)
hist.history['val_loss']
K.set_value(model.optimizer.lr, 1e-4)
hist=model.fit([input_train, dec_input_train], np.expand_dims(labels_train,-1),
validation_data=[[input_test, dec_input_test], np.expand_dims(labels_test,-1)],
batch_size=64, **parms, nb_epoch=5)
np.array(hist.history['val_loss'])
def eval_keras():
preds = model.predict([input_test, dec_input_test], batch_size=128)
predict = np.argmax(preds, axis = 2)
return (np.mean([all(real==p) for real, p in zip(labels_test, predict)]), predict)
Explanation: We can now train, passing in the decoder inputs as well for teacher forcing.
End of explanation
acc, preds = eval_keras(); acc
Explanation: Better accuracy!
End of explanation
print("pronunciation".ljust(40), "real spelling".ljust(17),
"model spelling".ljust(17), "is correct")
for index in range(20):
ps = "-".join([phonemes[p] for p in input_test[index]])
real = [letters[l] for l in labels_test[index]]
predict = [letters[l] for l in preds[index]]
print (ps.split("-_")[0].ljust(40), "".join(real).split("_")[0].ljust(17),
"".join(predict).split("_")[0].ljust(17), str(real == predict))
Explanation: This model is certainly performing better with longer words. The mistakes it's making are reasonable, and it even successfully formed the word "partisanship".
End of explanation
nb_samples, nb_time, input_dim, output_dim = (64, 4, 32, 48)
x = tf.placeholder(np.float32, (nb_samples, nb_time, input_dim))
xr = K.reshape(x,(-1,nb_time,1,input_dim))
W1 = tf.placeholder(np.float32, (input_dim, input_dim)); W1.shape
W1r = K.reshape(W1, (1, input_dim, input_dim))
W1r2 = K.reshape(W1, (1, 1, input_dim, input_dim))
xW1 = K.conv1d(x,W1r,border_mode='same'); xW1.shape
xW12 = K.conv2d(xr,W1r2,border_mode='same'); xW12.shape
xW2 = K.dot(x, W1)
x1 = np.random.normal(size=(nb_samples, nb_time, input_dim))
w1 = np.random.normal(size=(input_dim, input_dim))
res = sess.run(xW1, {x:x1, W1:w1})
res2 = sess.run(xW2, {x:x1, W1:w1})
np.allclose(res, res2)
W2 = tf.placeholder(np.float32, (output_dim, input_dim)); W2.shape
h = tf.placeholder(np.float32, (nb_samples, output_dim))
hW2 = K.dot(h,W2); hW2.shape
hW2 = K.reshape(hW2,(-1,1,1,input_dim)); hW2.shape
Explanation: Test code for the attention layer
End of explanation |
15,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyTorch Dataset and Dataloader Demo
We illustrate how to build a custom dataset and dataloader for object detection.
We will use our collected and labeled images for object detection. Over 1,000 640x480 RGB images were collected using an off-the-shelf USB camera (A4TECH PK-635G).
The images were labeled using VIA and the image filenames and labels are stored in a CSV file.
Before continuing, please download the dataset from here. Extract the dataset in the same directory as this file. The directory structure is something like this.
--> datasets --> python --> config.py
--> dataloader_demo.ipynb
--> drinks/
--> label_utils.py
--> sample_labels.png
...
Note: Before running this demo, please make sure that you have a wandb.ai account. See our discussion on wandb.ai.
Step1: Login to and initialize wandb. You will need to use your wandb API key to run this demo.
We will use the following dataset and dataloader configuration.
Step2: Dataset and Dataloader for Custom Object Detection
The dataset CSV file is a list of image filenames and their labels. The image filenames and their labels are stored in a CSV file using the following format.
frame,xmin,xmax,ymin,ymax,class_id
0001000.jpg,310,445,104,443,1
0000999.jpg,194,354,96,478,1
0000998.jpg,105,383,134,244,1
0000997.jpg,157,493,89,194,1
0000996.jpg,51,435,207,347,1
...
A label represents the coordinates of the object bounding box.
We will build a dictionary of path_to_image to label mapping. The label is a tensor of the form xmin,xmax,ymin,ymax,class_id. There can be multiple labels for an image since there can be multiple objects in an image.
The ImageDataset class is a custom dataset class that loads the images and labels using the dictionary. ImageDataset is a subclass of the abstract class torch.utils.data.Dataset and supports the __len__() and __getitem__() methods; this is known as a map-style dataset. A dataset can also be iterable-style, supporting the __iter__() method.
Our train and test dataloaders use the wandb configuration.
We also create a custom collate_fn function to handle the labels per image. collate_fn pads all labels in a mini-batch to the same size.
Step3: Visualizing sample data from train split
We visualize sample images from the train split by creating a wandb table with one column that shows an image and its objects using bounding boxes. The annotations are stored in a list of dictionaries named dict, with one dictionary per object bounding box using position, class_id, domain and box_caption as keys. Please check the wandb media documentation for more details.
import torch
import numpy as np
import wandb
import label_utils
from torch.utils.data import DataLoader
from torchvision import transforms
from PIL import Image
Explanation: PyTorch Dataset and Dataloader Demo
We illustrate how to build a custom dataset and dataloader for object detection.
We will use our collected and labeled images for object detection. Over 1,000 640x480 RGB images were collected using an off-the-shelf USB camera (A4TECH PK-635G).
The images were labeled using VIA and the image filenames and labels are stored in a CSV file.
Before continuing, please download the dataset from here. Extract the dataset in the same directory as this file. The directory structure should look something like this.
--> datasets --> python --> config.py
--> dataloader_demo.ipynb
--> drinks/
--> label_utils.py
--> sample_labels.png
...
Note: Before running this demo, please make sure that you have a wandb.ai account. See our discussion on wandb.ai
Sample image annotation
A sample image annotation is shown below. There are only 3 categories: Water, Soda, and Juice. By default the background is the first category. The bounding boxes and class names are shown. Each bounding box is defined by 4 numbers. The numbers define 2 corners of the bounding box: xmin, xmax, ymin, and ymax in pixel coordinates.
<img src="sample_labels.png" width="640" height="480">
Import the required modules.
label_utils is a helper module for loading the CSV file and converting a label to class name. Basically, 0 is background, 1 is Water, 2 is Soda and 3 is Juice. It also contains helper functions to build the label dictionary from the CSV file.
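The helper module itself is not reproduced in this demo. As a rough, hypothetical sketch of what build_label_dictionary presumably does (the function and column names follow the usage below; everything else is an assumption, and the real code lives in label_utils.py), it groups the CSV rows by image filename:
import csv
import numpy as np
def build_label_dictionary_sketch(csv_path, data_dir="drinks/"):
    # hypothetical: returns ({path_to_image: np.array of [xmin, xmax, ymin, ymax, class_id] rows}, classes)
    dictionary, classes = {}, set()
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            key = data_dir + row["frame"]
            box = [float(row[k]) for k in ("xmin", "xmax", "ymin", "ymax", "class_id")]
            dictionary.setdefault(key, []).append(box)
            classes.add(int(row["class_id"]))
    return {k: np.array(v) for k, v in dictionary.items()}, sorted(classes)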
End of explanation
wandb.login()
config = {
"num_workers": 4,
"pin_memory": True,
"batch_size": 32,
"dataset": "drinks",
"train_split": "drinks/labels_train.csv",
"test_split": "drinks/labels_test.csv",}
run = wandb.init(project="dataloader-project", entity="upeee", config=config)
Explanation: Login to and initialize wandb. You will need to use your wandb API key to run this demo.
We will use the following dataset and dataloader configuration.
End of explanation
test_dict, test_classes = label_utils.build_label_dictionary(
config['test_split'])
train_dict, train_classes = label_utils.build_label_dictionary(
config['train_split'])
class ImageDataset(torch.utils.data.Dataset):
def __init__(self, dictionary, transform=None):
self.dictionary = dictionary
self.transform = transform
def __len__(self):
return len(self.dictionary)
def __getitem__(self, idx):
# retrieve the image filename
key = list(self.dictionary.keys())[idx]
# retrieve all bounding boxes
boxes = self.dictionary[key]
# open the file as a PIL image
img = Image.open(key)
# apply the necessary transforms
# transforms like crop, resize, normalize, etc
if self.transform:
img = self.transform(img)
# return a list of images and corresponding labels
return img, boxes
train_split = ImageDataset(train_dict, transforms.ToTensor())
test_split = ImageDataset(test_dict, transforms.ToTensor())
# This is approx 95/5 split
print("Train split len:", len(train_split))
print("Test split len:", len(test_split))
# We do not have a validation split
def collate_fn(batch):
maxlen = max([len(x[1]) for x in batch])
images = []
boxes = []
for i in range(len(batch)):
img, box = batch[i]
images.append(img)
# pad with zeros if less than maxlen
if len(box) < maxlen:
box = np.concatenate(
(box, np.zeros((maxlen-len(box), box.shape[-1]))), axis=0)
box = torch.from_numpy(box)
boxes.append(box)
return torch.stack(images, 0), torch.stack(boxes, 0)
train_loader = DataLoader(train_split,
batch_size=config['batch_size'],
shuffle=True,
num_workers=config['num_workers'],
pin_memory=config['pin_memory'],
collate_fn=collate_fn)
test_loader = DataLoader(test_split,
batch_size=config['batch_size'],
shuffle=False,
num_workers=config['num_workers'],
pin_memory=config['pin_memory'],
collate_fn=collate_fn)
Explanation: Dataset and Dataloader for Custom Object Detection
The dataset CSV file is a list of image filenames and their labels. The image filenames and their labels are stored in a CSV file using the following format.
frame,xmin,xmax,ymin,ymax,class_id
0001000.jpg,310,445,104,443,1
0000999.jpg,194,354,96,478,1
0000998.jpg,105,383,134,244,1
0000997.jpg,157,493,89,194,1
0000996.jpg,51,435,207,347,1
...
A label represents the coordinates of the object bounding box.
We will build a dictionary of path_to_image to label mapping. The label is a tensor of the form xmin,xmax,ymin,ymax,class_id. There can be multiple labels for an image since there can be multiple objects in an image.
The ImageDataset class is a custom dataset class that loads the images and labels using the dictionary. ImageDataset is a subclass of the abstract class torch.utils.data.Dataset and supports the __len__() and __getitem__() methods; this is known as a map-style dataset. A dataset can also be iterable-style, supporting the __iter__() method.
Our train and test dataloaders use the wandb configuration.
We also create a custom collate_fn function to handle the labels per image. collate_fn pads all labels in a mini-batch to the same size.
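As an aside, the iterable-style alternative mentioned above is not used in this demo, but a minimal sketch would look roughly like this: instead of indexing, the dataset yields (image, boxes) pairs one by one from __iter__().
import torch
from PIL import Image
class IterableImageDataset(torch.utils.data.IterableDataset):
    # sketch only (not used in this demo)
    def __init__(self, dictionary, transform=None):
        super().__init__()
        self.dictionary = dictionary
        self.transform = transform
    def __iter__(self):
        for key, boxes in self.dictionary.items():
            img = Image.open(key)
            if self.transform:
                img = self.transform(img)
            yield img, boxes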
End of explanation
# sample one mini-batch
images, boxes = next(iter(train_loader))
# map of label to class name
class_labels = {i: label_utils.index2class(i) for i in train_classes}
run.display(height=1000)
table = wandb.Table(columns=['Image'])
# we use wandb to visualize the objects and bounding boxes
for image, box in zip(images, boxes):
dict = []
for i in range(box.shape[0]):
if box[i, -1] == 0:
continue
dict_item = {}
dict_item["position"] = {
"minX": box[i, 0].item(),
"maxX": box[i, 1].item(),
"minY": box[i, 2].item(),
"maxY": box[i, 3].item(),
}
dict_item["domain"] = "pixel"
dict_item["class_id"] = (int)(box[i, 4].item())
dict_item["box_caption"] = label_utils.index2class(
dict_item["class_id"])
dict.append(dict_item)
img = wandb.Image(image, boxes={
"ground_truth": {
"box_data": dict,
"class_labels": class_labels
}
})
table.add_data(img)
wandb.log({"train_loader": table})
wandb.finish()
Explanation: Visualizing sample data from train split
We visualize sample images from the train split by creating a wandb table with one column that shows an image and its objects using bounding boxes. The annotations are stored in a list of dictionaries named dict, with one dictionary per object bounding box using position, class_id, domain and box_caption as keys. Please check the wandb media documentation for more details.
End of explanation |
15,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lightweight python components
Lightweight python components do not require you to build a new container image for every code change.
They're intended for fast iteration in a notebook environment.
Building a lightweight python component
To build a component just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.
There are several requirements for the function
Step1: Simple function that just adds two numbers
Step2: Convert the function to a pipeline operation
Step3: A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
Step4: Test running the python function directly
Step5: Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
Step6: Define the pipeline
Pipeline function has to be decorated with the @dsl.pipeline decorator
Step7: Submit the pipeline for execution | Python Code:
# Install the SDK
#!pip3 install 'kfp>=0.1.31.2' --quiet
import kfp
import kfp.components as comp
Explanation: Lightweight python components
Lightweight python components do not require you to build a new container image for every code change.
They're intended for fast iteration in a notebook environment.
Building a lightweight python component
To build a component just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
* The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
* The function can only import packages that are available in the base image. If you need to import a package that's not available you can try to find a container image that already includes the required packages. (As a workaround you can use the module subprocess to run pip install for the required package. There is an example below in my_divmod function.)
* If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.
* To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
End of explanation
#Define a Python function
def add(a: float, b: float) -> float:
'''Calculates sum of two arguments'''
return a + b
Explanation: Simple function that just adds two numbers:
End of explanation
add_op = comp.func_to_container_op(add)
Explanation: Convert the function to a pipeline operation
End of explanation
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
'''Divides two numbers and calculate the quotient and remainder'''
#Pip installs inside a component function.
#NOTE: installs should be placed right at the beginning to avoid upgrading a package
# after it has already been imported and cached by python
import sys, subprocess;
subprocess.run([sys.executable, '-m', 'pip', 'install', 'tensorflow==1.8.0'])
#Imports inside a component function:
import numpy as np
#This function demonstrates how to use nested functions inside a component function:
def divmod_helper(dividend, divisor):
return np.divmod(dividend, divisor)
(quotient, remainder) = divmod_helper(dividend, divisor)
from tensorflow.python.lib.io import file_io
import json
# Exports a sample tensorboard:
metadata = {
'outputs' : [{
'type': 'tensorboard',
'source': 'gs://ml-pipeline-dataset/tensorboard-train',
}]
}
# Exports two sample metrics:
metrics = {
'metrics': [{
'name': 'quotient',
'numberValue': float(quotient),
},{
'name': 'remainder',
'numberValue': float(remainder),
}]}
from collections import namedtuple
divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
Explanation: A bit more advanced function which demonstrates how to use imports, helper functions and produce multiple outputs.
End of explanation
my_divmod(100, 7)
Explanation: Test running the python function directly
End of explanation
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
Explanation: Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
End of explanation
import kfp.dsl as dsl
@dsl.pipeline(
name='Calculation pipeline',
description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
a='a',
b='7',
c='17',
):
#Passing pipeline parameter and a constant value as operation arguments
add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.
#Passing a task output reference as operation arguments
#For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
divmod_task = divmod_op(add_task.output, b)
#For an operation with a multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
result_task = add_op(divmod_task.outputs['quotient'], c)
Explanation: Define the pipeline
Pipeline function has to be decorated with the @dsl.pipeline decorator
End of explanation
#Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
#Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)
#vvvvvvvvv This link leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
Explanation: Submit the pipeline for execution
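An alternative worth noting (a sketch, not run here): the pipeline function can also be compiled into a package that can be shared or uploaded through the Kubeflow Pipelines UI. The output filename below is arbitrary.
import kfp.compiler
kfp.compiler.Compiler().compile(calc_pipeline, 'calc_pipeline.zip')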
End of explanation |
15,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook Analyzes Duplication in Motif Definitions
Step1: Import Motif Definitions
Step2: Convert to DataFrame for Analysis of Duplication
Step3: Move On | Python Code:
import venusar
import motif
import thresholds
import motifs
import activity
import tf_expression
import gene_expression
# to get code changes
import imp
imp.reload(motif)
Explanation: Notebook Analyzes Duplication in Motif Definitions
End of explanation
motif_f_base='../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.txt'
pc = 0.1
th = 0
bp = [0.25, 0.25, 0.25, 0.25]
motif_set_base = motif.get_motifs(motif_f_base, pc, th, bp)
motif_set_base.length()
motif_dict = motif_set_base.motif_count(False)
len(motif_dict)
# drop to valid motifs only; do twice to see if any invalid
motif_dict = motif_set_base.motif_count(True)
len(motif_dict)
range(len(motif_set_base.motifs))
range(0, (len(motif_set_base.motifs) - 1))
Explanation: Import Motif Definitions
End of explanation
import pandas
# -- building data frame from dictionary
#df = pandas.DataFrame(motif_dict) # errors about 'you must pass an index'
# ref: http://stackoverflow.com/questions/17839973/construct-pandas-dataframe-from-values-in-variables#17840195
dfCounts = pandas.DataFrame(motif_dict, index=[0])
dfCounts = pandas.melt(dfCounts) # rotate
dfCounts.rename(columns = {'variable': 'TF', 'value': 'tfCount'}, inplace=True)
# -- handling types
# ref: http://stackoverflow.com/questions/15891038/pandas-change-data-type-of-columns
dfCounts.dtypes
#pandas.to_numeric(s, errors='ignore')
dfCounts.size
dfCounts[dfCounts.tfCount > 1]
# add additional information to the dataframe
msb_names,msb_lengths = motif_set_base.motif_lengths(False)
# define data frame by columns
dfc = pandas.DataFrame(
{
"TF" : msb_names,
"TFLength" : msb_lengths
})
print(dfc)
if False:
# define data frame by row (this is wrong; interlaced value sets)
dfr = pandas.melt(pandas.DataFrame(
[ msb_names,
msb_lengths])).rename(columns = {'variable': 'TF', 'value': 'tfCount'}, inplace=True)
dfr
dfCounts[dfCounts.tfCount > 1].TF
dfc
duplication_set = pandas.merge( dfCounts[dfCounts.tfCount > 1], dfc, how='inner', on='TF' ).sort_values('TF')
duplication_set
#duplication_set.select(['TF','TFLength']).groupby(by='TF').rank(method='min')
duplication_set.groupby(by='TF').rank(method='min')
# ref: http://stackoverflow.com/questions/23976176/ranks-within-groupby-in-pandas
dfRank = lambda x: pandas.Series(pandas.qcut(x,2,labels=False),index=x.index)
dfRank2 = lambda x: pandas.qcut(x,2,labels=False)
# this works: replacing x above with duplication_set['TFLength'] but fails when adding groupby, why? fails using apply too.
# duplication_set['ranks'] = duplication_set.groupby('TF')['TFLength'].apply(dfRank)
# duplication_set['ranks'] = duplication_set['TFLength'].apply(dfRank)
# duplication_set['ranks'] = pandas.qcut((duplication_set['TFLength']),2,labels=False)
duplication_set['ranks'] = dfRank2(duplication_set['TFLength'])
# adding rank to try to pivot multiple rows to columns but no dice
# df.pivot(columns='var', values='val')
#duplication_set[['TF','TFLength','ranks']].pivot(columns='ranks',values='TFLength') # stupidly keeps dropping TF column, why? also duplicating rows and not actually pivoting
# duplication_set.pivot_table(df,index=["TF","Ranks"]) # fails, 'grouper for TF not 1 dimensional
# hack: tired of fighting the odd outcome; dplyr is much better than pandas
duplication_set
pandas.qcut(duplication_set['TFLength'],2,labels=False) # doesn't error gives ranks
# this is wrong but not clear why?
duplication_set[['TF','TFLength','ranks']].pivot(index='TF', columns='ranks',values='Lengths') # this is wrong too (note: there is no 'Lengths' column; the intended name is 'TFLength')
duplication_set
duplication_set.dtypes
# duplication_set.pivot(index='TF', columns='ranks',values='TFLength') # this is wrong too errors: pandas pivot ValueError: Index contains duplicate entries, cannot reshape
# led to
# ref: http://stackoverflow.com/questions/28651079/pandas-unstack-problems-valueerror-index-contains-duplicate-entries-cannot-re#28652153
# e.set_index(['id', 'date', 'location'], append=True)
# not this doesn't work either and just creates problems
#duplication_set.set_index(['TF', 'ranks', 'TFLength','tfCount'], append=True)#.pivot( columns='ranks',values='TFLength')
duplication_set[duplication_set.TF == 'RFX5']
Explanation: Convert to DataFrame for Analysis of Duplication
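A sketch of one way around the pivot errors hit above, assuming duplication_set as built earlier: number the duplicate rows within each TF with cumcount() and pivot on that counter, so the (TF, dup_idx) pairs are unique and pandas no longer complains about duplicate index entries.
dup = duplication_set.copy()
dup['dup_idx'] = dup.groupby('TF').cumcount()
wide = dup.pivot(index='TF', columns='dup_idx', values='TFLength')
wide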
End of explanation
# repeating motifs.py sub code set
import vcf
import motifs
import sequence
from pyfaidx import Fasta
force_ref_match = False
file_motif='../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.fpr_0p001.txt.bed_reduced.RFX5.txt'
pc = 0.1
th = 0
bp = [0.25, 0.25, 0.25, 0.25]
ws = 50
motif_set = motif.get_motifs(file_motif, pc, th, bp)
wing_l = max(motif_set.max_positions, ws)
file_reference_genome='../../data/genome_reference/reference_genome_hg19.fa'
fa_ind = Fasta(file_reference_genome) # XXX: need to check, if present skip
file_input = '../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf'
with open(file_input) as vcf_handle:
variant_set = vcf.read_vcf_variant_lines(vcf_handle, False)
for index in range(variant_set.length()):
var_element = variant_set.seq[index] # XXX: WARNING: changes made to element not saved
for index in range(variant_set.length()):
var_element = variant_set.seq[index]
# 1. get reference sequence
var_element.get_surround_seq(wing_l, fa_ind, force_ref_match)
# 2. compute reverse complement
var_element.assign_rev_complement()
# 3. compute int version (faster to process as int)
var_element.assign_int_versions()
ref_seq = var_element.return_full_ref_seq_int(wing_l)
var_seq = var_element.return_full_var_seq_int(wing_l)
print("\tref int: " + format(ref_seq) +
"\n\tvar int: " + format(var_seq))
print("start motif_match_int")
plusmatch = motif_set.motif_match_int(bp, ref_seq, var_seq, wing_l)
print('## Positive Matches ##')
for match in plusmatch:
print( match.name + " vs=" + str(round(match.var_score, 4)) +
" rs = " + str(round(match.ref_score, 4)) )
# 6. Calculate motif matches to reverse complement
ref_seq_rc = var_element.return_full_ref_seq_reverse_complement_int(wing_l)
var_seq_rc = var_element.return_full_var_seq_reverse_complement_int(wing_l)
print("\tref rc int: " + format(ref_seq_rc) +
"\n\tvar rc int: " + format(var_seq_rc))
print("start motif_match_int reverse complement")
minusmatch = motif_set.motif_match_int(bp, ref_seq_rc, var_seq_rc, wing_l)
print('## Reverse Complement Matches ##')
for match in minusmatch:
print( match.name + " vs=" + str(round(match.var_score, 4)) +
" rs = " + str(round(match.ref_score, 4)) )
Explanation: Move On: Just focus on Direct Runs
Debugging the motifs.py code;
grep 'RFX5' output.motif.20170114.vcf | grep '5.4636' > temp_investigate.txt
duplication of RFX5 occurs in chr1, 145039922; chr22 25005718; and chr5 46363751
Build up Reduced Data Set
Built reduced Motif File:
only include RFX5
note that the motif file must end with a blank line
name: HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.fpr_0p001.RFX5.txt
Building Example VCF File [Code]
grep '^#' ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.vcf > ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf
grep 'chr1' ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.vcf | grep '145039922' >> ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf
grep 'chr22' ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.vcf | grep '25005718' >> ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf
grep 'chr5' ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.vcf | grep '46363751' >> ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf
building example vcf file for ZNF143 [Code]
grep '^#' ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.vcf > ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset2.vcf
grep 'chr1' ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.vcf | grep '762601' >> ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset2.vcf
Running the Replacement Code
python3 tf_expression.py -i ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf -o 1 -m ../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.fpr_0p001.RFX5.txt -e ../../data/ALL_ARRAYS_NORMALIZED_MAXPROBE_LOG2_COORDS.sorted.txt -mo ../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.fpr_0p001.txt.bed_reduced.RFX5.txt
Creating gene dictionary for expression data.
start read exp_file:../../data/ALL_ARRAYS_NORMALIZED_MAXPROBE_LOG2_COORDS.sorted.txt
Filtering motif info for TFs that don't meet the expression threshold of 5. Found 17225 genes. Start filtering motifs.
COMPLETE.
python3 motifs.py -i ../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf -r ../../data/genome_reference/reference_genome_hg19.fa -m ../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.fpr_0p001.txt.bed_reduced.RFX5.txt -o ../../data/output.motif.20170507.vcf -fm -fp -ci ../../data/GM12878.ENCODE.ALL_TFS.bed -co ../../data/output.chip_peaks_output.20170507.bed &> ../../data/0_run_logs/20170507_motifs_run_stdout.txt
Examining Logs
- Shows 2 matches, 1 positive, 1 negative (ie reverse complement)
End of explanation |
15,086 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
SciPy has three methods for doing 1D integrals over samples (trapz, simps, and romb) and one way to do a 2D integral over a function (dblquad), but it doesn't seem to have methods for doing a 2D integral over samples -- even ones on a rectangular grid. | Problem:
import numpy as np
x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 30)
from scipy.integrate import simpson
z = np.cos(x[:,None])**4 + np.sin(y)**2
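# the inner simpson call integrates along the last axis (y) for each x; the outer call then integrates over x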
result = simpson(simpson(z, y), x) |
15,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability Calibration using ML-Insights
On Example of Mortality Model Using MIMIC ICU Data*
This workbook is intended to demonstrate why probability calibration may be useful, and how to do it using the ML-Insights package. This is an abridged version of a longer workbook (which goes into more detail).
We build a random forest classifier to predict mortality. We show that the uncalibrated model performs much worse than the calibrated version on log-loss (and brier score).
We demonstrate the "isotonic" and "sigmoid" calibration functions of sklearn and show the they are not as effective.
Finally, we show that even methods like boosting, which we might expect to be well-calibrated, benefit significantly from the calibration capabilities provided by ML-Insights.
*MIMIC-III, a freely accessible critical care database. Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data (2016).
https://mimic.physionet.org
Step1: In the next few cells, we load in some data, inspect it, select columns for our features and outcome (mortality) and fill in missing values with the median of that column.
Step2: Now we divide the data into training and test sets via a 70/30 split.
Step3: Next, we fit a Random Forest model to our training data using the SplineCalibratedClassifierCV function in ML-Insights. The resulting object is a model containing the usual "predict" and "predict_proba" methods. It also contains the uncalibrated classifier and the calibration function that "corrects" the output to give more accurate probabilities. This enables us to see exactly why and how the calibration helps us perform better on metrics such as log-loss and brier score.
Note that the percentage of trees that voted "yes" in a random forest are better understood as mere scores. A higher value should generally indicate a higher probability of mortality. However, there is no reason to expect these to be well-calibrated probabilities. The fact that, say, 60% of the trees voted "yes" on a particular case does not mean that that case has a 60% probability of mortality.
We will demonstrate this empirically later.
Step4: As you can see below, there is a significant improvement after calibration. (Lower loss is better)
Step5: The following shows that the predict_proba method of rfm_calib is the same as using the uncalibrated classifier and then applying the calibration function.
Step6: The default logistic option is intended to reduce the log-loss (aka deviance). But we see here that it improves the Brier Score as well.
Step7: This next plot shows the calibration function that was estimated. If the original model had been perfectly calibrated, we would expect the calibration function to be y=x. Instead, the plot shows that the Random Forest scores overestimate the probability between 0 and 0.1 and underestimate it after that.
Step8: Using histograms and bin counts, we can look at estimated empirical probabilities on the test set to see how well our calibration is working.
Step9: The below plot shows how well the calibrated probabilities fit the empirical (binned) probabilities on an independent test set.
Step10: Existing Sklearn Calibration Functionality
Note, sklearn has a CalibratedClassifierCV function, but it does not seem to work as well.
Step11: The CalibratedClassifierCV in sklearn averages the results of multiple calibrated models, resulting in significant variance which hampers its performance. By contrast, ML-Insights refits the full model, and computes one calibration on the entire cross-validated answer set.
For this reason, the log-loss is significantly worse (and brier score is worse as well) using the isotonic variant (though both are improved from the uncalibrated version)
Step12: The sigmoid variant assumes a strict parametric form for the calibration, which is not accurate in many cases, therefore, its performance is not very good.
Calibrating with Boosting Models
It is perhaps unsurprising that random forest vote percentages would need calibration since they are not intended to estimate the true probabilities. However, as we see below, even methods like boosting perform better if calibration is applied. (Warning: the next cell may take 30+ minutes to run. For a quicker demonstration, decrease max_depth and/or n_estimators.)
Step13: It is interesting to see that the boosting model has a very different calibration function. | Python Code:
# "pip install ml_insights" in terminal if needed
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ml_insights as mli
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.cross_validation import train_test_split, StratifiedKFold
from sklearn.metrics import roc_auc_score, log_loss, brier_score_loss
from sklearn import clone
from sklearn.calibration import CalibratedClassifierCV
Explanation: Probability Calibration using ML-Insights
On Example of Mortality Model Using MIMIC ICU Data*
This workbook is intended to demonstrate why probability calibration may be useful, and how to do it using the ML-Insights package. This is an abridged version of a longer workbook (which goes into more detail).
We build a random forest classifier to predict mortality. We show that the uncalibrated model performs much worse than the calibrated version on log-loss (and brier score).
We demonstrate the "isotonic" and "sigmoid" calibration functions of sklearn and show the they are not as effective.
Finally, we show that even methods like boosting, which we might expect to be well-calibrated, benefit significantly from the calibration capabilities provided by ML-Insights.
*MIMIC-III, a freely accessible critical care database. Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. Scientific Data (2016).
https://mimic.physionet.org
End of explanation
# Load dataset derived from the MIMIC database
lab_aug_df = pd.read_csv("data/lab_vital_icu_table.csv")
lab_aug_df.head(10)
X = lab_aug_df.loc[:,['aniongap_min', 'aniongap_max',
'albumin_min', 'albumin_max', 'bicarbonate_min', 'bicarbonate_max',
'bilirubin_min', 'bilirubin_max', 'creatinine_min', 'creatinine_max',
'chloride_min', 'chloride_max',
'hematocrit_min', 'hematocrit_max', 'hemoglobin_min', 'hemoglobin_max',
'lactate_min', 'lactate_max', 'platelet_min', 'platelet_max',
'potassium_min', 'potassium_max', 'ptt_min', 'ptt_max', 'inr_min',
'inr_max', 'pt_min', 'pt_max', 'sodium_min', 'sodium_max', 'bun_min',
'bun_max', 'wbc_min', 'wbc_max','sysbp_max', 'sysbp_mean', 'diasbp_min', 'diasbp_max', 'diasbp_mean',
'meanbp_min', 'meanbp_max', 'meanbp_mean', 'resprate_min',
'resprate_max', 'resprate_mean', 'tempc_min', 'tempc_max', 'tempc_mean',
'spo2_min', 'spo2_max', 'spo2_mean']]
y = lab_aug_df['hospital_expire_flag']
# Impute the median in each column to replace NA's
median_vec = [X.iloc[:,i].median() for i in range(len(X.columns))]
for i in range(len(X.columns)):
X.iloc[:,i].fillna(median_vec[i],inplace=True)
Explanation: In the next few cells, we load in some data, inspect it, select columns for our features and outcome (mortality) and fill in missing values with the median of that column.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=942)
Explanation: Now we divide the data into training and test sets via a 70/30 split.
End of explanation
rfm = RandomForestClassifier(n_estimators = 500, class_weight='balanced_subsample', random_state=942, n_jobs=-1 )
rfm_calib = mli.SplineCalibratedClassifierCV(rfm)
rfm_calib.fit(X_train,y_train)
test_res_uncalib = rfm_calib.uncalibrated_classifier.predict_proba(X_test)[:,1]
test_res_calib = rfm_calib.predict_proba(X_test)[:,1]
Explanation: Next, we fit a Random Forest model to our training data using the SplineCalibratedClassifierCV function in ML-Insights. The resulting object is a model containing the usual "predict" and "predict_proba" methods. It also contains the uncalibrated classifier and the calibration function that "corrects" the output to give more accurate probabilities. This enables us to see exactly why and how the calibration helps us perform better on metrics such as log-loss and brier score.
Note that the percentage of trees that voted "yes" in a random forest are better understood as mere scores. A higher value should generally indicate a higher probability of mortality. However, there is no reason to expect these to be well-calibrated probabilities. The fact that, say, 60% of the trees voted "yes" on a particular case does not mean that that case has a 60% probability of mortality.
We will demonstrate this empirically later.
End of explanation
log_loss(y_test,test_res_uncalib)
log_loss(y_test,test_res_calib)
Explanation: As you can see below, there is a significant improvement after calibration. (Lower loss is better)
End of explanation
test_res_calib_2 = rfm_calib.calib_func(test_res_uncalib)
log_loss(y_test,test_res_calib_2)
Explanation: The following shows that the predict_proba method of rfm_calib is the same as using the uncalibrated classifier and then applying the calibration function.
End of explanation
brier_score_loss(y_test,test_res_uncalib)
brier_score_loss(y_test,test_res_calib)
Explanation: The default logistic option is intended to reduce the log-loss (aka deviance). But we see here that it improves the Brier Score as well.
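For reference, a quick sketch of the two metrics for binary outcomes y_i in {0,1} and predicted probabilities p_i (these mirror the sklearn functions used here):
import numpy as np
def log_loss_sketch(y, p, eps=1e-15):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
def brier_sketch(y, p):
    return np.mean((p - y) ** 2)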
End of explanation
plt.plot(np.linspace(0,1,101),rfm_calib.calib_func(np.linspace(0,1,101)))
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
Explanation: This next plot shows the calibration function that was estimated. If the original model had been perfectly calibrated, we would expect the calibration function to be y=x. Instead, the plot shows that the Random Forest scores overestimate the probability between 0 and 0.1 and underestimate it after that.
End of explanation
# Side by side histograms showing scores of positive vs negative cases
fig, axis = plt.subplots(1,2, figsize = (8,4))
ax=axis.flatten()
countvec0_test = ax[0].hist(test_res_uncalib[np.where(y_test==0)],bins=20,range=[0,1]);
countvec1_test = ax[1].hist(test_res_uncalib[np.where(y_test==1)],bins=20,range=[0,1]);
emp_prob_vec_test = countvec1_test[0]/(countvec0_test[0]+countvec1_test[0])
Explanation: Using histograms and bin counts, we can look at estimated empirical probabilities on the test set to see how well our calibration is working.
End of explanation
plt.plot(np.linspace(0,1,101),rfm_calib.calib_func(np.linspace(0,1,101)))
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(np.linspace(.025,.975,20), emp_prob_vec_test, 'g+')
Explanation: The below plot shows how well the calibrated probabilities fit the empirical (binned) probabilities on an independent test set.
End of explanation
clf_isotonic_xval = CalibratedClassifierCV(rfm, method='isotonic', cv=5)
clf_isotonic_xval.fit(X_train,y_train)
prob_pos_isotonic_xval = clf_isotonic_xval.predict_proba(X_test)[:, 1]
log_loss(y_test,prob_pos_isotonic_xval), log_loss(y_test,test_res_calib)
brier_score_loss(y_test,prob_pos_isotonic_xval), brier_score_loss(y_test,test_res_calib)
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(test_res_uncalib,prob_pos_isotonic_xval,'c.')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
plt.plot(np.linspace(0,1,101),rfm_calib.calib_func(np.linspace(0,1,101)),'b')
Explanation: Existing Sklearn Calibration Functionality
Note, sklearn has a CalibratedClassifierCV function, but it does not seem to work as well.
End of explanation
clf_sigmoid_xval = CalibratedClassifierCV(rfm, method='sigmoid', cv=5)
clf_sigmoid_xval.fit(X_train,y_train)
prob_pos_sigmoid_xval = clf_sigmoid_xval.predict_proba(X_test)[:, 1]
log_loss(y_test,prob_pos_sigmoid_xval), log_loss(y_test,test_res_calib)
brier_score_loss(y_test,prob_pos_sigmoid_xval), brier_score_loss(y_test,test_res_calib)
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(test_res_uncalib,prob_pos_sigmoid_xval,'c.')
plt.plot(np.linspace(.025,.975,20),emp_prob_vec_test,'g+')
Explanation: The CalibratedClassifierCV in sklearn averages the results of multiple calibrated models, resulting in significant variance which hampers its performance. By contrast, ML-Insights refits the full model, and computes one calibration on the entire cross-validated answer set.
For this reason, the log-loss is significantly worse (and brier score is worse as well) using the isotonic variant (though both are improved from the uncalibrated version)
End of explanation
gbm = GradientBoostingClassifier(n_estimators = 1000, max_depth=7, learning_rate=.02, random_state=942)
gbm_calib = mli.SplineCalibratedClassifierCV(gbm)
gbm_calib.fit(X_train,y_train)
test_res_uncalib_gbm = gbm_calib.uncalibrated_classifier.predict_proba(X_test)[:,1]
test_res_calib_gbm = gbm_calib.predict_proba(X_test)[:,1]
log_loss(y_test,test_res_uncalib_gbm)
log_loss(y_test,test_res_calib_gbm)
# Side by side histograms showing scores of positive vs negative cases
fig, axis = plt.subplots(1,2, figsize = (8,4))
ax=axis.flatten()
countvec0_test_gbm = ax[0].hist(test_res_uncalib_gbm[np.where(y_test==0)],bins=20,range=[0,1]);
countvec1_test_gbm = ax[1].hist(test_res_uncalib_gbm[np.where(y_test==1)],bins=20,range=[0,1]);
emp_prob_vec_test_gbm = countvec1_test_gbm[0]/(countvec0_test_gbm[0]+countvec1_test_gbm[0])
Explanation: The sigmoid variant assumes a strict parametric form for the calibration, which is not accurate in many cases; therefore, its performance is not very good.
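For reference, a sketch of the parametric form the "sigmoid" option fits (Platt scaling): a two-parameter logistic curve on the uncalibrated scores s, which is why it cannot bend to follow the empirical points.
import numpy as np
def platt_sketch(s, A, B):
    # p_calibrated = 1 / (1 + exp(A*s + B)), with A and B fitted constants
    return 1.0 / (1.0 + np.exp(A * s + B))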
Calibrating with Boosting Models
It is perhaps unsurprising that random forest vote percentages would need calibration since they are not intended to estimate the true probabilities. However, as we see below, even methods like boosting perform better if calibration is applied. (Warning: the next cell may take 30+ minutes to run. For a quicker demonstration, decrease max_depth and/or n_estimators.)
End of explanation
plt.plot(np.linspace(0,1,101),gbm_calib.calib_func(np.linspace(0,1,101)))
plt.plot(np.linspace(0,1,101),np.linspace(0,1,101),'k')
plt.plot(np.linspace(.025,.975,20), emp_prob_vec_test_gbm, 'g+')
Explanation: It is interesting to see that the boosting model has a very different calibration function.
End of explanation |
15,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now, I'm going to learn Beautiful Soup
Step1: A tag has a name (say, "someTag"). It contains a set of attribute
Step2: i.e. we can use .attrs to show the dictionary of a specified tag (attr, value).
Also, the things within the begin and the end tag (i.e. within $<b>$ and $</b>$) are strings
Step3: These strings are objects of the type NavigableString. i.e. we can do further actions such as to find its direct parent
Step4: or find its parent's parent(now we have the easiest situation, i.e. they have only one direct parent)
Step5: The most common multi-valued attribute is class
Step6: on the other hand, id is not a multi-valued attribute
Step8: Now, let's see a slightly complex situation
Step9: First of all, let's see the prettified structure
Step10: the "find_all" method, as well as tag.name, tag.attrs, tag.string, tag['attrName']
apparently, the above is not what I want. I actually would like to obtain all the tags which are labeled as "p". This can be achieved by taking advantage of the "find_all" method
Step11: move from unicode to ascii
Step12: So many spaces there. The spaces can be removed via stripping the strings
Step13: Well, we could also put all the stripped strings into a list
Step14: parents and descendants
Step15: We've successfully found the parent of the specified tag. This can be verified by seeing the structure obtained from the method soup.prettify().
Step16: The above result is understandable since from the method soup.prettify() we know already that $<html>$ and $<p>$ are the direct children of the parent $[document]$.
Now, let's see its descendants
Step18: Say, I'd like to get all the strings of all the "p" tags. How to do this? Let's see
Step19: generator and iterator
I now have a question about the type "generator". I'd like to better understand both the "generator" and "iterator" types in Python.
Step20: When Python executes the for loop, it first invokes the $iter()$ method of the container to get the iterator of the container. It then repeatedly calls the next() method (the __next__() method in Python 3.x) of the iterator until the iterator raises a StopIteration exception. Once the exception is raised, the for loop ends.
which means that a list is iterable. For more details, see: http://www.shutupandship.com/2012/01/understanding-python-iterables-and.html
Step21: Well, this I understand. The above is nothing but the concept of the iterator.
(https://wiki.python.org/moin/Generators)
Step22: I got it. In this way, the iterator can be built more easily. That's it. I think 1) the design pattern of the generator is simpler than the design pattern of the iterator, and 2) their behavior should be the same.
Now, since the string we have got has the special type NavigableString, we can find the direct parent of these special strings.
Step23: Now, let's see if we can print out all the strings of the site
Step24: or the stripped strings(unnecessary spaces are removed)
Step25: Siblings
Step26: back to the soup example. Let's find the siblings of the tag "a" in an iterating way.
Step27: apparently, it is a generator. So, we can do the following
Step28: learn
Step29: learn
Step30: learn
Step31: some very basic exercises about the pandas dataframe
Step32: the "get_text" method from Beautifulsoup
Step33: get to know the Python built-in methods "strip", "splitlines" and "split"
Step34: understand the regular expression in Python (re.match, re.sub, re.findall)
Step35: recall
Step36: the use of the built-in functions "split" and "join"
Step37: Now, let's do something slightly more serious
Step38: 31.10.2016
the "urllib2" package seems buggy and will request pages which are out of date. Let's use the "requests" (which uses the "urllib3") package instead. | Python Code:
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>',"html.parser")
tag = soup.b
type(tag)
Explanation: Now, I'm going to learn Beautiful Soup:
Tag:
End of explanation
print tag.name
print tag["class"]
print tag.attrs
Explanation: A tag has a name (say, "someTag"). It contains a set of attribute:value(s). The following is an example of the structure of a tag:
$\text{<someTag attr1="value" attr2="value1 value2"> A String </someTag>}$
End of explanation
print type(tag.string)
print tag.string
Explanation: i.e. we can use .attrs to show the dictionary of a specified tag (attr, value).
Also, the things within the begin and the end tag (i.e. within $<b>$ and $</b>$) are strings:
End of explanation
print tag.string.parent # a NavigableString obj
print unicode(tag.string.parent) # a unicode string
print repr(unicode(tag.string.parent)) # a unicode string (in repr(), if the string is "unicode" encoded, it will begin with u')
#check the types of the above stuff:
print type(tag.string.parent)
print type(unicode(tag.string.parent))
print type(repr(unicode(tag.string.parent)))
Explanation: These strings are objects of the type NavigableString. i.e. we can do further actions such as to find its direct parent:
End of explanation
print tag.string.parent.parent
Explanation: or find its parent's parent(now we have the easiest situation, i.e. they have only one direct parent):
End of explanation
css_soup = BeautifulSoup('<p class="body strikeout"></p>')
print css_soup.p['class']
# ["body", "strikeout"]
css_soup = BeautifulSoup('<p class="body strikeout"></p>', "lxml")
print css_soup.p['class']
Explanation: The most common multi-valued attribute is class:
End of explanation
id_soup = BeautifulSoup('<p id="my id"></p>')
id_soup.p['id']
# 'my id'
Explanation: on the other hand, id is not a multi-valued attribute:
End of explanation
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were</p>
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
soup = BeautifulSoup(html_doc, 'html.parser')
Explanation: Now, let's see a slightly complex situation:
End of explanation
print soup.prettify()
print soup.p
Explanation: First of all, let's see the prettified structure:
End of explanation
for tag in soup.find_all("p"):
print tag
print tag.name
print tag.attrs
print tag["class"]
print type(tag["class"][0])
print tag.string
print "==================================================================================================="
Explanation: the "find_all" method, as well as tag.name, tag.attrs, tag.string, tag['attrName']
apparently, the above is not what I want. I actually would like to obtain all the tags which are labeled as "p". This can be achieved by taking advantage of the "find_all" method:
End of explanation
for string in soup.strings:
print(repr(string))
print repr(string.encode("ascii"))
print
Explanation: move from unicode to ascii:
End of explanation
for string in soup.stripped_strings:
print(repr(string.encode("ascii")))
Explanation: So many spaces there. The spaces can be removed via stripping the strings:
End of explanation
[repr(string.encode("ascii")) for string in soup.stripped_strings]
print soup.prettify()
Explanation: Well, we could also put all the stripped strings into a list:
End of explanation
link = soup.a
print link
print
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
for parent in link.parents:
if parent is None:
print(parent)
else:
print(parent.name)
Explanation: parents and descendants
End of explanation
print soup.name
for child in soup.children:
print child
Explanation: We've successfully found the parent of the specified tag. This can be verified by seeing the structure obtained from the method soup.prettify().
End of explanation
print soup.body.name
print type(soup.body.descendants)
for child in soup.body.descendants:
print child
Explanation: The above result is understandable since from the method soup.prettify() we know already that $<html>$ and $<p>$ are the direct children of the parent $[document]$.
Now, let's see its descendants:
End of explanation
for single_tag in soup.find_all("p"):
for string in single_tag:
print string
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print soup.find_all("p")[0]
print soup.find_all("p")[0].get("class")
print soup.find_all('a')
for link in soup.find_all('a'):
print type(link)
print(link.get('href'))
Explanation: Say, I'd like to get all the strings of all the "p" tags. How to do this? Let's see:
End of explanation
a = [1, 2, 3, 4]
b=123
print type(a.__iter__)
print type(a.__init__)
print type(b.__init__)
print type(b.__iter__)
Explanation: generator and iterator
I now have a question about the type "generator". I'd like to better understand both the "generator" and "iterator" types in Python.
End of explanation
# Using the generator pattern (an iterable)
class firstn(object):
def __init__(self, n):
self.n = n
self.num, self.nums = 0, []
def __iter__(self):
return self
# Python 3 compatibility
def __next__(self): # Okay, I knew this. In Python3 one should be using __next__.
return self.next()
def next(self):
if self.num < self.n:
cur, self.num = self.num, self.num+1
return cur
else:
raise StopIteration()
print type(firstn(3))
for j in firstn(3):
print j
print
a=firstn(3)
for _ in range(3):
print a.next()
Explanation: When Python executes the for loop, it first invokes the $iter()$ method of the container to get the iterator of the container. It then repeatedly calls the next() method (the __next__() method in Python 3.x) of the iterator until the iterator raises a StopIteration exception. Once the exception is raised, the for loop ends.
which means that a list is iterable. For more details, see: http://www.shutupandship.com/2012/01/understanding-python-iterables-and.html
Let's quote the summary from that site (written by Praveen Gollakota):
If you define a custom container class, think about whether it should also be an iterable.
It is quite easy to make a class support the iterator protocol.
Doing so will make the syntax more natural.
If I can't recall what the above summary says or how to make a class iterable in the future, I'll visit that website again.
Now, let's continue. What is a generator in Python?
End of explanation
def firstn(n):
num = 0
while num < n:
yield num
num += 1
a=firstn(3)
for a in firstn(3):
print a
Explanation: Well, this I understand. The above is nothing but the concept of the iterator.
(https://wiki.python.org/moin/Generators)
Python provides generator functions as a convenient shortcut to building iterators. Let us rewrite the above iterator as a generator function:
End of explanation
print [soup.find_all("p")[j].string for j in range(3)][1].parent.parent
Explanation: I got it. In this way, the iterator can be built more easily. That's it. I think 1) the design pattern of the generator is simpler than the design pattern of the iterator, and 2) their behavior should be the same.
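As a small side note (a sketch), the same kind of generator can be written even more compactly as a generator expression:
gen = (num for num in range(3))
print list(gen) # [0, 1, 2]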
Now, since the string we have got has the special type NavigableString, we can find the direct parent of these special strings.
End of explanation
for string in soup.strings:
print(repr(string))
print((string))
print(type(string))
print"======================================"
Explanation: Now, let's see if we can print out all the strings of the site:
End of explanation
for string in soup.stripped_strings:
print(repr(string))
Explanation: or the stripped strings(unnecessary spaces are removed)
End of explanation
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>","lxml")
print(sibling_soup.prettify())
print sibling_soup.b.next_sibling
print sibling_soup.b.previous_sibling
print sibling_soup.c.next_sibling
print sibling_soup.c.previous_sibling
Explanation: Siblings:
End of explanation
print type(soup.a.next_siblings)
Explanation: back to the soup example. Let's find the siblings of the tag "a" in an iterating way.
End of explanation
for sibling in soup.a.next_siblings:
print(repr(sibling))
Explanation: apparently, it is a generator. So, we can do the following:
End of explanation
a=["aaa","bbb","aac","caa","def"]
print 'aa' in 'aaa'
print map(lambda x:'aa' in x, a)
print filter(lambda x:'aa' in x, a)
Explanation: learn: the function map and filter
End of explanation
print reduce(lambda x, y: x+y, [1, 2, 3, 4, 5])
final_site_list = ['http://www.indeed.com/jobs?q=%22','data+scientist', '%22&l=', 'New+York']
print reduce(lambda x,y: x+y,final_site_list)
print "".join(final_site_list)
Explanation: learn: the function reduce
End of explanation
a=Counter({"a":1,"b":3})
print a
print a["b"]
cnt=Counter()
cnt.update(['red', 'blue', 'red', 'green', 'blue', 'blue'])
print cnt
cnt = Counter()
for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:
cnt[word] += 1
print cnt
cnt.update(['red', 'blue', 'red', 'green', 'blue', 'blue'])
print cnt
print cnt.items()
print type(cnt.items())
Explanation: learn: the usage of Counter:
End of explanation
frame=pd.DataFrame(cnt.items(), columns = ['color', 'numColor'])
ax=frame.plot(x = 'color', kind = 'bar', legend = None,color='green')
ax.set_xlabel("color")
ax.set_ylabel("color counts")
ax.set_ylim(0,7)
np.asarray([[1,2,3,4],[5,6,7,8]])
frame=pd.DataFrame(np.asarray([[1,2,3,4],[5,6,7,8]]), columns = ['01', '02','03','04'])
ax=frame.plot( kind = 'bar')
ax.set_ylim(0,10)
fig = ax.get_figure()
fig.savefig("tmp.svg")
frame.tail()
Explanation: some very basic exercises about the pandas dataframe:
It accepts NumPy arrays or dictionary-like inputs. Say, let's import data from a dictionary:
End of explanation
soup.get_text("|",strip=True)
Explanation: the "get_text" method from Beautifulsoup:
End of explanation
# splitlines:
# (code example from "https://www.tutorialspoint.com/python/string_splitlines.htm)
str = "Line1-a b c d e f\nLine2- a b c\n\nLine4- a b c d";
print str.splitlines( ) # the line will break into lines according to the line break \n
print str.splitlines(1) # line breaks will be included within the split strings
# strip:
# (code example from "https://www.tutorialspoint.com/python/string_strip.htm")
print repr("0000000this is string example....\nwow!!!0000000".strip('0')) # the chars"0" at the both ends
# of the string will be removed.
print repr(" 0000000this is string example....\nwow!!!0000000 ".strip()) # the empty spaces will be removed.
print '1,,2'.split(',')
print '1,,2 345'.split()
Explanation: get to know the Python built-in methods "strip", "splitlines" and "split":
End of explanation
print "-----------------------------------"
print "tests of 're.match':"
print "-----------------------------------"
m=re.match(r'(bcd){2}',"bcdbcd")
print "re:(bcd){2} string: bcdbcd","match:",repr(m.group())
m=re.match(r'[a-zA-Z][3]{2}',"a33")
print "re:[a-zA-Z][3]{2} string: a33","match:",repr(m.group())
m=re.match(r'[a-zA-Z].+3',"f42312d")
print repr(m.group())
print "re:[a-zA-Z].+3 string: f42312d","match:",repr(m.group())
m = re.match(r"(\d+b)(\d{3})", "24b1632")
print "re:(\d+b)(\d{3}) string: 24b1632","match:",repr(m.group())
print "m.groups():",m.groups() # according to the parenthesis in re, the string will be split into different groups.
print "-----------------------------------"
print "tests of 're.match' with try&catch:"
print "-----------------------------------"
try:
m=re.match(r'(d3.js)',">")
print repr(m.group())
except AttributeError:
print "the re and the string does not match!"
except Exception: # catch Exception if AttributeError is not the cause
print "what's happening there?"
try:
m=re.match(r'(d3.js)',">","123454321")
print repr(m.group())
except AttributeError:
print "the re and the string does not match!"
except Exception: # catch Exception if AttributeError is not the cause
print "Oops, something wrong!"
print "-----------------------------------"
print "tests of 're.sub':"
print "-----------------------------------"
print "re:\d{2}.* string: 11 2 3 123 abc cde replacement: 00","\nresult:",re.sub(r"\d{2}.*","00", "11 2 3 123 abc cde\n")
print "re:\d{2} string: 11 2 3 123 abc cde replacement: 00","\nresult:",re.sub(r"\d{2}","00", "11 2 3 123 abc cde\n")
# the following line will remove any element of the string
# which is not within this list: [any alphabets(case irrelevant), ., 3, +]
print "re:[^a-zA-Z.3+] string: #c--d++e**1234.5 replacement: '' ","\nresult:",re.sub(r'[^a-zA-Z.3+]',"", "#c--d++e**1234.5\n")
print "-----------------------------------"
print "tests of 're.findall':"
print "-----------------------------------"
print repr(re.findall(r'\d+',"Jobs 1 to 10 of 382"))
Explanation: understand the regular expression in Python (re.match, re.sub, re.findall):
(https://www.tutorialspoint.com/python/python_reg_expressions.htm)
End of explanation
foo = [2, 18, 9, 22, 17, 24, 8, 12, 27]
print filter(lambda x: x % 3 == 0, foo) # from python official doc:
# filter(function, iterable)
# is equivalent to [item for item in iterable if function(item)]
print map(lambda x: x * 2 + 10, foo)
print reduce(lambda x, y: x + y, foo)
print sum(foo)
Explanation: recall:lambda function (http://www.secnetix.de/olli/Python/lambda_functions.hawk)
End of explanation
a = "Free your mind."
b = "Welcome to the desert... of the real."
c = "What is real? How do you define real?"
print(a)
print(a.split())
print
print(b)
print(b.split("o"))
print
print(c)
print(c.split(" ", 4))
print
print '+'.join("abc")
print '+'.join(["a","b","c"])
Explanation: the use of the built-in functions "split" and "join":
End of explanation
def skills_dict(doc_frequency):
prog_lang_dict = Counter({'R':doc_frequency['r'], 'Python':doc_frequency['python'],
'Java':doc_frequency['java'], 'C++':doc_frequency['c++'],
'Ruby':doc_frequency['ruby'],
'Perl':doc_frequency['perl'], 'Matlab':doc_frequency['matlab'],
'JavaScript':doc_frequency['javascript'], 'Scala': doc_frequency['scala']})
analysis_tool_dict = Counter({'Excel':doc_frequency['excel'], 'Tableau':doc_frequency['tableau'],
'D3.js':doc_frequency['d3.js'], 'SAS':doc_frequency['sas'],
'SPSS':doc_frequency['spss'], 'D3':doc_frequency['d3']})
hadoop_dict = Counter({'Hadoop':doc_frequency['hadoop'], 'MapReduce':doc_frequency['mapreduce'],
'Spark':doc_frequency['spark'], 'Pig':doc_frequency['pig'],
'Hive':doc_frequency['hive'], 'Shark':doc_frequency['shark'],
'Oozie':doc_frequency['oozie'], 'ZooKeeper':doc_frequency['zookeeper'],
'Flume':doc_frequency['flume'], 'Mahout':doc_frequency['mahout']})
database_dict = Counter({'SQL':doc_frequency['sql'], 'NoSQL':doc_frequency['nosql'],
'HBase':doc_frequency['hbase'], 'Cassandra':doc_frequency['cassandra'],
'MongoDB':doc_frequency['mongodb']})
overall_total_skills = prog_lang_dict + analysis_tool_dict + hadoop_dict + database_dict # Combine our Counter objects
return overall_total_skills
def text_cleaner(url):
try:
session = requests.Session()
soup = BeautifulSoup(session.get(url, timeout=5).content, 'lxml') # let our beautiful soup to parse the site
except:
print "connection error or something wrong. URL=",url
return
for script in soup(["script", "style"]): # Remove these two unnecessary tags: "script" and "style"
_=script.extract()
stopwords = nltk.corpus.stopwords.words('english') # a list of words which are not important
# we will ignore these words if they show up in the context
text=soup.get_text(" ",strip=True)
text=re.sub(r"[^a-zA-Z.3+]"," ",text) # preserve . and 3 for "d3.js". Also, preserve "+" for "c++"
content=[w.strip(".") for w in text.lower().split() if w not in stopwords] # remove any "." if it's contained
# at the begin or the end of the string
return content
def skills_info(city = None, state = None):
city_title = city
if city is None:
city_title = 'Nationwide'
final_site_list = ['http://www.indeed.com/jobs?q=%22','data+scientist', '%22&l=', city_title,
'%2C+', state]
final_site = "".join(final_site_list)
base_URL = "http://www.indeed.com"
print final_site
try:
session = requests.Session()
soup = BeautifulSoup(session.get(final_site).content,"lxml")
except:
print "connection error or something wrong. URL=",final_site
return
print soup.find(id = "searchCount")
num_jobs_area=soup.find(id = "searchCount").string
job_numbers = re.findall("\d+", num_jobs_area)
if len(job_numbers) > 3: # Have a total number of jobs greater than 1000
total_num_jobs = (int(job_numbers[2])*1000) + int(job_numbers[3])
else:
total_num_jobs = int(job_numbers[2])
if(total_num_jobs%10==0):
num_pages = total_num_jobs/10
else:
num_pages = 1+total_num_jobs/10
print "num_pages=",num_pages
job_descriptions = [] # store all our descriptions in this list
for i in range(num_pages): # loop through all of our search result pages
#for i in (0,):
start_num = str(i*10) # assign the multiplier of 10 to view the pages we want
current_page = "".join([final_site, "&start=", start_num])
print "Getting page", i,"start_num=",start_num
print current_page
job_link_area = BeautifulSoup(session.get(current_page).content,"lxml") # locate all of the job links within the <body> area
#join the URL base and the tail part of the URL using urlparse package:
job_URLs=[urlparse.urljoin(base_URL,link.a.get('href')) for link in job_link_area.select( 'h2[class="jobtitle"]')]
print job_URLs,len(job_URLs)
for URL in job_URLs:
final_description = text_cleaner(URL)
job_descriptions.append(final_description)
sleep(1) # so that we don't be jerks. If you have a very fast internet connection you could hit the server a lot!
doc_frequency=Counter()
for item in job_descriptions:
doc_frequency.update(item) # add all the words to the counter table and count the frequency of each words
#print doc_frequency.most_common(10)
print 'Done with collecting the job postings!'
print 'There were', len(job_descriptions), 'jobs successfully found.'
# Obtain our key terms and store them in a dict. These are the key data science skills we are looking for
overall_total_skills=skills_dict(doc_frequency)
final_frame = pd.DataFrame(overall_total_skills.items(), columns = ['Term', 'NumPostings']) # Convert these terms to a
# dataframe
# Change the values to reflect a percentage of the postings
final_frame.NumPostings = (final_frame.NumPostings)*100/len(job_descriptions) # Gives percentage of job postings
# having that term
# Sort the data for plotting purposes
final_frame.sort_values('NumPostings', ascending = False, inplace = True)
print final_frame
today = datetime.date.today()
# Get it ready for a bar plot
final_plot = final_frame.plot(x = 'Term', kind = 'bar', legend = None,
title = 'Percentage of Data Scientist Job Ads with a Key Skill, '+city_title+', '+str(today))
final_plot.set_ylabel('Percentage Appearing in Job Ads')
fig = final_plot.get_figure() # Have to convert the pandas plot object to a matplotlib object
fig.savefig(city_title+".pdf")
#return fig,final_frame
def skills_info_TW104():
final_site_list = ['https://www.104.com.tw/jobbank/joblist/joblist.cfm?jobsource=n104bank1&ro=0&keyword=','data+scientist',
'&excludeCompanyKeyword=醫藥+生物+生技+微脂體','&order=2&asc=0','&page=','1']
final_site = "".join(final_site_list)
print final_site
base_URL = "https://www.104.com.tw/"
country="Taiwan"
try:
session = requests.Session()
soup = BeautifulSoup(session.get(final_site).content,"lxml")
except:
print "connection error or something wrong. URL=",final_site
return
#print soup.find(class_="joblist_bar")
num_jobs_area=soup.select('li[class="right"]')
#print type(num_jobs_area)
#print num_jobs_area[0]
total_num_jobs = int( re.findall("\d+", str(num_jobs_area[0]))[0] )
print "num_jobs=",total_num_jobs
if(total_num_jobs%20)==0:
num_pages = total_num_jobs/20
else:
num_pages=1+total_num_jobs/20
print "num_pages=",num_pages
job_descriptions = [] # store all our descriptions in this list
for i in range(1,num_pages+1): # loop through all of our search result pages
#for i in (1,):
start_num = str(i)
final_site_list = final_site_list[:-1]
final_site = "".join(final_site_list)
current_page = "".join([final_site, start_num])
print "Getting page", i
print current_page
job_link_area = BeautifulSoup(session.get(current_page).content,"lxml") # locate all of the job links within the <body> area
#join the URL base and the tail part of the URL using urlparse package:
job_URLs=[urlparse.urljoin(base_URL,link.a.get('href')) for link in job_link_area.select('div[class="jobname_summary job_name"]')]
print job_URLs,len(job_URLs)
for URL in job_URLs:
final_description = text_cleaner(URL)
job_descriptions.append(final_description)
sleep(1) # so that we don't be jerks. If you have a very fast internet connection you could hit the server a lot!
doc_frequency=Counter()
for item in job_descriptions:
doc_frequency.update(item) # add all the words to the counter table and count the frequency of each words
#print doc_frequency.most_common(10)
print 'Done with collecting the job postings!'
print 'There were', len(job_descriptions), 'jobs successfully found.'
# Obtain our key terms and store them in a dict. These are the key data science skills we are looking for
overall_total_skills=skills_dict(doc_frequency)
final_frame = pd.DataFrame(overall_total_skills.items(), columns = ['Term', 'NumPostings']) # Convert these terms to a
# dataframe
# Change the values to reflect a percentage of the postings
final_frame.NumPostings = (final_frame.NumPostings)*100/len(job_descriptions) # Gives percentage of job postings
# having that term
# Sort the data for plotting purposes
final_frame.sort_values('NumPostings', ascending = False, inplace = True)
print final_frame
today = datetime.date.today()
# Get it ready for a bar plot
final_plot = final_frame.plot(x = 'Term', kind = 'bar', legend = None,
title = 'Percentage of Data Scientist Job Ads with a Key Skill, '+country+', '+str(today))
final_plot.set_ylabel('Percentage Appearing in Job Ads')
fig = final_plot.get_figure() # Have to convert the pandas plot object to a matplotlib object
fig.savefig(country+".pdf")
skills_info_TW104()
Explanation: Now, let's do something slightly more serious:
Remark: the following web-crawling functions were originally written by Dr. Steinweg-Woods (https://jessesw.com/Data-Science-Skills/). His code was out of date and had some issues; I have fixed several of them.
End of explanation
skills_info(city = 'San Francisco', state = 'CA')
skills_info(city = 'New York', state = 'NY')
Explanation: 31.10.2016
the "urllib2" package seems buggy and keeps returning pages that are out of date. Let's use the "requests" package (which is built on "urllib3") instead.
End of explanation |
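Added sketch building on the remark above: a slightly more defensive download helper with an explicit timeout, a User-Agent header and simple retries. The header string, retry count and sleep interval are illustrative assumptions, not values taken from the original notebook.
import requests
from time import sleep
def get_page(url, retries=3, timeout=5):
    session = requests.Session()
    session.headers.update({"User-Agent": "Mozilla/5.0 (job-skills notebook)"})
    for attempt in range(retries):
        try:
            response = session.get(url, timeout=timeout)
            response.raise_for_status() # raise on HTTP error status codes
            return response.content
        except requests.RequestException as err:
            print("attempt {} failed: {}".format(attempt + 1, err))
            sleep(2) # brief pause before retrying
    return None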
15,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Least Squares
Step1: OLS estimation
Artificial data
Step2: Our model needs an intercept so we add a column of 1s
Step3: Fit and summary
Step4: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples
Step5: OLS non-linear curve but linear in parameters
We simulate artificial data with a non-linear relationship between x and y
Step6: Fit and summary
Step7: Extract other quantities of interest
Step8: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
Step9: OLS with dummy variables
We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
Step10: Inspect the data
Step11: Fit and summary
Step12: Draw a plot to compare the true relationship to OLS predictions
Step13: Joint hypothesis test
F test
We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups
Step14: You can also use formula-like syntax to test hypotheses
Step15: Small group effects
If we generate artificial data with smaller group effects, the T test can no longer reject the Null hypothesis
Step16: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
Step17: Fit and summary
Step18: Condition number
One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length
Step19: Then, we take the square root of the ratio of the biggest to the smallest eigen values.
Step20: Dropping an observation
Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates
Step21: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
Step22: In general we may consider DBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations | Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
Explanation: Ordinary Least Squares
End of explanation
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
Explanation: OLS estimation
Artificial data:
End of explanation
X = sm.add_constant(X)
y = np.dot(X, beta) + e
Explanation: Our model needs an intercept so we add a column of 1s:
End of explanation
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
Explanation: Fit and summary:
End of explanation
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
Explanation: Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:
End of explanation
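A few more commonly used quantities (added illustration; the attribute names follow the statsmodels results API already used above):
print('Standard errors: ', results.bse)
print('p-values: ', results.pvalues)
print('95% confidence intervals:\n', results.conf_int())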
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
Explanation: OLS non-linear curve but linear in parameters
We simulate artificial data with a non-linear relationship between x and y:
End of explanation
res = sm.OLS(y, X).fit()
print(res.summary())
Explanation: Fit and summary:
End of explanation
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
Explanation: Extract other quantities of interest:
End of explanation
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
Explanation: Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
End of explanation
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
#dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = sm.categorical(groups, drop=True)
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)
beta = [1., 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
Explanation: OLS with dummy variables
We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
End of explanation
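As an aside (added sketch, not from the original notebook): pandas.get_dummies is a common alternative to sm.categorical for building the dummy matrix, with drop_first=True playing the role of omitting the benchmark category.
import pandas as pd
dummy_df = pd.get_dummies(groups, prefix="group", drop_first=True)
print(dummy_df.head())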
print(X[:5,:])
print(y[:5])
print(groups)
print(dummy[:5,:])
Explanation: Inspect the data:
End of explanation
res2 = sm.OLS(y, X).fit()
print(res2.summary())
Explanation: Fit and summary:
End of explanation
prstd, iv_l, iv_u = wls_prediction_std(res2)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
Explanation: Draw a plot to compare the true relationship to OLS predictions:
End of explanation
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
Explanation: Joint hypothesis test
F test
We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups:
End of explanation
print(res2.f_test("x2 = x3 = 0"))
Explanation: You can also use formula-like syntax to test hypotheses
End of explanation
beta = [1., 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
Explanation: Small group effects
If we generate artificial data with smaller group effects, the F test can no longer reject the null hypothesis:
End of explanation
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
Explanation: Multicollinearity
The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
End of explanation
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
Explanation: Fit and summary:
End of explanation
norm_x = X.values
for i, name in enumerate(X):
if name == "const":
continue
norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T,norm_x)
Explanation: Condition number
One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
End of explanation
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
Explanation: Then, we take the square root of the ratio of the biggest to the smallest eigen values.
End of explanation
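As a cross-check (added), numpy can compute the 2-norm condition number of the normalized design matrix directly; it equals the square root of the eigenvalue ratio computed above.
print(np.linalg.cond(norm_x))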
ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()
print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))
Explanation: Dropping an observation
Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
End of explanation
infl = ols_results.get_influence()
Explanation: We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
End of explanation
2./len(X)**.5
print(infl.summary_frame().filter(regex="dfb"))
Explanation: In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations
End of explanation |
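Added sketch: flag the observations whose DFBETAS exceed the $2/\sqrt{N}$ rule of thumb for at least one coefficient.
dfbetas = infl.summary_frame().filter(regex="dfb")
threshold = 2. / len(X)**.5
print(dfbetas[(dfbetas.abs() > threshold).any(axis=1)])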
15,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
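Purely as an illustration (added; the description below is a made-up placeholder, not a real model description), a completed cell for property 1.1 would look something like the commented lines here:
# Hypothetical example only:
# DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# DOC.set_value("NPZD-type ocean biogeochemistry component coupled to the host ocean model.")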
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
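Added note: this is a cardinality 1.N property, so several values can be recorded. The tracer names below are placeholders, and the idea that repeated DOC.set_value calls accumulate values is an assumption based on the notebook's own "Set as follows" hint rather than documented pyesdoc behaviour:
# Hypothetical illustration only:
# DOC.set_value("Dissolved Inorganic Carbon")
# DOC.set_value("Alkalinity")
# DOC.set_value("Nitrate")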
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
15,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loops
Loops let you iterate quickly, over and over, across a data structure. In Python there are two kinds of loop
Step1: If we need to visit every single element of the list, it is easier to use a for loop.
for loops.
The for statement walks through the elements of any ordered sequence one by one, in order. The syntax of a for-in block is as follows
Step2: while loops.
The while statement sets up a loop that repeats a set of instructions for as long as a given condition holds. The syntax of a while block is as follows | Python Code:
# before anything else, create a few list variables to play with
numeros = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(numeros)
# note this new way of building a list from a string:
# the .split() method lets us "break" a string into a list of strings
# when we use .split() with no arguments, we are tokenizing a text string
semana = "lunes martes miercoles jueves viernes sabado domingo".split()
print(semana)
oracion = "Green colorless ideas sleep furiously".split()
print(oracion)
# however, we can specify a substring as the separator
emails = "[email protected]; [email protected]; [email protected]".split("; ")
print(emails)
# we can access the elements of the list through indices
print("El tercer número de mi lista es", numeros[2])
print("Y el último email es", emails[-1])
Explanation: Loops
Loops let you iterate quickly, over and over, across a data structure. In Python there are two kinds of loop: for and while.
End of explanation
for numero in numeros:
print("Voy por el numero", numero)
for dia in semana:
print("Me gusta el", dia)
cajon = ["una bicicleta", 234, "el número pi", 23, "un libro", "otro libro"]
# add a coffee to my list of things
cajon.append("un café")
# remove the second element
cajon.pop(1)
cajon.remove("un libro")
print("Tengo", len(cajon), "cosas guardadas en un cajón:")
for elemento in cajon:
print("-", elemento)
alumnos = "Pepito:Raúl:Ana:Antonio:María".split(":")
print(alumnos)
for alumno in alumnos:
if alumno == "Paco":
print("Paco ha venido a clase")
else:
print("Pero", alumno, "sí ha venido")
alumnos = "Pepito:Raúl:Ana:Antonio:María".split(":")
#haVenidoPaco = False
for alumno in alumnos:
if alumno == "Paco":
haVenidoPaco = True
#if haVenidoPaco == True:
if haVenidoPaco:
print("Es verdad que Paco ha venido")
else:
print("Parece que Paco NO ha venido hoy")
if "Paco" in alumnos:
print("Es verdad que Paco ha venido")
else:
print("Parece que Paco NO ha venido hoy")
for letra in "abcdefghijklmnopqrstuvwxyz":
print(letra)
d = {"clave1": 1, "clave2":2, "clave3":3 }
for elemento in d:
print(elemento)
print("------------------------")
for clave in d.keys():
print(clave)
print("------------------------")
for valor in d.values():
print(valor)
print("------------------------")
for k, v in d.items():
print(k, "guarda el valor", v)
Explanation: If we need to visit every single element of the list, it is easier to use a for loop.
for loops.
The for statement walks through the elements of any ordered sequence one by one, in order. The syntax of a for-in block is as follows:
for ELEMENT in SEQUENCE:
# run the instructions as many times as there are elements in the sequence
INSTRUCTIONS
The keywords for and in are mandatory. SEQUENCE can be any data structure that is an ordered sequence (for example, strings, lists and tuples). ELEMENT is the name we give to each element of SEQUENCE; bear in mind that ELEMENT takes a different value on each pass of the loop.
End of explanation
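An extra sketch (added to the original notebook) reusing the semana list defined earlier: enumerate() also gives you the position of each element, and range() produces a sequence of numbers to loop over.
for position, dia in enumerate(semana):
    print(position, dia)
for i in range(3):
    print("iteration", i)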
numero = 1
limite = 5
while numero <= limite:
print("El número", numero, "es menor o igual que", limite)
numero = numero + 1
Explanation: while loops.
The while statement sets up a loop that repeats a set of instructions for as long as a given condition holds. The syntax of a while block is as follows:
while CONDITION:
# run the instructions below while CONDITION is true
INSTRUCTIONS
We will hardly use it, but the following example illustrates how it works.
End of explanation |
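One more added sketch: break lets you leave a while loop early, which is a simple way to guard against accidental infinite loops.
attempts = 0
while True:
    attempts = attempts + 1
    if attempts >= 3:
        break
print("stopped after", attempts, "attempts")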
15,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Runtime Cythonize in QuTiP
Prepared for EuroSciPy 2019
Alex Pitchford ([email protected])
Two types of time-dependent dynamics solving
'function' type and 'string' type
pros and cons to both
string type is a key differentiating feature of QuTiP
recently much improved
scope for further enhancement
Ordinary differential equation
of the form
$$ \frac{\textrm{d} x}{\textrm{d} t} = L(t) \; x $$
$x$ -- vector
Step1: Time dependent control functions
Step2: Hamiltonians, initial state and measurements
Step3: Solving the dynamics
Step4: Function type
Step5: String type
Step6: Comparing execution times | Python Code:
# Imports and utility functions
import time
import numpy as np
import matplotlib.pyplot as plt
from qutip.sesolve import sesolve
from qutip.solver import Options, solver_safe
from qutip import sigmax, sigmay, sigmaz, identity, tensor, basis, Bloch
def timing_val(func):
def wrapper(*arg, **kw):
'''source: http://www.daniweb.com/code/snippet368.html'''
t1 = time.time()
res = func(*arg, **kw)
t2 = time.time()
return res, (t2 - t1), func.__name__
return wrapper
def plot_exp(tlist, expects, lbls, title):
fig = plt.figure()
ax = fig.add_subplot(111)
for i, e in enumerate(expects):#
ax.plot(tlist, e, label=lbls[i])
ax.set_xlabel(r"$t$")
ax.set_title(title)
ax.legend()
Explanation: Runtime Cythonize in QuTiP
Prepared for EuroSciPy 2019
Alex Pitchford ([email protected])
Two types of time-dependent dynamics solving
'function' type and 'string' type
pros and cons to both
string type is a key differentiating feature of QuTiP
recently much improved
scope for further enhancement
Ordinary differential equation
of the form
$$ \frac{\textrm{d} x}{\textrm{d} t} = L(t) \; x $$
$x$ -- vector:
state $\newcommand{\ket}[1]{\left|{#1}\right\rangle} \ket{\psi}$
or vectorised density matrix $\vec{\rho}$
or $x$ -- matrix:
unitary transformation $U$
or dynamical map $M$
and $L$ -- sparse matrix that drives the dynamics.
In Python these are Qobj
wrapper to fast_csr_matrix -- custom to QuTiP.
Time dependent dynamics generator
$$ L(t) = L_0 + g_1(t) \, L_1 + g_2(t) \, L_2 + \ldots $$
$L(t)$ defined in Python as list of matrices
and (optionally) time-dependent functions
L = [L0, [L1, g1], [L2, g2] ....]
L -- Qobj
g -- Python function, e.g. def g(t, args): return np.cos(args['w']*t)
or string e.g. "cos(w*t)"
Function vs string type time-dependence
| Python function | string type |
| :-: | :-: |
| Python interpreter | Compiled C++ |
| slower execution | Compile time overhead |
Note: C++ compiler required at runtime for string type. Non-standard in MS Windows
String type compilation
Dynamic generation of Cython code .pyx file
right hand side of ODE cqobjevo_compiled_coeff_<hash>.pyx
Dynamic Python to import time-dependent QobjEvo
exec(compile('from ' + filename + ' import CompiledStrCoeff', '<string>', 'exec'), locals())
Triggers compilation
Temporary file .pyx deleted
Compiled C++ time-dependent QobjEvo can be reused
Example - control coupled qubits
Unitary dynamics - Schrödinger's equation
$$ \frac{\textrm{d} \ket{\psi}}{\textrm{d} t} = -\textrm{i} \, H(t) \, \ket{\psi} $$
$H(t)$ -- combined Hamiltonian of the system
$$ H(t) = \omega_1 \, H_{\textrm{d}}^{(1)} + \omega_2 \, H_{\textrm{d}}^{(2)} + \omega_{\textrm{i}} \, H_{\textrm{i}} + g_1(t) \, H_{\textrm{c}}^{(1)} + g_2(t) \, H_{\textrm{c}}^{(2)} $$
End of explanation
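To make the list format above concrete, here is a minimal sketch (not from the original slides) of a single-qubit Hamiltonian specified once with a 'function' coefficient and once with a 'string' coefficient; the operator names and the argument w are illustrative only.
import numpy as np
from qutip import sigmaz, sigmax, QobjEvo

H0 = sigmaz()   # static part
H1 = sigmax()   # driven part

# 'function' type: evaluated by the Python interpreter at each time step
def g(t, args):
    return np.cos(args['w'] * t)

H_func = QobjEvo([H0, [H1, g]], args={'w': 1.0})

# 'string' type: generated Cython code, compiled to C++ at runtime
H_str = QobjEvo([H0, [H1, "cos(w*t)"]], args={'w': 1.0})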
def g_sine(t, args):
return args['A_s']*np.sin(args['w_s']*t)
def g_decay(t, args):
return args['A_d']*np.exp(-t/args['t_d'])
g_sine_str = "A_s*sin(w_s*t)"
g_decay_str = "A_d*exp(-t/t_d)"
t_tot = 10.0
w_1 = 0.3
w_2 = 0.3
w_i = 0.02
A_s = 0.3
A_d = 0.3
w_s = np.pi/t_tot
t_d = 5.0
tlist = np.linspace(0.0, t_tot, 200)
args = {'A_s': A_s, 'A_d': A_d, 'w_s': w_s, 't_d': t_d}
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(tlist, g_sine(tlist, args), label=r"$g_{\rm{sin}}$")
ax.plot(tlist, g_decay(tlist, args), label=r"$g_{\rm{decay}}$")
ax.set_xlabel(r"$t$")
ax.set_title("Control functions")
ax.legend()
Explanation: Time dependent control functions
End of explanation
Id2 = identity(2)
Sz1 = tensor(sigmaz(), Id2)
Sz2 = tensor(Id2, sigmaz())
Sx1 = tensor(sigmax(), Id2)
Sy1 = tensor(sigmay(), Id2)
Sx2 = tensor(Id2, sigmax())
Sy2 = tensor(Id2, sigmay())
H_d1 = w_1*Sz1
H_d2 = w_2*Sz2
H_c1 = Sx1
H_c2 = Sy2
H_i = w_i*tensor(sigmaz(), sigmaz())
H_func_type = [H_d1, H_d2, [H_c1, g_sine], [H_c2, g_decay], H_i]
H_str_type = [H_d1, H_d2, [H_c1, g_sine_str], [H_c2, g_decay_str], H_i]
up_state = basis(2, 0)
b = Bloch()
b.add_states(up_state)
b.show()
init_state = tensor(up_state, up_state)
meas = [Sz1, Sx1, Sz2, Sx2]
Explanation: Hamiltonians, initial state and measurements
End of explanation
@timing_val
def repeat_solve(H, init_state, tlist, num_reps=1,
e_ops=None, args=None, options=None):
if options is None:
options = Options()
out = sesolve(H, init_state, tlist,
e_ops=meas, args=args, options=options)
if num_reps > 1:
options.rhs_reuse = True
tl = np.array([0, tlist[-1]])
for i in range(num_reps - 1):
sesolve(H, init_state, tl,
e_ops=meas, args=args, options=options)
return out
Explanation: Solving the dynamics
End of explanation
n_reps = 1
out, t_func, fname = repeat_solve(H_func_type, init_state, tlist, num_reps=n_reps,
e_ops=meas, args=args)
print("{} execution of func type took {} seconds.".format(n_reps, t_func))
# Plot qubit 1 expectations
plot_exp(tlist, out.expect[:2], lbls=["E[Z]", "E[X]"],
title="Qubit 1 - func type")
Explanation: Function type
End of explanation
n_reps = 1
out, t_func, fname = repeat_solve(H_str_type, init_state, tlist, num_reps=n_reps,
e_ops=meas, args=args)
print("{} execution of string type took {} seconds.".format(n_reps, t_func))
# Plot qubit 1 expectations
plot_exp(tlist, out.expect[:2], lbls=["E[Z]", "E[X]"],
title="Qubit 1 - string type")
Explanation: String type
End of explanation
n_rep_list = [1, 2, 5, 10, 20] #, 100] #, 1000, 20000, 100000]
# n_rep_list = [100, 1000, 20000, 100000, 500000]
t_per_exec_f = []
t_per_exec_s = []
H_zero = H_i*0.0
for i, n_reps in enumerate(n_rep_list):
out, t_func, fname = repeat_solve(H_func_type, init_state, tlist,
num_reps=n_reps,
e_ops=meas, args=args)
t_per_exec_f.append(t_func / n_reps)
#print("{} execution of func type took {} seconds.".format(n_reps, t_func))
# twisted method of making the code change to force new hash and
# hence recompile
key = 'nreps{}'.format(i)
args[key] = n_reps
H = list(H_str_type)
H.append([H_zero, key])
out, t_func, fname = repeat_solve(H, init_state, tlist,
num_reps=n_reps,
e_ops=meas, args=args)
#print("{} execution of string type took {} seconds.".format(n_reps, t_func))
t_per_exec_s.append(t_func / n_reps)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(n_rep_list, t_per_exec_f, 'o', label="func type")
ax.plot(n_rep_list, t_per_exec_s, 'x', label="string type")
ax.set_xlabel(r"$N_{\rm{reps}}$")
ax.set_ylabel("exec time per rep")
ax.set_title("Comparing Method Exec Time")
ax.legend()
n_rep_list = [1000, 5000, 10000, 15000, 20000, 50000, 100000]
t_per_exec_f = []
t_per_exec_s = []
H_zero = H_i*0.0
for i, n_reps in enumerate(n_rep_list):
out, t_func, fname = repeat_solve(H_func_type, init_state, tlist,
num_reps=n_reps,
e_ops=meas, args=args)
t_per_exec_f.append(t_func / n_reps)
#print("{} execution of func type took {} seconds.".format(n_reps, t_func))
# twisted method of making the code change to force new hash and
# hence recompile
key = 'nreps{}'.format(i)
args[key] = n_reps
H = list(H_str_type)
H.append([H_zero, key])
out, t_func, fname = repeat_solve(H, init_state, tlist,
num_reps=n_reps,
e_ops=meas, args=args)
#print("{} execution of string type took {} seconds.".format(n_reps, t_func))
t_per_exec_s.append(t_func / n_reps)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(n_rep_list, t_per_exec_f, 'o', label="func type")
ax.plot(n_rep_list, t_per_exec_s, 'x', label="string type")
ax.set_xlabel(r"$N_{\rm{reps}}$")
ax.set_ylabel("exec time per rep")
ax.set_title("Comparing Method Exec Time")
ax.legend()
Explanation: Comparing execution times
End of explanation |
15,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like translations.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with language and words, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
## Your code here
freqs = np.zeros(len(int_to_vocab))
for w in int_words:
freqs[w] += 1.0
freqs /= len(int_words)
header = "{:5} {:10} {:11}".format("rank", "word", "freq")
print(header)
print('-'*len(header))
for i in range(0,1000,50):
v = int_to_vocab[i]
f = freqs[i]
print("{:5} {:10} {:0.9f}".format(i,v,f))
keep = [r>(1 - np.sqrt(1e-5/freqs[w])) for w,r in zip(int_words, np.random.uniform(size=len(int_words)))]
train_words = [w for w,k in zip(int_words, keep) if k]
start = 1000
length = 100
print('Original')
print('--------')
print(' '.join(w for w in words[start:start+length]))
print('Discarded')
print('--------')
print(' '.join(w if k else 'x'*len(w) for w,k in zip(words[start:start+length], keep[start:start+length])))
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
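For intuition (an illustrative aside, not in the original notebook), you can plug a few word frequencies into the formula above with the same threshold t = 1e-5 that the solution code uses:
t = 1e-5
for f in [1e-2, 1e-4, 1e-6]:
    p_discard = max(0.0, 1 - np.sqrt(t / f))
    print("frequency {:.0e} -> discard probability {:.3f}".format(f, p_discard))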
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
    # Choose R in [1, window_size] as described above, then take up to R words on each side
    R = np.random.randint(1, window_size+1)
    return words[max(0, idx-R):idx] + words[idx+1:idx+R+1]
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
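As a quick sanity check (not in the original notebook), you can pull one small batch and inspect the input/target pairs it produces; the tiny batch and window sizes here are purely illustrative.
example_batches = get_batches(train_words[:20], batch_size=4, window_size=2)
x, y = next(example_batches)
print("inputs :", x)
print("targets:", y)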
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, shape=(None))
labels = tf.placeholder(tf.int32, shape=(None, 1))
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. This weight matrix is usually called the embedding matrix or embedding look-up table. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
    # embedding weight matrix, initialized with uniform random values in [-1, 1]
    embedding = tf.Variable(tf.random_uniform(shape=(n_vocab, n_embedding), minval=-1, maxval=1))
    # use tf.nn.embedding_lookup to get the hidden layer output for the input word IDs
    embed = tf.nn.embedding_lookup(embedding, inputs)
#sess = tf.InteractiveSession()
#params = tf.constant(np.eye(4))
#ids = tf.constant([0,1,2,2])
#print(tf.nn.embedding_lookup(params,ids).eval())
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix:
You don't actually need to do the matrix multiplication, you just need to select the row in the embedding matrix that corresponds to the input word. Then, the embedding matrix becomes a lookup table, you're looking up a vector the size of the hidden layer that represents the input word.
<img src="assets/word2vec_weight_matrix_lookup_table.png" width=500>
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. This TensorFlow tutorial will help if you get stuck.
End of explanation
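To see why the lookup is equivalent to the one-hot matrix multiplication described above, here is a small NumPy-only illustration (an aside, not part of the original graph); the toy sizes are arbitrary.
# Toy "embedding matrix": 5 words, 3 embedding features
toy_embedding = np.arange(15).reshape(5, 3)
word_id = 2

# Multiplying by a one-hot vector...
one_hot = np.zeros(5)
one_hot[word_id] = 1
print(one_hot @ toy_embedding)

# ...selects exactly the same row as direct indexing, which is what embedding_lookup does
print(toy_embedding[word_id])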
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal(shape=(n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(shape=n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
15,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting Encrypted TOR Traffic with Boosting and Topological Data Analysis
HJ van Veen - MLWave
We establish strong baselines for both supervised and unsupervised detection of encrypted TOR traffic.
**Note
Step1: Data Prep
There are string values "Infinity" inside the data, causing mixed types. We need to label-encode the target column. We turn the IP addresses into floats by removing the dots.
We also create a subset of features by removing Source Port, Source IP, Destination Port, Destination IP, and Protocol. This to avoid overfitting/improve future generalization and focus only on the time-based features, like most other researchers have done.
Step2: Local evaluation setup
We create a stratified holdout set of 20%. Any modeling choices (such as parameter tuning) are guided by 5-fold stratified cross-validation on the remaining dataset.
Step3: 5-fold non-tuned XGBoost
Step4: Hyper parameter tuning
We found the below parameters by running a random gridsearch on the first fold in ~50 iterations (minimizing log loss). We use an AWS distributed closed-source auto-tuning library called "Cher" with the following parameter ranges
Step5: 5-fold tuned XGBoost
Step6: Holdout set evaluation
Step7: Results
|Model|Precision|Recall|F1-Score
|---|---|---|---|
|Logistic Regression (Singh et al., 2018) <a href="#References">[34]</a>|0.87|0.87|0.87|
|SVM (Singh et al., 2018)|0.9|0.9|0.9|
|Naïve Bayes (Singh et al., 2018)|0.91|0.6|0.7|
|C4.5 Decision Tree + Feature Selection (Lashkari et al., 2017) <a href="#References">[29]</a>|0.948|0.934|-|
|Deep Learning (Singh et al., 2018)|0.95|0.95|0.95|
|Random Forest (Singh et al., 2018)|0.96|0.96|0.96|
|XGBoost + Tuning|0.974|0.977|0.976|
Holdout evaluation with all the available features
Using all the features results in near perfect performance, suggesting "leaky" features (These features are not to be used for predictive modeling, but are there for completeness). Nevertheless we show how using all features also results in a strong baseline over previous research.
Step8: Results
|Model|Precision|Recall|Accuracy
|---|---|---|---|
|ANN (Hodo et al., 2017) <a href="References">[35]</a>|0.983|0.937|0.991|
|SVM (Hodo et al., 2017)|0.79|0.67|0.94|
|ANN + Feature Selection (Hodo et al., 2017)|0.998|0.988|0.998|
|SVM + Feature Selection (Hodo et al., 2017)|0.8|0.984|0.881|
|XGBoost + Tuning|0.999|1.|0.999|
Topological Data Analysis | Python Code:
import numpy as np
import pandas as pd
import xgboost
from sklearn import model_selection, metrics
Explanation: Detecting Encrypted TOR Traffic with Boosting and Topological Data Analysis
HJ van Veen - MLWave
We establish strong baselines for both supervised and unsupervised detection of encrypted TOR traffic.
**Note**: This article uses the 5-second lag dataset. For better comparison we will use the 15-second lag dataset in the near future.
Introduction
Gradient Boosted Decision Trees (GBDT) is a very powerful learning algorithm for supervised learning on tabular data <a href="#References">[1]</a>. Modern implementations include XGBoost <a href="#References">[2]</a>, Catboost <a href="#References">[3]</a>, LightGBM <a href="#References">[4]</a> and scikit-learn's GradientBoostingClassifier <a href="#References">[5]</a>. Of these, especially XGBoost has seen tremendous successes in machine learning competitions <a href="#References">[6]</a>, starting with its introduction during the Higgs Boson Detection challenge in 2014 <a href="#References">[7]</a>. The success of XGBoost can be explained on multiple dimensions: It is a robust implementation of the original algorithms, it is very fast -- allowing data scientists to quickly find better parameters <a href="#References">[8]</a>, it does not suffer much from overfit, is scale-invariant, and it has an active community providing constant improvements, such as early stopping <a href="#References">[9]</a> and GPU support <a href="#References">[10]</a>.
Anomaly detection algorithms automatically find samples that are different from regular samples. Many methods exist. We use the Isolation Forest in combination with nearest neighbor distances. The Isolation Forest works by randomly splitting up the data <a href="#References">[11]</a>. Outliers, on average, are easier to isolate through splitting. Nearest neighbor distance looks at the summed distances for a sample and its five nearest neighbors. Outliers, on average, have a larger distance between their nearest neighbors than regular samples <a href="#References">[12]</a>.
Topological Data Analysis (TDA) is concerned with the meaning, shape, and connectedness of data <a href="#References">[13]</a>. Benefits of TDA include: Unsupervised data exploration / automatic hypothesis generation, ability to deal with noise and missing values, invariance, and the generation of meaningful compressed summaries. TDA has shown efficient applications in a number of diverse fields: healthcare <a href="#References">[14]</a>, computational biology <a href="#References">[15]</a>, control theory <a href="#References">[16]</a>, community detection <a href="#References">[17]</a>, machine learning <a href="#References">[18]</a>, sports analysis <a href="#References">[19]</a>, and information security <a href="#References">[20]</a>. One tool from TDA is the $MAPPER$ algorithm. $MAPPER$ turns data and data projections into a graph by covering it with overlapping intervals and clustering <a href="#References">[21]</a>. To guide exploration, the nodes of the graph may be colored with a function of interest <a href="#References">[22]</a>. There are an increasing number of implementations of $MAPPER$. We use the open source implementation KeplerMapper from scikit-TDA <a href="#References">[23]</a>.
The TOR network allows users to communicate and host content while preserving privacy and anonymity <a href="#References">[24]</a>. As such, it can be used by dissidents and other people who prefer not to be tracked by commercial companies or governments. But these strong privacy and anonymity features are also attractive to criminals. A 2016 study in 'Survival - Global Politics and Strategy' found at least 57% of TOR websites are involved in illicit behavior, ranging from the trade in illegal arms, counterfeit ID documents, pornography, and drugs, money laundering & credit card fraud, and the sharing of violent material, such as bomb making tutorials and terrorist propaganda <a href="#References">[25]</a>.
Network Intrusion Detection Systems are a first line of defense for governments and companies <a href="#References">[26]</a>. An undetected hacker will try to elevate their privileges, moving from the weakest link to more hardened system-critical network nodes <a href="#References">[27]</a>. If the hacker's goal is to get access to sensitive data (for instance: for resale, industrial espionage, or extortion purposes) then any stolen data needs to be exfiltrated. Similarly, cryptolockers often need to communicate with a command & control server outside the network. Depending on the level of sophistication of the malware or hackers, exfiltration may be open and visible, run encrypted through the TOR network in an effort to hide the destination, or use advanced DNS tunneling techniques.
Motivation
Current Network Intrusion Detection Systems, much like the old spam detectors, rely mostly on rules, signatures, and anomaly detection. Labeled data is scarce. Writing rules is a very costly task requiring domain expertise. Signatures may fail to catch new types of attacks until they are updated. Anomalous/unusual behavior is not necessarily suspicious/adversarial behavior.
Machine Learning for Information Security suffers a lot from poor false positive rates. False positives lead to alarm fatigue and can swamp an intelligence analyst with irrelevant work.
Despite the possibility of false positives, it is often better to be safe than sorry. Suspicious network behavior, such as outgoing connections to the TOR network, requires immediate attention. A network node can be shut down remotely, after which a security engineer can investigate the machine. The best practice of multi-layered security makes this possible <a href="#References">[28]</a>: Instead of a single firewall to rule them all, hackers can be detected at various stages of their network intrusion, up to the final step of data exfiltration.
Data
We use a dataset written for the paper "Characterization of Tor Traffic Using Time Based Features" (Lashkari et al.) <a href="#References">[29]</a>, graciously provided by the Canadian Institute for Cybersecurity <a href="#References">[30]</a>. This dataset combines older research on nonTOR network traffic with more recently captured TOR traffic (both were created on the same network) <a href="#References">[31]</a>. The data includes features that are more specific to the network used, such as the source and destination IP/Port, and a range of time-based features with a 5 second lag.
|Feature|Type|Description|Time-based|
|---|---|---|---|
|'Source IP'|Object|Source IP4 Address. String with dots.|No|
|' Source Port'|Float|Source Port sending packets.|No|
|' Destination IP'|Object|Destination IP4 Address.|No|
|' Destination Port'|Float|Destination Port receiving packets.|No|
|' Protocol'|Float|Integer [5-17] denoting protocol used.|No|
|' Flow Duration'|Float|Length of connection in seconds|Yes|
|' Flow Bytes/s'|Float|Bytes per second sent.|Yes|
|' Flow Packets/s'|Object|Packets per second sent. Contains "Infinity" strings.|Yes|
|' Flow IAT Mean'|Float|Flow Inter Arrival Time.|Yes|
|' Flow IAT Std'|Float||Yes|
|' Flow IAT Max'|Float||Yes|
|' Flow IAT Min'|Float||Yes|
|'Fwd IAT Mean'|Float|Forward Inter Arrival Time.|Yes|
|' Fwd IAT Std'|Float||Yes|
|' Fwd IAT Max'|Float||Yes|
|' Fwd IAT Min'|Float||Yes|
|'Bwd IAT Mean'|Float|Backwards Inter Arrival Time.|Yes|
|' Bwd IAT Std'|Float||Yes|
|' Bwd IAT Max'|Float||Yes|
|' Bwd IAT Min'|Float||Yes|
|'Active Mean'|Float|Average amount of time in seconds before connection went idle.|Yes|
|' Active Std'|Float||Yes|
|' Active Max'|Float||Yes|
|' Active Min'|Float||Yes|
|'Idle Mean'|Float|Average amount of time in seconds before connection became active.|Yes|
|' Idle Std'|Float|Zero variance feature.|Yes|
|' Idle Max'|Float||Yes|
|' Idle Min'|Float||Yes|
|'label'|Object|Either "nonTOR" or "TOR". ~17% TOR signal.|-|
Experimental setup
Supervised ML. We establish a strong baseline with XGBoost on the full data and on a subset (only time-based features, which generalize better to new domains). We follow the dataset standard of creating a 20% holdout validation set, and use 5-fold stratified cross-validation for parameter tuning <a href="#References">[32]</a>. For tuning we use random search on sane parameter ranges, as random search is easy to implement and given enough time, will equal or beat more sophisticated methods <a href="#References">[33]</a>. We do not use feature selection, but opt to let our learning algorithm deal with those. Missing values are also handled by XGBoost and not manually imputed or hardcoded.
Unsupervised ML. We use $MAPPER$ in combination with the Isolation Forest and the summed distances to the five nearest neighbors. We use an overlap percentage of 150% and 40 intervals per dimension for a total of 1600 hypercubes. Clustering is done with agglomerative clustering using the euclidean distance metric and 3 clusters per interval. For these experiments we use only the time-based features. We don't scale the data, despite only Isolation Forest being scale-invariant.
End of explanation
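The two anomaly "lenses" described above can be computed with standard scikit-learn tools. The sketch below illustrates that idea only (the notebook's own TDA cell later uses KeplerMapper's built-in knn_distance_5 projection rather than this hypothetical helper):
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import NearestNeighbors

def anomaly_lenses(X, random_state=0):
    # Lens 1: Isolation Forest score (lower = easier to isolate = more anomalous)
    iso = IsolationForest(random_state=random_state, n_jobs=-1).fit(X)
    lens_iso = iso.decision_function(X)

    # Lens 2: summed distance to the five nearest neighbours (larger = more anomalous)
    nn = NearestNeighbors(n_neighbors=6).fit(X)  # 6 = the point itself + 5 neighbours
    dists, _ = nn.kneighbors(X)
    lens_knn = dists[:, 1:].sum(axis=1)

    return np.c_[lens_iso, lens_knn]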
df = pd.read_csv("CSV/Scenario-A/merged_5s.csv")
df.replace('Infinity', -1, inplace=True)
df["label"] = df["label"].map({"nonTOR": 0, "TOR": 1})
df["Source IP"] = df["Source IP"].apply(lambda x: float(x.replace(".", "")))
df[" Destination IP"] = df[" Destination IP"].apply(lambda x: float(x.replace(".", "")))
features_all = [c for c in df.columns if c not in
['label']]
features = [c for c in df.columns if c not in
['Source IP',
' Source Port',
' Destination IP',
' Destination Port',
' Protocol',
'label']]
features
X = np.array(df[features])
X_all = np.array(df[features_all])
y = np.array(df.label)
print(X.shape, np.mean(y))
Explanation: Data Prep
There are string values "Infinity" inside the data, causing mixed types. We need to label-encode the target column. We turn the IP addresses into floats by removing the dots.
We also create a subset of features by removing Source Port, Source IP, Destination Port, Destination IP, and Protocol. This to avoid overfitting/improve future generalization and focus only on the time-based features, like most other researchers have done.
End of explanation
splitter = model_selection.StratifiedShuffleSplit(
n_splits=1,
test_size=0.2,
random_state=0)
for train_index, test_index in splitter.split(X, y):
X_train, X_holdout = X[train_index], X[test_index]
X_train_all, X_holdout_all = X_all[train_index], X_all[test_index]
y_train, y_holdout = y[train_index], y[test_index]
print(X_train.shape, X_holdout.shape)
Explanation: Local evaluation setup
We create a stratified holdout set of 20%. Any modeling choices (such as parameter tuning) are guided by 5-fold stratified cross-validation on the remaining dataset.
End of explanation
model = xgboost.XGBClassifier(seed=0)
print(model)
skf = model_selection.StratifiedKFold(
n_splits=5,
shuffle=True,
random_state=0)
for i, (train_index, test_index) in enumerate(skf.split(X_train, y_train)):
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
model.fit(X_train_fold, y_train_fold)
probas = model.predict_proba(X_test_fold)[:,1]
preds = (probas > 0.5).astype(int)
print("-"*60)
print("Fold: %d (%s/%s)" %(i, X_train_fold.shape, X_test_fold.shape))
print(metrics.classification_report(y_test_fold, preds, target_names=["nonTOR", "TOR"]))
print("Confusion Matrix: \n%s\n"%metrics.confusion_matrix(y_test_fold, preds))
print("Log loss : %f" % (metrics.log_loss(y_test_fold, probas)))
print("AUC : %f" % (metrics.roc_auc_score(y_test_fold, probas)))
print("Accuracy : %f" % (metrics.accuracy_score(y_test_fold, preds)))
print("Precision: %f" % (metrics.precision_score(y_test_fold, preds)))
print("Recall : %f" % (metrics.recall_score(y_test_fold, preds)))
print("F1-score : %f" % (metrics.f1_score(y_test_fold, preds)))
Explanation: 5-fold non-tuned XGBoost
End of explanation
model = xgboost.XGBClassifier(base_score=0.5, colsample_bylevel=0.68, colsample_bytree=0.84,
gamma=0.1, learning_rate=0.1, max_delta_step=0, max_depth=11,
min_child_weight=1, missing=None, n_estimators=1122, nthread=-1,
objective='binary:logistic', reg_alpha=0.0, reg_lambda=4,
scale_pos_weight=1, seed=189548, silent=True, subsample=0.98)
Explanation: Hyper parameter tuning
We found the below parameters by running a random gridsearch on the first fold in ~50 iterations (minimizing log loss). We use an AWS distributed closed-source auto-tuning library called "Cher" with the following parameter ranges:
"XGBClassifier": {
"max_depth": (2,12),
"n_estimators": (20, 2500),
"objective": ["binary:logistic"],
"missing": np.nan,
"gamma": [0, 0, 0, 0, 0, 0.01, 0.1, 0.2, 0.3, 0.5, 1., 10., 100.],
"learning_rate": [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.15, 0.2, 0.1 ,0.1],
"min_child_weight": [1, 1, 1, 1, 2, 3, 4, 5, 1, 6, 7, 8, 9, 10, 11, 15, 30, 60, 100, 1, 1, 1],
"max_delta_step": [0, 0, 0, 0, 0, 1, 2, 5, 8],
"nthread": -1,
"subsample": [i/100. for i in range(20,100)],
"colsample_bytree": [i/100. for i in range(20,100)],
"colsample_bylevel": [i/100. for i in range(20,100)],
"reg_alpha": [0, 0, 0, 0, 0, 0.00000001, 0.00000005, 0.0000005, 0.000005],
"reg_lambda": [1, 1, 1, 1, 2, 3, 4, 5, 1],
"scale_pos_weight": 1,
"base_score": 0.5,
"seed": (0,999999)
}
End of explanation
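Since the auto-tuning library mentioned above is closed source, here is a rough open-source stand-in using scikit-learn's RandomizedSearchCV (a sketch only; the distributions below approximate, rather than reproduce, the ranges listed above):
from scipy import stats
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold

param_distributions = {
    "max_depth": stats.randint(2, 13),
    "n_estimators": stats.randint(20, 2500),
    "learning_rate": [0.01, 0.02, 0.03, 0.05, 0.1, 0.15, 0.2],
    "min_child_weight": stats.randint(1, 15),
    "subsample": stats.uniform(0.2, 0.78),
    "colsample_bytree": stats.uniform(0.2, 0.78),
    "colsample_bylevel": stats.uniform(0.2, 0.78),
    "gamma": [0, 0.01, 0.1, 0.3, 1.0],
    "reg_lambda": stats.randint(1, 6),
}

search = RandomizedSearchCV(
    xgboost.XGBClassifier(objective="binary:logistic", nthread=-1, seed=0),
    param_distributions=param_distributions,
    n_iter=50,
    scoring="neg_log_loss",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    random_state=0,
)
# search.fit(X_train, y_train); search.best_params_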
print(model)
for i, (train_index, test_index) in enumerate(skf.split(X_train, y_train)):
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
model.fit(X_train_fold, y_train_fold)
probas = model.predict_proba(X_test_fold)[:,1]
preds = (probas > 0.5).astype(int)
print("-"*60)
print("Fold: %d (%s/%s)" %(i, X_train_fold.shape, X_test_fold.shape))
print(metrics.classification_report(y_test_fold, preds, target_names=["nonTOR", "TOR"]))
print("Confusion Matrix: \n%s\n"%metrics.confusion_matrix(y_test_fold, preds))
print("Log loss : %f" % (metrics.log_loss(y_test_fold, probas)))
print("AUC : %f" % (metrics.roc_auc_score(y_test_fold, probas)))
print("Accuracy : %f" % (metrics.accuracy_score(y_test_fold, preds)))
print("Precision: %f" % (metrics.precision_score(y_test_fold, preds)))
print("Recall : %f" % (metrics.recall_score(y_test_fold, preds)))
print("F1-score : %f" % (metrics.f1_score(y_test_fold, preds)))
Explanation: 5-fold tuned XGBoost
End of explanation
model.fit(X_train, y_train)
probas = model.predict_proba(X_holdout)[:,1]
preds = (probas > 0.5).astype(int)
print(metrics.classification_report(y_holdout, preds, target_names=["nonTOR", "TOR"]))
print("Confusion Matrix: \n%s\n"%metrics.confusion_matrix(y_holdout, preds))
print("Log loss : %f" % (metrics.log_loss(y_holdout, probas)))
print("AUC : %f" % (metrics.roc_auc_score(y_holdout, probas)))
print("Accuracy : %f" % (metrics.accuracy_score(y_holdout, preds)))
print("Precision: %f" % (metrics.precision_score(y_holdout, preds)))
print("Recall : %f" % (metrics.recall_score(y_holdout, preds)))
print("F1-score : %f" % (metrics.f1_score(y_holdout, preds)))
Explanation: Holdout set evaluation
End of explanation
model.fit(X_train_all, y_train)
probas = model.predict_proba(X_holdout_all)[:,1]
preds = (probas > 0.5).astype(int)
print(metrics.classification_report(y_holdout, preds, target_names=["nonTOR", "TOR"]))
print("Confusion Matrix: \n%s\n"%metrics.confusion_matrix(y_holdout, preds))
print("Log loss : %f" % (metrics.log_loss(y_holdout, probas)))
print("AUC : %f" % (metrics.roc_auc_score(y_holdout, probas)))
print("Accuracy : %f" % (metrics.accuracy_score(y_holdout, preds)))
print("Precision: %f" % (metrics.precision_score(y_holdout, preds)))
print("Recall : %f" % (metrics.recall_score(y_holdout, preds)))
Explanation: Results
|Model|Precision|Recall|F1-Score
|---|---|---|---|
|Logistic Regression (Singh et al., 2018) <a href="#References">[34]</a>|0.87|0.87|0.87|
|SVM (Singh et al., 2018)|0.9|0.9|0.9|
|Naïve Bayes (Singh et al., 2018)|0.91|0.6|0.7|
|C4.5 Decision Tree + Feature Selection (Lashkari et al., 2017) <a href="#References">[29]</a>|0.948|0.934|-|
|Deep Learning (Singh et al., 2018)|0.95|0.95|0.95|
|Random Forest (Singh et al., 2018)|0.96|0.96|0.96|
|XGBoost + Tuning|0.974|0.977|0.976|
Holdout evaluation with all the available features
Using all the features results in near perfect performance, suggesting "leaky" features (These features are not to be used for predictive modeling, but are there for completeness). Nevertheless we show how using all features also results in a strong baseline over previous research.
End of explanation
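One quick way to see which of the extra features drive this near-perfect score (an aside, not in the original article) is to inspect the fitted model's feature importances:
# The model was just fit on X_train_all, so its importances line up with features_all
importances = sorted(zip(model.feature_importances_, features_all), reverse=True)
for score, name in importances[:10]:
    print("{:>8.4f}  {}".format(score, name))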
import kmapper as km
import pandas as pd
import numpy as np
from sklearn import ensemble, cluster
df = pd.read_csv("CSV/Scenario-A/merged_5s.csv")
df.replace('Infinity', -1, inplace=True)
df[" Flow Bytes/s"] = df[" Flow Bytes/s"].apply(lambda x: float(x))
df[" Flow Packets/s"] = df[" Flow Packets/s"].apply(lambda x: float(x))
df["label"] = df["label"].map({"nonTOR": 0, "TOR": 1})
df["Source IP"] = df["Source IP"].apply(lambda x: float(x.replace(".", "")))
df[" Destination IP"] = df[" Destination IP"].apply(lambda x: float(x.replace(".", "")))
df.fillna(-2, inplace=True)
features = [c for c in df.columns if c not in
['Source IP',
' Source Port',
' Destination IP',
' Destination Port',
' Protocol',
'label']]
X = np.array(df[features])
y = np.array(df.label)
projector = ensemble.IsolationForest(random_state=0, n_jobs=-1)
projector.fit(X)
lens1 = projector.decision_function(X)
mapper = km.KeplerMapper(verbose=3)
lens2 = mapper.fit_transform(X, projection="knn_distance_5")
lens = np.c_[lens1, lens2]
G = mapper.map(
lens,
X,
nr_cubes=40,
overlap_perc=1.5,
clusterer=cluster.AgglomerativeClustering(3))
_ = mapper.visualize(
G,
custom_tooltips=y,
color_function=y,
path_html="tor-tda.html",
inverse_X=X,
inverse_X_names=list(df[features].columns),
projected_X=lens,
projected_X_names=["IsolationForest", "KNN-distance 5"],
title="Detecting encrypted Tor Traffic with Isolation Forest and Nearest Neighbor Distance"
)
Explanation: Results
|Model|Precision|Recall|Accuracy
|---|---|---|---|
|ANN (Hodo et al., 2017) <a href="#References">[35]</a>|0.983|0.937|0.991|
|SVM (Hodo et al., 2017)|0.79|0.67|0.94|
|ANN + Feature Selection (Hodo et al., 2017)|0.998|0.988|0.998|
|SVM + Feature Selection (Hodo et al., 2017)|0.8|0.984|0.881|
|XGBoost + Tuning|0.999|1.|0.999|
Topological Data Analysis
End of explanation |
15,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural networks with PyTorch
Deep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module nn that provides a nice way to efficiently build large neural networks.
Step1: Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the torchvision package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
Step2: We have the training data loaded into trainloader and we make that an iterator with iter(trainloader). Later, we'll use this to loop through the dataset for training, like
python
for image, label in trainloader
Step3: This is what one of the images looks like.
Step4: First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's nn module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called fully-connected or dense networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape (64, 1, 28, 28) to a have a shape of (64, 784), 784 is 28 times 28. This is typically called flattening, we flattened the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
Exercise
Step5: Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this
Step6: Building networks with PyTorch
PyTorch provides a module nn that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
Step7: Let's go through this bit by bit.
python
class Network(nn.Module)
Step8: You can define the network somewhat more concisely and clearly using the torch.nn.functional module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as F, import torch.nn.functional as F.
Step9: Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions
Step10: Initializing weights and biases
The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with model.fc1.weight for instance.
Step11: For custom initialization, we want to modify these tensors in place. These are actually autograd Variables, so we need to get back the actual tensors with model.fc1.weight.data. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
Step12: Forward pass
Now that we have a network, let's see what happens when we pass in an image.
Step13: As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
Using nn.Sequential
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, nn.Sequential (documentation). Using this to build the equivalent network
Step14: The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use model[0].
Step15: You can also pass in an OrderedDict to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so each operation must have a different name.
Step16: Now you can access layers either by integer or the name | Python Code:
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
Explanation: Neural networks with PyTorch
Deep learning networks tend to be massive with dozens or hundreds of layers, that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a nice module nn that provides a nice way to efficiently build large neural networks.
End of explanation
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(trainset,
batch_size=64,
shuffle=True)
Explanation: Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below
<img src='assets/mnist.png'>
Our goal is to build a neural network that can take one of these images and predict the digit in the image.
First up, we need to get our dataset. This is provided through the torchvision package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
End of explanation
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
Explanation: We have the training data loaded into trainloader and we make that an iterator with iter(trainloader). Later, we'll use this to loop through the dataset for training, like
python
for image, label in trainloader:
## do things with images and labels
You'll notice I created the trainloader with a batch size of 64, and shuffle=True. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a batch. And shuffle=True tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that images is just a tensor with size (64, 1, 28, 28). So, 64 images per batch, 1 color channel, and 28x28 images.
End of explanation
plt.imshow(images[1].numpy().squeeze(),
cmap='Greys_r');
Explanation: This is what one of the images looks like.
End of explanation
def sigmoid_activation(x: torch.Tensor):
return 1 / (1 + torch.exp(-x))
# Flatten the batch of images so each image becomes a vector of 784 values
inputs = images.view(images.shape[0], -1)
inputs.shape
# Network parameters
num_hidden_units = 256
W1 = torch.randn(784, num_hidden_units)
b1 = torch.randn(num_hidden_units)
# 10 output units
W2 = torch.randn(num_hidden_units, 10)
b2 = torch.randn(10)
hidden = sigmoid_activation(torch.mm(inputs, W1) + b1)
out = torch.mm(hidden, W2) + b2
out.shape
Explanation: First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's nn module which provides a much more convenient and powerful method for defining network architectures.
The networks you've seen so far are called fully-connected or dense networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape (64, 1, 28, 28) to have a shape of (64, 784), where 784 is 28 times 28. This is typically called flattening: we flatten the 2D images into 1D vectors.
Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.
Exercise: Flatten the batch of images images. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
End of explanation
def softmax(x):
## TODO: Implement the softmax function here
# Using view to obtain one value per row
# summation across the columns using dim=1
return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1)
# Here, out should be the output of the network in the previous excercise with shape (64,10)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
Explanation: Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:
<img src='assets/image_distribution.png' width=500px>
Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.
To calculate this probability distribution, we often use the softmax function. Mathematically this looks like
$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$
What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilites sum up to one.
Exercise: Implement a function softmax that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor a with shape (64, 10) and a tensor b with shape (64,), doing a/b will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need b to have a shape of (64, 1). This way PyTorch will divide the 10 values in each row of a by the one value in each row of b. Pay attention to how you take the sum as well. You'll need to define the dim keyword in torch.sum. Setting dim=0 takes the sum across the rows while dim=1 takes the sum across the columns.
End of explanation
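# Quick sanity check of the broadcasting behaviour described above, on toy values
# (the variable names here are just for illustration).
a = torch.rand(64, 10)
row_sums = torch.sum(a, dim=1)            # shape (64,) -- dividing a by this directly raises a size mismatch
normalized = a / row_sums.view(-1, 1)     # (64, 10) / (64, 1) divides each row by its own sum
print(normalized.shape, normalized.sum(dim=1)[:3])   # rows now sum to 1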
from torch import nn
class Network(nn.Module):
def __init__(self):
# Important!
# Without it, PyTorch will not be able to follow the setup!
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
Explanation: Building networks with PyTorch
PyTorch provides a module nn that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
End of explanation
# Create the network and look at its text representation
model = Network()
model
Explanation: Let's go through this bit by bit.
python
class Network(nn.Module):
Here we're inheriting from nn.Module. Combined with super().__init__() this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from nn.Module when you're creating a class for your network. The name of the class itself can be anything.
python
self.hidden = nn.Linear(784, 256)
This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to self.hidden. The module automatically creates the weight and bias tensors which we'll use in the forward method. You can access the weight and bias tensors once the network is created, at net.hidden.weight and net.hidden.bias.
python
self.output = nn.Linear(256, 10)
Similarly, this creates another linear transformation with 256 inputs and 10 outputs.
python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
Here I defined operations for the sigmoid activation and softmax output. Setting dim=1 in nn.Softmax(dim=1) calculates softmax across the columns.
python
def forward(self, x):
PyTorch networks created with nn.Module must have a forward method defined. It takes in a tensor x and passes it through the operations you defined in the __init__ method.
python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
Here the input tensor x is passed through each operation and reassigned to x. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the __init__ method doesn't matter, but you'll need to sequence the operations correctly in the forward method.
Now we can create a Network object.
End of explanation
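# As the walkthrough above mentions, the Linear submodules carry their own
# parameter tensors; nn.Linear stores its weight as (out_features, in_features).
print(model.hidden.weight.shape)   # torch.Size([256, 784])
print(model.hidden.bias.shape)     # torch.Size([256])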
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
Explanation: You can define the network somewhat more concisely and clearly using the torch.nn.functional module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as F, import torch.nn.functional as F.
End of explanation
## Your solution here
class MyNetwork(nn.Module):
def __init__(self):
super().__init__()
# Network parameters
self.h1 = nn.Linear(in_features=784,
out_features=128)
self.h2 = nn.Linear(in_features=128,
out_features=64)
self.out = nn.Linear(in_features=64,
out_features=10)
# Define sigmoid activation and softmax output
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
def forward(self, x: torch.tensor):
x = self.h1(x)
x = self.relu(x)
x = self.h2(x)
x = self.relu(x)
x = self.out(x)
x = self.softmax(x)
return x
model = MyNetwork()
model
Explanation: Activation functions
So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>
In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.
Your Turn to Build a Network
<img src="assets/mlp_mnist.png" width=600px>
Exercise: Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the nn.ReLU module or F.relu function.
End of explanation
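# A small illustrative comparison of the activation functions mentioned above,
# evaluated on a toy tensor (values chosen arbitrarily).
z = torch.linspace(-3, 3, steps=7)
print(torch.sigmoid(z))   # squashes values into (0, 1)
print(torch.tanh(z))      # squashes values into (-1, 1)
print(torch.relu(z))      # zeroes out negative values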
print(model.h1.weight)
print(model.h1.bias)
Explanation: Initializing weights and biases
The weights and biases are automatically initialized for you, but it's possible to customize how they are initialized. They are tensors attached to the layer you defined; you can get them with model.h1.weight, for instance.
End of explanation
# Set biases to all zeros
model.h1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.h1.weight.data.normal_(std=0.01)
Explanation: For custom initialization, we want to modify these tensors in place. These are actually autograd Variables, so we need to get back the actual tensors with model.h1.weight.data. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
End of explanation
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
Explanation: Forward pass
Now that we have a network, let's see what happens when we pass in an image.
End of explanation
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
Explanation: As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
Using nn.Sequential
PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, nn.Sequential (documentation). Using this to build the equivalent network:
End of explanation
print(model[0])
model[0].weight
Explanation: The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use model[0].
End of explanation
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
Explanation: You can also pass in an OrderedDict to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so each operation must have a different name.
End of explanation
print(model[0])
print(model.fc1)
Explanation: Now you can access layers either by integer or the name
End of explanation |
15,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introductory Geocoding and Mapping
Juan Shishido, Andrew Chong, Patty Frontiera
Adapted from Juan Shishido's tutorial at
Step1: Using A Geocoding APIs
There are a number of Geocoding APIs that you can use to geocode place names and addresses. To use one of these you create code like that shown below. Each geocoding service may have different inputs and outputs.
Photon API
Here is an example of geocoding with the Photon API.
Step2: Geopy Module
The Geopy Module makes it easy to try a number of different geocoding APIs using the same syntax. It is updated frequently so it is also a good way to keep abreast of the popular geocoding services.
You can read Geopy's Documentation here
Step3: Navigating the response object
What is a Location type?
Experiment to see if has .keys() method or can be navigated using an index, etc response[0].
Step4: Write function that only returns latitude & longitude
Step5: Looping through a list of addresses from a text file
Load csv data into pandas DataFrame.
Step6: Quick Dip into Pandas
Pandas in 10 minutes
Step7: Apply upper function to each address using .apply()
Step8: Pass addresses through get_coords_geopy function
Step9: Working with GEOJSON/ Mapping your geocoded addresses
The following code creates the bart_coords.geojson and bart_coords.js files in the map/geojson folder. bart_coords.js is used by map/bartmap.html for our mapping of BART stations.
First try opening map/bart_map.html, which will be blank.
Create a feature for each row of data and push into geo_data object.
Step10: Create bart_coords.js file
Step11: Now open map/bart_map.html. Your spreadsheet of text addresses have now been geocoded and mapped!
Batch Geocoding with the Census Geocoder API
The Census Geocoding API provides a great, free service for geocoding US street addresses. You can read about the Census Geocoder and test it using the online interface at http
Step12: Save geocoded output to file
Step13: Read geocoded data back in to do any post processing
When we do this we will add header rows as there were none in the Census Geocoder output.
Step14: Exercise | Python Code:
import json
import requests
import pandas as pd
import geopy
from pprint import pprint
Explanation: Introductory Geocoding and Mapping
Juan Shishido, Andrew Chong, Patty Frontiera
Adapted from Juan Shishido's tutorial at: http://people.ischool.berkeley.edu/~juanshishido/geocoding-workshop/intro-geocoding.html
School of Information
GSR, D-Lab
Imports
End of explanation
def get_geocoded_object_photon(address):
# URL
url = 'http://photon.komoot.de/api/?q=' + address.replace(' ', '+')
# Response
response = requests.get(url)
return json.loads(response.text)
get_geocoded_object_photon("1600 Amphitheatre Pkwy, Mountain View, CA")
Explanation: Using A Geocoding APIs
There are a number of Geocoding APIs that you can use to geocode place names and addresses. To use one of these you create code like that shown below. Each geocoding service may have different inputs and outputs.
Photon API
Here is an example of geocoding with the Photon API.
End of explanation
# Create a geocoding object
google_geocoder = geopy.geocoders.GoogleV3()
# Geocode one address with this geocoder
response = google_geocoder.geocode("1600 Amphitheatre Pkwy, Mountain View, CA")
type(response)
Explanation: Geopy Module
The Geopy Module makes it easy to try a number of different geocoding APIs using the same syntax. It is updated frequently so it is also a good way to keep abreast of the popular geocoding services.
You can read Geopy's Documentation here: https://geopy.readthedocs.org/en/1.10.0/#.
We will use Geopy to geocode addresses with the Google Geocoding Service.
Google Geocoding API
The Google Geocoder and is extremely popular. It is very accurate for two reasons. It has a robust and sophisticated address parser and a great reference database of streets, parcels, and properties. However, it limits you to 2,500 free address geocodes per day. Moreover, you need to read the Google Geocoder terms of use to make sure that you do not violate them.
Test out response object
End of explanation
# response.keys()
response[0]
response[1]
response[1][0]
response[1][1]
Explanation: Navigating the response object
What is a Location type?
Experiment to see if it has a .keys() method or whether it can be navigated using an index, e.g. response[0].
End of explanation
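# The geocode result is a geopy Location object, so besides indexing you can
# also use its named attributes (shown here as a quick aside).
print(response.address)
print(response.latitude, response.longitude)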
def get_coords_geopy(address, geocoder_instance):
response = geocoder_instance.geocode(address)
lat_lon = response[1][0], response[1][1]
return lat_lon
# Now try it
coords = get_coords_geopy("1600 Amphitheatre Pkwy, Mountain View, CA", google_geocoder)
coords
Explanation: Write function that only returns latitude & longitude
End of explanation
bart = pd.read_csv("data/bartstations.csv")
type(bart)
bart.keys()
Explanation: Looping through a list of addresses from a text file
Load csv data into pandas DataFrame.
End of explanation
bart['weekday_visitors'] = 1000*bart.index
bart['weekend_visitors'] = 2000*bart.index
bart['weekly_visitors'] = bart['weekday_visitors'] + bart['weekend_visitors']
bart
Explanation: Quick Dip into Pandas
Pandas in 10 minutes: http://pandas.pydata.org/pandas-docs/stable/10min.html
End of explanation
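# A couple of common pandas operations on this DataFrame, shown as a quick,
# illustrative detour.
print(bart.head(3))                                                   # first few rows
print(bart.sort_values('weekly_visitors', ascending=False).head(3))   # busiest stations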
one_address = "1245 Broadway, Oakland, CA 94612"
one_address_caps = one_address.upper()
bart['address_in_CAPS'] = bart['address'].apply(lambda x: x.upper())
bart
Explanation: Apply upper function to each address using .apply()
End of explanation
google_geocoder = geopy.geocoders.GoogleV3()
bart['latitude'], bart['longitude'] = zip(*bart['address'].apply(lambda x: get_coords_geopy(x, google_geocoder)))
bart[['station_name', 'latitude', 'longitude']]
Explanation: Pass addresses through get_coords_geopy function
End of explanation
geo_data = {
'type': 'FeatureCollection',
'features': []
}
for i in bart.index:
feature = {
'type': 'Feature',
'geometry': {
"type": "Point",
"coordinates": [float(bart['longitude'][i]), float(bart['latitude'][i])]
},
'properties': {
'station_name': bart['station_name'][i]
}
}
# Add the feature into the GeoJSON wrapper
geo_data['features'].append(feature)
with open('map/geojson/bart_coords.geojson', 'w') as f:
json.dump(geo_data, f, indent=2)
Explanation: Working with GEOJSON/ Mapping your geocoded addresses
The following code creates the bart_coords.geojson and bart_coords.js files in the map/geojson folder. bart_coords.js is used by map/bart_map.html for our mapping of BART stations.
First try opening map/bart_map.html, which will be blank.
Create a feature for each row of data and push into geo_data object.
End of explanation
with open('map/geojson/bart_coords.geojson', 'r') as infile:
lines = infile.readlines()
with open('map/geojson/bart_coords.js', 'w') as outfile:
outfile.write('var bart = ')
outfile.writelines(lines)
infile.close()
outfile.close()
Explanation: Create bart_coords.js file
End of explanation
# Identify the file with the addresses in the format required by the geocoder.
# Take a look at the contents of this file to see how these are structured.
cgfile = 'data/bart_addresses_census_format.csv'
# Configure API parameters
# You can read about them here: http://geocoding.geo.census.gov/geocoder/Geocoding_Services_API.html
url = 'http://geocoding.geo.census.gov/geocoder/geographies/addressbatch'
payload = {'benchmark':'Public_AR_Current','vintage':'ACS2013_Current'}
files = {'addressFile': (cgfile, open(cgfile, 'rb'), 'text/csv')}
# Submit the file of addresses to geocode
r = requests.post(url, files=files, data = payload)
# Review the output
print (r.text)
Explanation: Now open map/bart_map.html. Your spreadsheet of text addresses has now been geocoded and mapped!
Batch Geocoding with the Census Geocoder API
The Census Geocoding API provides a great, free service for geocoding US street addresses. You can read about the Census Geocoder and test it using the online interface at http://geocoding.geo.census.gov/geocoder. Note, the online geocoding interface allows one to geocode up to 1000 addresses at a time without programming via a file upload utility. Below we show you how to programmatically access the Census Geocoding API.
The good stuff:
There are no restrictions on the number of addresses you can geocode or what you can do with the results.
You can submit a file of up to 1000 addresses to geocode at a time. This is called batch geocoding.
The bad stuff:
The geocoding results will not be as good as what you get with the Google Geocoder. But for most applications it is good enough.
You are limited to US addresses.
Address format:
The Census Geocding API requires addresses to be in a strict format:
- no header row
- the first column must contain a unique id
- the id is followed by comma separated values for street address, city, state, and zip
- if any component is missing a comma must still be used to separate address components
- example:
<pre>
1,409 Main St,Oakland,CA,94605
2,310 Main St,,CA,94605
</Pre>
Note: The id column can contain any unique value, for example a place name rather than a numeric id.
- example:
<pre>
house 1,409 Main St,Oakland,CA,94605
house 2,310 Main St,,CA,94605
</Pre>
Read in file of addresses to geocode with the Census Geocoder
End of explanation
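# Illustrative sketch of the strict input format described above: no header row,
# a unique id first, then street, city, state, zip, keeping the commas even when
# a field is blank. The file name below is just for illustration.
example_rows = [
    "1,409 Main St,Oakland,CA,94605",
    "2,310 Main St,,CA,94605",   # city left blank, but the comma stays
]
with open('data/census_format_example.csv', 'w') as f:
    f.write("\n".join(example_rows) + "\n")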
# Save geocoded data to file
with open('data/bart_geocoded_addresses_from_census.csv', 'w') as outfile:
outfile.writelines(r.text)
outfile.close()
Explanation: Save geocoded output to file
End of explanation
# Read geocoded data into pandas data frame
colnames = ['station_name','inaddr','ismatch','matchtype','maddr','lon_lat','tlid','sideofst','fipstate','fipcounty','fiptract','junk']
bart_geocoded = pd.read_csv('data/bart_geocoded_addresses_from_census.csv',sep=",", header=None)
bart_geocoded.columns = colnames
bart_geocoded
del bart_geocoded['junk'] #delete junk column at end of file
bart_geocoded
# Subset data frame to select only matched addresses
bart_geocoded_match = bart_geocoded.loc[bart_geocoded['ismatch']== 'Match']
# Reformat and create a new data frame that we can map
data = {'station': bart_geocoded_match['station_name'],
'longitude': bart_geocoded_match['lon_lat'].str.split(',', expand=True)[0],
'latitude': bart_geocoded_match['lon_lat'].str.split(',', expand=True)[1],
'address': bart_geocoded_match['maddr']
}
bart_df = pd.DataFrame(data, columns=['station', 'longitude', 'latitude', 'address'])
bart_df
Explanation: Read geocoded data back in to do any post processing
When we do this we will add header rows as there were none in the Census Geocoder output.
End of explanation
# Your code here to create the geojson data
# Now save the geojson data to a file
with open('map/geojson/bart_coords2.geojson', 'w') as f:
json.dump(geo_data, f, indent=2)
# Read in the geojson data and write it out to a javascript file so we can map it
with open('map/geojson/bart_coords2.geojson', 'r') as infile:
lines = infile.readlines()
with open('map/geojson/bart_coords2.js', 'w') as outfile:
outfile.write('var bart = ')
outfile.writelines(lines)
infile.close()
outfile.close()
Explanation: Exercise:
Use the mapping code from the Google Geocoder section above to create a map using the census geocoder results.
End of explanation |
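# One possible sketch for the exercise, to be run in place of the "Your code here"
# placeholder above. It assumes the cleaned results DataFrame from the previous
# section (called bart_df here -- adjust the name if yours differs).
geo_data = {'type': 'FeatureCollection', 'features': []}
for i in bart_df.index:
    feature = {
        'type': 'Feature',
        'geometry': {
            'type': 'Point',
            'coordinates': [float(bart_df['longitude'][i]), float(bart_df['latitude'][i])]
        },
        'properties': {'station_name': bart_df['station'][i]}
    }
    geo_data['features'].append(feature)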
15,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Transfer Learning
This notebook shows how to use pre-trained models from TensorFlowHub. Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.
Learning Objectives
Know how to apply image augmentation
Know how to download and use a TensorFlow Hub module as a layer in Keras.
Step1: Exploring the data
As usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories
Step2: We can use python's built in pathlib tool to get a sense of this unstructured data.
Step3: Let's display the images so we can see what our model will be trying to learn.
Step4: Building the dataset
Keras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images, it doesn't fit on a local machine? We can use tf.data.datasets to build a generator based on files in a Google Cloud Storage Bucket.
We have already prepared these images to be stored on the cloud in gs
Step5: Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be a Base64 image string. Hmm... not very readable for humans or Tensorflow.
Thankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.
We'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.
Step6: Is it working? Let's see!
TODO 1.a
Step7: One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.
Step8: Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few
Step9: Finally, we'll make a function to craft our full dataset using tf.data.dataset. The tf.data.TextLineDataset will read in each line in our train/eval csv files to our decode_csv function.
.cache is key here. It will store the dataset in memory
Step10: We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.
Step11: TODO 1.c
Step12: Note
Step13: If your model is like mine, it learns a little bit, slightly better then random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem then MNIST, so wouldn't we need a larger model? But how big do we need to make it?
Enter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model.
Tensorflow Hub is a database of models, many of which can be used for Transfer Learning. We'll use a model called MobileNet which is an architecture optimized for image classification on mobile devices, which can be done with TensorFlow Lite. Let's compare how a model trained on ImageNet data compares to one built from scratch.
The tensorflow_hub python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as un-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand paramaters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.
TODO 2.b
Step14: Even though we're only adding one more Dense layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!
Moment of truth. Let's compile this new model and see how it compares to our MNIST architecture. | Python Code:
import os
import pathlib
from PIL import Image
import IPython.display as display
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
import tensorflow_hub as hub
Explanation: TensorFlow Transfer Learning
This notebook shows how to use pre-trained models from TensorFlowHub. Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with better accuracy than a new model for use in a mobile application.
Learning Objectives
Know how to apply image augmentation
Know how to download and use a TensorFlow Hub module as a layer in Keras.
End of explanation
data_dir = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
# Print data path
print("cd", data_dir)
Explanation: Exploring the data
As usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.
The below tf.keras.utils.get_file command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below.
End of explanation
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
print("There are", image_count, "images.")
CLASS_NAMES = np.array(
[item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])
print("These are the available classes:", CLASS_NAMES)
Explanation: We can use python's built in pathlib tool to get a sense of this unstructured data.
End of explanation
roses = list(data_dir.glob('roses/*'))
for image_path in roses[:3]:
display.display(Image.open(str(image_path)))
Explanation: Let's display the images so we can see what our model will be trying to learn.
End of explanation
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv \
| head -5 > /tmp/input.csv
!cat /tmp/input.csv
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | \
sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
!cat /tmp/labels.txt
Explanation: Building the dataset
Keras has some convenient methods to read in image data. For instance tf.keras.preprocessing.image.ImageDataGenerator is great for small local datasets. A tutorial on how to use it can be found here, but what if we have so many images, it doesn't fit on a local machine? We can use tf.data.datasets to build a generator based on files in a Google Cloud Storage Bucket.
We have already prepared these images to be stored on the cloud in gs://cloud-ml-data/img/flower_photos/. The images are randomly split into a training set with 90% of the data and an evaluation set with the remaining 10%, listed in CSV files:
Training set: train_set.csv
Evaluation set: eval_set.csv
Explore the format and contents of train_set.csv by running:
End of explanation
IMG_HEIGHT = 224
IMG_WIDTH = 224
IMG_CHANNELS = 3
BATCH_SIZE = 32
# 10 is a magic number tuned for local training of this dataset.
SHUFFLE_BUFFER = 10 * BATCH_SIZE
AUTOTUNE = tf.data.experimental.AUTOTUNE
VALIDATION_IMAGES = 370
VALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE
def decode_img(img, reshape_dims):
# Convert the compressed string to a 3D uint8 tensor.
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
img = tf.image.convert_image_dtype(img, tf.float32)
# Resize the image to the desired size.
return tf.image.resize(img, reshape_dims)
Explanation: Let's figure out how to read one of these images from the cloud. TensorFlow's tf.io.read_file can help us read the file contents, but the result will be a Base64 image string. Hmm... not very readable for humans or Tensorflow.
Thankfully, TensorFlow's tf.image.decode_jpeg function can decode this string into an integer array, and tf.image.convert_image_dtype can cast it into a 0 - 1 range float. Finally, we'll use tf.image.resize to force image dimensions to be consistent for our neural network.
We'll wrap these into a function as we'll be calling these repeatedly. While we're at it, let's also define our constants for our neural network.
End of explanation
img = tf.io.read_file(
"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg")
# Uncomment to see the image string.
#print(img)
img = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])
plt.imshow((img.numpy()));
Explanation: Is it working? Let's see!
TODO 1.a: Run the decode_img function and plot it to see a happy looking daisy.
End of explanation
def decode_csv(csv_row):
record_defaults = ["path", "flower"]
filename, label_string = tf.io.decode_csv(csv_row, record_defaults)
image_bytes = tf.io.read_file(filename=filename)
label = tf.math.equal(CLASS_NAMES, label_string)
return image_bytes, label
Explanation: One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. tf.io.decode_csv reads in csv rows (or each line in a csv file), while tf.math.equal will help us format our label such that it's a boolean array with a truth value corresponding to the class in CLASS_NAMES, much like the labels for the MNIST Lab.
End of explanation
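# Quick check of the labelling trick described above: comparing CLASS_NAMES
# against a single label string yields a boolean, one-hot style vector.
print(CLASS_NAMES)
print(tf.math.equal(CLASS_NAMES, 'roses'))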
MAX_DELTA = 63.0 / 255.0  # Change brightness by at most ~25% of the full range
CONTRAST_LOWER = 0.2
CONTRAST_UPPER = 1.8
def read_and_preprocess(image_bytes, label, random_augment=False):
if random_augment:
img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])
img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
img = tf.image.random_flip_left_right(img)
img = tf.image.random_brightness(img, MAX_DELTA)
img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)
else:
img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])
return img, label
def read_and_preprocess_with_augment(image_bytes, label):
return read_and_preprocess(image_bytes, label, random_augment=True)
Explanation: Next, we'll transform the images to give our network more variety to train on. There are a number of image manipulation functions. We'll cover just a few:
tf.image.random_crop - Randomly deletes the top/bottom rows and left/right columns down to the dimensions specified.
tf.image.random_flip_left_right - Randomly flips the image horizontally
tf.image.random_brightness - Randomly adjusts how dark or light the image is.
tf.image.random_contrast - Randomly adjusts image contrast.
TODO 1.b: Add the missing parameters from the random augment functions.
End of explanation
def load_dataset(csv_of_filenames, batch_size, training=True):
dataset = tf.data.TextLineDataset(filenames=csv_of_filenames) \
.map(decode_csv).cache()
if training:
dataset = dataset \
.map(read_and_preprocess_with_augment) \
.shuffle(SHUFFLE_BUFFER) \
.repeat(count=None) # Indefinately.
else:
dataset = dataset \
.map(read_and_preprocess) \
.repeat(count=1) # Each photo used once.
# Prefetch prepares the next set of batches while current batch is in use.
return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE)
Explanation: Finally, we'll make a function to craft our full dataset using tf.data.dataset. The tf.data.TextLineDataset will read in each line in our train/eval csv files to our decode_csv function.
.cache is key here: it stores the decoded dataset in memory after the first pass, so later epochs don't have to re-read and re-decode every image file.
End of explanation
train_path = "gs://cloud-ml-data/img/flower_photos/train_set.csv"
train_data = load_dataset(train_path, 1)
itr = iter(train_data)
Explanation: We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.
End of explanation
image_batch, label_batch = next(itr)
img = image_batch[0]
plt.imshow(img)
print(label_batch[0])
Explanation: TODO 1.c: Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?
End of explanation
eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv"
nclasses = len(CLASS_NAMES)
hidden_layer_1_neurons = 400
hidden_layer_2_neurons = 100
dropout_rate = 0.25
num_filters_1 = 64
kernel_size_1 = 3
pooling_size_1 = 2
num_filters_2 = 32
kernel_size_2 = 3
pooling_size_2 = 2
layers = [
Conv2D(num_filters_1, kernel_size=kernel_size_1,
activation='relu',
input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),
MaxPooling2D(pooling_size_1),
Conv2D(num_filters_2, kernel_size=kernel_size_2,
activation='relu'),
MaxPooling2D(pooling_size_2),
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
]
old_model = Sequential(layers)
old_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
train_ds = load_dataset(train_path, BATCH_SIZE)
eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)
old_model.fit(
train_ds,
epochs=5,
steps_per_epoch=5,
validation_data=eval_ds,
validation_steps=VALIDATION_STEPS
)
Explanation: Note: It may take 4-5 minutes to see the results of different batches.
MobileNetV2
These flower photos are much larger than the handwriting recognition images in MNIST. They have about 8 times as many pixels per axis and three color channels, making each input nearly 200 times larger!
How do our current techniques stand up? Copy your best model architecture over from the <a href="2_mnist_models.ipynb">MNIST models lab</a> and see how well it does after training for 5 epochs of 50 steps.
TODO 2.a Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model.
End of explanation
module_selection = "mobilenet_v2_100_224"
module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4" \
.format(module_selection)
transfer_model = tf.keras.Sequential([
hub.KerasLayer(module_handle, trainable=False),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(
nclasses,
activation='softmax',
kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
transfer_model.build((None,)+(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
transfer_model.summary()
Explanation: If your model is like mine, it learns a little bit, slightly better than random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we need to make it?
Enter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model.
Tensorflow Hub is a database of models, many of which can be used for Transfer Learning. We'll use a model called MobileNet, an architecture optimized for image classification on mobile devices, which can be deployed with TensorFlow Lite. Let's see how a model trained on ImageNet data compares to one built from scratch.
The tensorflow_hub python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as non-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand parameters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights.
TODO 2.b: Add a Hub Keras Layer at the top of the model using the handle provided.
End of explanation
transfer_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
train_ds = load_dataset(train_path, BATCH_SIZE)
eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)
transfer_model.fit(
train_ds,
epochs=5,
steps_per_epoch=5,
validation_data=eval_ds,
validation_steps=VALIDATION_STEPS
)
Explanation: Even though we're only adding one more Dense layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow!
Moment of truth. Let's compile this new model and see how it compares to our MNIST architecture.
End of explanation |
15,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: MLP in vanilla JAX
We construct a simple MLP with L hidden layers (relu activation), and scalar output (linear activation).
Note
Step2: Our first flax model
Here we recreate the vanilla model in flax. Since we don't specify how the parameters are initialized, the behavior will not be identical to the vanilla model --- we will fix this below, but for now, we focus on model construction.
We see that the model is a subclass of nn.Module, which is a subclass of Python's dataclass. The child class (written by the user) must define a model.call(inputs) method, that applies the function to the input, and a model.setup() method, that creates the modules inside this model.
The module (parent) class defines two main methods
Step3: Compact modules
To reduce the amount of boiler plate code, flax makes it possible to define a module just by writing the call method, avoiding the need to write a setup function. The corresponding layers will be created when the init funciton is called, so the input shape can be inferred lazily (when passed an input).
Step4: Explicit parameter initialization
We can control the initialization of the random parameters in each submodule by specifying an init function. Below we show how to initialize our MLP to match the vanilla JAX model. We then check both methods give the same outputs.
Step5: Creating your own modules
Now we illustrate how to create a module with its own parameters, instead of relying on composing built-in primitives. As an example, we write our own dense layer class.
Step6: Stochastic layers
Some layers may need a source of randomness. If so, we must pass them a PRNG in the init and apply functions, in addition to the PRNG used for parameter initialization. We illustrate this below using dropout. We construct two versions, one which is stochastic (for training), and one which is deterministic (for evaluation).
Step7: Stateful layers
In addition to parameters, linen modules can contain other kinds of variables, which may be mutable as we illustrate below.
Indeed, parameters are just a special case of variable.
In particular, this line
p = self.param('param_name', init_fn, shape, dtype)
is a convenient shorthand for this
Step8: Combining mutable variables and immutable parameters
We can combine mutable variables with immutable parameters.
As an example, consider a simplified version of batch normalization, which
computes the running mean of its inputs, and adds an optimzable offset (bias) term.
Step9: The intial variables are
Step10: To call the function with the updated batch stats, we have to stitch together the new mutated state with the old state, as shown below.
Step11: If we pass in x=2*ones(N,D), the running average gets updated to
$$
0.99 * 0.01 + (1-0.99) * 2.0 = 0.0299
$$
and the output becomes
$$
2- 0.0299 + 1 = 2.9701
$$
Step12: Optimization
Flax has several built-in (first-order) optimizers, as we illustrate below on a random linear function. (Note that we can also fit a model defined in flax using some other kind of optimizer, such as that provided by the optax library.)
Step13: Worked example
Step14: Data
Step15: Model
Step16: Training loop | Python Code:
import numpy as np
# np.set_printoptions(precision=3)
np.set_printoptions(formatter={"float": lambda x: "{0:0.5f}".format(x)})
import matplotlib.pyplot as plt
import jax
print(jax.__version__)
print(jax.devices())
from jax import lax, random, numpy as jnp
key = random.PRNGKey(0)
from typing import Any, Callable, Dict, Iterator, Mapping, Optional, Sequence, Tuple
# Useful type aliases
Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any
# Install Flax at head:
!pip install --upgrade -q git+https://github.com/google/flax.git
import flax
from flax.core import freeze, unfreeze
from flax import linen as nn
from flax import optim
from jax.config import config
# config.enable_omnistaging() # Linen requires enabling omnistaging
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/flax_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to neural networks using Flax
Flax / Linen is a neural net library, built on top of JAX, "designed to offer an implicit variable management API to save the user from having to manually thread thousands of variables through a complex tree of functions." To handle both current and future JAX transforms (configured and composed in any way), Linen Modules are defined as explicit functions of the form
$$
f(v_{in}, x) \rightarrow v_{out}, y
$$
Where $v_{in}$ is the collection of variables (eg. parameters) and PRNG state used by the model, $v_{out}$ the mutated output variable collections, $x$ the input data and $y$ the output data. We illustrate this below. Our tutorial is based on the official flax intro and linen colab. Details are in the flax source code. Note: please be sure to read our JAX tutorial first.
End of explanation
# We define the parameter initializers using a signature that is flax-compatible
# https://flax.readthedocs.io/en/latest/_modules/jax/_src/nn/initializers.html
def weights_init(key, shape, dtype=jnp.float32):
return random.normal(key, shape, dtype)
# return jnp.ones(shape, dtype)
def bias_init(key, shape, dtype=jnp.float32):
return jnp.zeros(shape, dtype)
def relu(a):
return jnp.maximum(a, 0)
# A minimal MLP class
class MLP0:
features: Sequence[int] # number of features in each layer
def __init__(self, features): # class constructor
self.features = features
def init(self, key, x): # initialize parameters
in_size = np.shape(x)[1]
sizes = np.concatenate(([in_size], self.features))
nlayers = len(sizes)
params = {}
for i in range(nlayers - 1):
in_size = sizes[i]
out_size = sizes[i + 1]
subkey1, subkey2, key = random.split(key, num=3)
W = weights_init(subkey1, (in_size, out_size))
b = bias_init(subkey2, out_size)
params[f"W{i}"] = W
params[f"b{i}"] = b
return params
def apply(self, params, x): # forwards pass
activations = x
nhidden_layers = len(self.features) - 1
for i in range(nhidden_layers):
W = params[f"W{i}"]
b = params[f"b{i}"]
outputs = jnp.dot(activations, W) + b
activations = relu(outputs)
# for final layer, no activation function
i = nhidden_layers
outputs = jnp.dot(activations, params[f"W{i}"]) + params[f"b{i}"]
return outputs
key = random.PRNGKey(0)
D = 3
N = 2
x = random.normal(
key,
(
N,
D,
),
)
layer_sizes = [3, 1] # 1 hidden layer of size 3, 1 scalar output
model0 = MLP0(layer_sizes)
params0 = model0.init(key, x)
print("params")
for k, v in params0.items():
print(k, v.shape)
print(v)
y0 = model0.apply(params0, x)
print("\noutput")
print(y0)
Explanation: MLP in vanilla JAX
We construct a simple MLP with L hidden layers (relu activation), and scalar output (linear activation).
Note: JAX and Flax, like NumPy, are row-based systems, meaning that vectors are represented as row vectors and not column vectors.
End of explanation
class MLP(nn.Module):
features: Sequence[int]
default_attr: int = 42
def setup(self):
print("setup")
self.layers = [nn.Dense(feat) for feat in self.features]
def __call__(self, inputs):
print("call")
x = inputs
for i, lyr in enumerate(self.layers):
x = lyr(x)
if i != len(self.layers) - 1:
x = nn.relu(x)
return x
key = random.PRNGKey(0)
D = 3
N = 2
x = random.normal(
key,
(
N,
D,
),
)
layer_sizes = [3, 1] # 1 hidden layer of size 3, 1 scalar output
print("calling constructor")
model = MLP(layer_sizes) # just initialize attributes of the object
print("OUTPUT")
print(model)
print("\ncalling init")
variables = model.init(key, x) # calls setup then __call___
print("OUTPUT")
print(variables)
print("Calling apply")
y = model.apply(variables, x) # calls setup then __call___
print(y)
Explanation: Our first flax model
Here we recreate the vanilla model in flax. Since we don't specify how the parameters are initialized, the behavior will not be identical to the vanilla model --- we will fix this below, but for now, we focus on model construction.
We see that the model is a subclass of nn.Module, which is a subclass of Python's dataclass. The child class (written by the user) must define a model.call(inputs) method, that applies the function to the input, and a model.setup() method, that creates the modules inside this model.
The module (parent) class defines two main methods: model.apply(variables, input, that applies the function to the input (and variables) to generate an output; and model.init(key, input), that initializes the variables and returns them as a "frozen dictionary". This dictionary can contain multiple kinds of variables. In the example below, the only kind are parameters, which are immutable variables (that will usually get updated in an external optimization loop, as we show later). The parameters are automatically named after the corresponding module (here, dense0, dense1, etc). In this example, both modules are dense layers, so their parameters are a weight matrix (called 'kernel') and a bias vector.
The hyper-parameters (in this case, the size of each layer) are stored as attributes of the class, and are specified when the module is constructed.
End of explanation
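# A compact way to inspect the parameter tree without printing the full arrays
# (just an illustrative aside).
print(jax.tree_util.tree_map(lambda p: p.shape, variables))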
class MLP(nn.Module):
features: Sequence[int]
@nn.compact
def __call__(self, inputs):
x = inputs
for i, feat in enumerate(self.features):
x = nn.Dense(feat)(x)
if i != len(self.features) - 1:
x = nn.relu(x)
return x
model = MLP(layer_sizes)
print(model)
params = model.init(key, x)
print(params)
y = model.apply(params, x)
print(y)
Explanation: Compact modules
To reduce the amount of boilerplate code, flax makes it possible to define a module just by writing the call method, avoiding the need to write a setup function. The corresponding layers will be created when the init function is called, so the input shape can be inferred lazily (when passed an input).
End of explanation
def make_const_init(x):
def init_params(key, shape, dtype=jnp.float32):
return x
return init_params
class MLP_init(nn.Module):
features: Sequence[int]
params_init: Dict
def setup(self):
nlayers = len(self.features)
layers = []
for i in range(nlayers):
W = self.params_init[f"W{i}"]
b = self.params_init[f"b{i}"]
weights_init = make_const_init(W)
bias_init = make_const_init(b)
layer = nn.Dense(self.features[i], kernel_init=weights_init, bias_init=bias_init)
layers.append(layer)
self.layers = layers
def __call__(self, inputs):
x = inputs
for i, lyr in enumerate(self.layers):
x = lyr(x)
if i != len(self.layers) - 1:
x = nn.relu(x)
return x
params_init = params0
model = MLP_init(layer_sizes, params_init)
print(model)
variables = model.init(key, x)
params = variables["params"]
print(params)
W0 = params0["W0"]
W = params["layers_0"]["kernel"]
assert np.allclose(W, W0)
y = model.apply(variables, x)
print(y)
assert np.allclose(y, y0)
Explanation: Explicit parameter initialization
We can control the initialization of the random parameters in each submodule by specifying an init function. Below we show how to initialize our MLP to match the vanilla JAX model. We then check both methods give the same outputs.
End of explanation
class SimpleDense(nn.Module):
features: int # num output features for this layer
kernel_init: Callable = nn.initializers.lecun_normal()
bias_init: Callable = nn.initializers.zeros
@nn.compact
def __call__(self, inputs):
features_in = inputs.shape[-1] # infer shape from input
features_out = self.features
kernel = self.param("kernel", self.kernel_init, (features_in, features_out))
bias = self.param("bias", self.bias_init, (features_out,))
outputs = jnp.dot(inputs, kernel) + bias
return outputs
model = SimpleDense(features=3)
print(model)
vars = model.init(key, x)
print(vars)
y = model.apply(vars, x)
print(y)
Explanation: Creating your own modules
Now we illustrate how to create a module with its own parameters, instead of relying on composing built-in primitives. As an example, we write our own dense layer class.
End of explanation
class Block(nn.Module):
features: int
training: bool
@nn.compact
def __call__(self, inputs):
x = nn.Dense(self.features)(inputs)
x = nn.Dropout(rate=0.5)(x, deterministic=not self.training)
return x
N = 1
D = 2
x = random.uniform(key, (N, D))
model = Block(features=3, training=True)
key = random.PRNGKey(0)
variables = model.init({"params": key, "dropout": key}, x)
# variables = model.init(key, x) # cannot share the rng
print("variables", variables)
# Apply stochastic model
for i in range(2):
key, subkey = random.split(key)
y = model.apply(variables, x, rngs={"dropout": subkey})
print(f"train output {i}, ", y)
# Now make a deterministic version
eval_model = Block(features=3, training=False)
key = random.PRNGKey(0)
# variables = eval_model.init({'params': key, 'dropout': key}, x)
for i in range(2):
key, subkey = random.split(key)
y = eval_model.apply(variables, x, rngs={"dropout": subkey})
print(f"eval output {i}, ", y)
Explanation: Stochastic layers
Some layers may need a source of randomness. If so, we must pass them a PRNG in the init and apply functions, in addition to the PRNG used for parameter initialization. We illustrate this below using dropout. We construct two versions, one which is stochastic (for training), and one which is deterministic (for evaluation).
End of explanation
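A common alternative pattern (a sketch, relying on nn.Dropout's standard deterministic call-time argument) is to pass the train/eval flag to __call__ instead of storing it as a module attribute, so a single set of variables serves both modes; BlockFlag and the usage lines below are illustrative.
class BlockFlag(nn.Module):
    features: int
    @nn.compact
    def __call__(self, inputs, train: bool):
        x = nn.Dense(self.features)(inputs)
        # deterministic=True disables dropout (evaluation mode).
        x = nn.Dropout(rate=0.5)(x, deterministic=not train)
        return x
# vars_flag = BlockFlag(3).init({"params": key, "dropout": key}, x, True)
# y_train = BlockFlag(3).apply(vars_flag, x, True, rngs={"dropout": subkey})
# y_eval = BlockFlag(3).apply(vars_flag, x, False)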
class Counter(nn.Module):
@nn.compact
def __call__(self):
# variable(collection, name, init_fn, *init_args)
counter1 = self.variable("counter", "count1", lambda: jnp.zeros((), jnp.int32))
counter2 = self.variable("counter", "count2", lambda: jnp.zeros((), jnp.int32))
is_initialized = self.has_variable("counter", "count1")
if is_initialized:
counter1.value += 1
counter2.value += 2
return counter1.value, counter2.value
model = Counter()
print(model)
init_variables = model.init(key)  # calls the __call__ method
print("initialized variables:\n", init_variables)
counter = init_variables["counter"]["count1"]
print("counter 1 value", counter)
y, mutated_variables = model.apply(init_variables, mutable=["counter"])
print("mutated variables:\n", mutated_variables)
print("output:\n", y)
Explanation: Stateful layers
In addition to parameters, linen modules can contain other kinds of variables, which may be mutable, as we illustrate below.
Indeed, parameters are just a special kind of variable.
In particular, this line
p = self.param('param_name', init_fn, shape, dtype)
is a convenient shorthand for this:
p = self.variable('params', 'param_name', lambda s, d: init_fn(self.make_rng('params'), s, d), shape, dtype).value
Example: counter
End of explanation
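To see the statefulness explicitly, the mutated collection returned by apply can be fed straight back into a second call (a small sketch building on the cell above).
# Sketch: thread the mutated state into the next call; both counters advance again.
y2, mutated_variables2 = model.apply(mutated_variables, mutable=["counter"])
print(y2)  # expected (2, 4) under the Counter definition above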
class BiasAdderWithRunningMean(nn.Module):
decay: float = 0.99
@nn.compact
def __call__(self, x):
is_initialized = self.has_variable("params", "bias")
# variable(collection, name, init_fn, *init_args)
ra_mean = self.variable("batch_stats", "mean", lambda s: jnp.zeros(s), x.shape[1:])
dummy_mutable = self.variable("mutables", "dummy", lambda s: 42, 0)
# param(name, init_fn, *init_args)
bias = self.param("bias", lambda rng, shape: jnp.ones(shape), x.shape[1:])
if is_initialized:
ra_mean.value = self.decay * ra_mean.value + (1.0 - self.decay) * jnp.mean(x, axis=0, keepdims=True)
return x - ra_mean.value + bias
Explanation: Combining mutable variables and immutable parameters
We can combine mutable variables with immutable parameters.
As an example, consider a simplified version of batch normalization, which
computes the running mean of its inputs, and adds an optimizable offset (bias) term.
End of explanation
key = random.PRNGKey(0)
N = 2
D = 5
x = jnp.ones((N, D))
model = BiasAdderWithRunningMean()
variables = model.init(key, x)
print("initial variables:\n", variables)
nonstats, stats = variables.pop("batch_stats")
print("nonstats", nonstats)
print("stats", stats)
y, mutables = model.apply(variables, x, mutable=["batch_stats"])
print("output", y)
print("mutables", mutables)
Explanation: The initial variables are:
params = (bias=1), batch_stats=(mean=0)
If we pass in x=ones(N,D), the running average becomes
$$
0.99 * 0 + (1-0.99) * 1 = 0.01
$$
and the output becomes
$$
1 - 0.01 + 1 = 1.99
$$
End of explanation
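A quick sanity check (a sketch mirroring the assertions used later in this notebook) confirms the arithmetic above.
assert np.allclose(y, 1.99)
assert np.allclose(mutables["batch_stats"]["mean"], 0.01)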
variables = unfreeze(nonstats)
print(variables)
variables["batch_stats"] = mutables["batch_stats"]
variables = freeze(variables)
print(variables)
Explanation: To call the function with the updated batch stats, we have to stitch together the new mutated state with the old state, as shown below.
End of explanation
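An equivalent merge can be done without the unfreeze/freeze round trip, assuming the installed Flax version provides FrozenDict.copy with an add_or_replace argument (a sketch).
# Sketch: merge the mutated collection back into the frozen variables directly.
variables_alt = nonstats.copy({"batch_stats": mutables["batch_stats"]})
print(variables_alt)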
x = 2 * jnp.ones((N, D))
y, mutables = model.apply(variables, x, mutable=["batch_stats"])
print("output", y)
print("batch_stats", mutables)
assert np.allclose(y, 2.9701)
assert np.allclose(mutables["batch_stats"]["mean"], 0.0299)
Explanation: If we pass in x=2*ones(N,D), the running average gets updated to
$$
0.99 * 0.01 + (1-0.99) * 2.0 = 0.0299
$$
and the output becomes
$$
2 - 0.0299 + 1 = 2.9701
$$
End of explanation
D = 5
key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (D,))}
print(params)
x = jax.random.normal(key, (D,))
def loss(params):
w = params["w"]
return jnp.dot(x, w)
loss_grad_fn = jax.value_and_grad(loss)
v, g = loss_grad_fn(params)
print(v)
print(g)
from flax import optim
optimizer_def = optim.Momentum(learning_rate=0.1, beta=0.9)
print(optimizer_def)
optimizer = optimizer_def.create(params)
print(optimizer)
for i in range(10):
params = optimizer.target
loss_val, grad = loss_grad_fn(params)
optimizer = optimizer.apply_gradient(grad)
params = optimizer.target
print("step {}, loss {:0.3f}, params {}".format(i, loss_val, params))
Explanation: Optimization
Flax has several built-in (first-order) optimizers, as we illustrate below on a random linear function. (Note that we can also fit a model defined in Flax using some other kind of optimizer, such as those provided by the optax library.)
End of explanation
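Since newer Flax releases deprecate flax.optim in favor of optax, here is a hedged sketch of roughly the same momentum update written with the standard optax API (sgd / apply_updates), starting again from the initial params defined above; the exact momentum convention may differ slightly from flax.optim.Momentum.
import optax
tx = optax.sgd(learning_rate=0.1, momentum=0.9)
params = {"w": jax.random.normal(key, (D,))}  # re-initialize as above
opt_state = tx.init(params)
for i in range(10):
    loss_val, grads = loss_grad_fn(params)
    updates, opt_state = tx.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    print("step {}, loss {:0.3f}, params {}".format(i, loss_val, params))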
!pip install superimport
!wget https://raw.githubusercontent.com/probml/pyprobml/master/scripts/fit_flax.py
import fit_flax as ff
ff.test()
Explanation: Worked example: MLP for MNIST
We demonstrate how to fit a shallow MLP to MNIST using Flax.
We use this function:
https://github.com/probml/pyprobml/blob/master/scripts/fit_flax.py
Import code
End of explanation
import tensorflow_datasets as tfds
import tensorflow as tf
def process_record(batch):
image = batch["image"]
label = batch["label"]
# flatten image to vector
shape = image.get_shape().as_list()
D = np.prod(shape) # no batch dimension
image = tf.reshape(image, (D,))
# rescale to -1..+1
image = tf.cast(image, dtype=tf.float32)
image = ((image / 255.0) - 0.5) * 2.0
# convert to standard names
return {"X": image, "y": label}
def load_mnist(split, batch_size):
dataset, info = tfds.load("mnist", split=split, with_info=True)
dataset = dataset.map(process_record)
if split == "train":
dataset = dataset.shuffle(10 * batch_size, seed=0)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
dataset = dataset.repeat()
dataset = tfds.as_numpy(dataset) # leave TF behind
num_examples = info.splits[split].num_examples
return iter(dataset), num_examples
batch_size = 100
train_iter, num_train = load_mnist("train", batch_size)
test_iter, num_test = load_mnist("test", batch_size)
num_epochs = 3
num_steps = num_train // batch_size
print(f"{num_epochs} epochs with batch size {batch_size} will take {num_steps} steps")
batch = next(train_iter)
print(batch["X"].shape)
print(batch["y"].shape)
Explanation: Data
End of explanation
class Model(nn.Module):
nhidden: int
nclasses: int
@nn.compact
def __call__(self, x):
if self.nhidden > 0:
x = nn.Dense(self.nhidden)(x)
x = nn.relu(x)
x = nn.Dense(self.nclasses)(x) # logits
x = nn.log_softmax(x) # log probabilities
return x
Explanation: Model
End of explanation
model = Model(nhidden=128, nclasses=10)
rng = jax.random.PRNGKey(0)
num_steps = 200
params, history = ff.fit_model(model, rng, num_steps, train_iter, test_iter, print_every=20)
display(history)
plt.figure()
plt.plot(history["test_accuracy"], "o-", label="test accuracy")
plt.xlabel("num. minibatches")
plt.legend()
plt.show()
Explanation: Training loop
End of explanation |
15,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create dataframe
Step2: Select a random subset of 2 without replacement | Python Code:
import pandas as pd
import numpy as np
Explanation: Title: Random Sampling Dataframe
Slug: pandas_sampling_dataframe
Summary: Random Sampling Dataframe
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
import modules
End of explanation
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age', 'preTestScore', 'postTestScore'])
df
Explanation: Create dataframe
End of explanation
df.take(np.random.permutation(len(df))[:2])
Explanation: Select a random subset of 2 without replacement
End of explanation |
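A more direct alternative (a sketch; DataFrame.sample is part of the standard pandas API) achieves the same draw without building a full permutation.
# Sample 2 rows without replacement; random_state makes the draw reproducible.
df.sample(n=2, random_state=0)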