# An analysis of the dataset presented in [this technical comment](https://arxiv.org/abs/2004.06601), but with our quality cuts applied
As a response to our paper [Dessert et al. _Science_ 2020](https://science.sciencemag.org/content/367/6485/1465) (DRS20), we received [a technical comment](https://arxiv.org/abs/2004.06601) (BMRS). BMRS performed a simplified version of our analysis on a partially overlapping dataset using 17 Ms of MOS observations spanning 20$^\circ$ to 35$^\circ$ from the Galactic Center. They assumed a single power-law background with additional lines at 3.1, 3.3, 3.7, and 3.9 keV, and claimed a 4$\sigma$ detection of a line at 3.48 keV using an energy window of 3-4 keV. However, it is important to note that the BMRS analysis does not apply any (stated) quality cuts to their dataset. On the other hand, as detailed in DRS20, we selected low-background or blank-sky observations, so our data are much cleaner.
In our formal response to the technical comment, we repeat this analysis on the 8.5 Ms of the BMRS dataset that passes our quality cuts. In this notebook, we show this data and analysis in detail. Many of the details follow the procedure used in the notebook `DRS20_mos_stacked`; for a pedagogical introduction to the analysis, we refer the reader to that notebook.
If you use the data in this example in a publication, please cite Dessert et al. _Science_ 2020.
**Please direct any questions to [email protected].**
```
# Import required modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys,os
import numpy as np
from scipy.stats import chi2 as chi2_scipy
from scipy.optimize import dual_annealing
from scipy.optimize import minimize
import matplotlib.pyplot as plt
from matplotlib import rc
from matplotlib import rcParams
rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
rcParams['text.usetex'] = True
# rcParams['text.latex.unicode'] = True  # no longer needed; this rcParam was removed in matplotlib >= 3.0
```
**NB**: In this notebook, we minimize with `scipy` so that it is easy to run for the interested reader. For scientific analysis, we recommend [Minuit](https://iminuit.readthedocs.io/en/latest/) as a minimizer. In our paper, we used Minuit.
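For concreteness, a minimal sketch of how the same minimization could be set up with Minuit is shown below (our addition, not used elsewhere in the notebook; it assumes iminuit $\geq$ 2.0 is installed, and `fit_with_minuit` is an illustrative name):
```
# Minimal sketch: minimize any of the chi^2 functions defined below with Minuit instead of scipy
def fit_with_minuit(chi2_fn, x0):
    from iminuit import Minuit
    m = Minuit(chi2_fn, x0)             # chi2_fn takes a single array of parameters
    m.errordef = Minuit.LEAST_SQUARES   # chi^2-style cost function (errordef = 1)
    m.migrad()                          # run the MIGRAD minimizer
    return np.array(m.values), m.fval   # best-fit parameters and minimum chi^2
```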
# Define signal line energy
By default we will look for an anomalous line at 3.48 keV, as defined by the EUXL parameter below, denoting the energy of the unidentified X-ray line. Lines at different energies can be searched for by changing this parameter accordingly (for example to 3.55 keV as in the previous notebook). We start with 3.48 keV as this is the fiducial line energy in BMRS. We note that 3.48 keV is the energy where the weakest limit is obtained, although on the clean data we will not find any evidence for a feature there.
```
EUXL = 3.48 # [keV]
```
**NB:** changing EUXL will of course change the results below, and the numerical values quoted in the surrounding discussion will not necessarily reflect the new outputs.
# Load in the data and models
First we will load in the data products that we will use in the analysis. These include the stacked MOS data, associated energy bins, and uncertainties.
We will use data from two regions of interest (ROI):
- **Signal Region (SR)**: 20-35 degrees from the Galactic Center; this was the fiducial ROI in BMRS (DRS20 instead used 5-45 degrees);
- **Background Region (BR)**: 60-90 degrees from the Galactic Center, a useful region for studying background as it contains less dark matter.
We also load the appropriately averaged D-factors for these two regions (ROIs) for our fiducial NFW profile, along with the respective exposure times.
```
## Signal Region (20-35 degrees)
data = np.load("../data/data_mos_boyarsky_ROI_our_cuts.npy") # [cts/s/keV]
data_yerrs = np.load("../data/data_yerrs_mos_boyarsky_ROI_our_cuts.npy") # [cts/s/keV]
QPB = np.load("../data/QPB_mos_boyarsky_ROI_our_cuts.npy") # [cts/s/keV]
# Exposure time
Exp = 8.49e6 # [s]
# D-factor averaged over the signal ROI
D_signal = 4.4e28 # [keV/cm^2]
## Background Region (60-90 degrees)
# Data and associated errors
data_bkg = np.load("../data/data_mos_bkg_ROI.npy") # [cts/s/keV]
data_yerrs_bkg = np.load("../data/data_yerrs_mos_bkg_ROI.npy") # [cts/s/keV]
# Exposure time
Exp_bkg = 67.64e6 # [s]
# D-factor averaged over the background ROI
D_bkg = 1.91e28 # [keV/cm^2]
## Energy binning appropriate for both the signal and background
Energies=np.load("../data/mos_energies.npy") # [keV]
```
## Load in the Models
Next we use the models that will be used in fitting the above data.
There is a sequence of models corresponding to physical line fluxes at the energies specified by `Es_line`. That is, `mod_UXL` gives the detector counts as a function of energy after forward modeling a physical line at EUXL keV with a flux of 1 cts/cm$^2$/s/sr.
```
# Load the forward-modeled lines and energies
mods = np.load("../data/mos_mods.npy")
Es_line = np.load("../data/mos_mods_line_energies.npy")
# Load the detector response
det_res = np.load("../data/mos_det_res.npy")
arg_UXL = np.argmin((Es_line-EUXL)**2)
mod_UXL = mods[arg_UXL]
print "The energy of our "+str(EUXL)+" keV line example will be: "+str(Es_line[arg_UXL])+" keV"
# How to go from flux to sin^2(2\theta)
def return_sin_theta_lim(E_line,flux,D_factor):
"""
D_factor [keV/cm^2]
flux [cts/cm^2/s/sr]
E_line [keV] (dark matter mass is twice this value)
returns: associated sin^2(2theta)
"""
DMmass = 2.*E_line
res = (4.*np.pi*DMmass/D_factor)/1.361e-22*(1/DMmass)**5*flux
return res
```
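For reference, the conversion implemented in `return_sin_theta_lim` follows from the radiative decay rate of a sterile neutrino of mass $m_s = 2E_{\rm line}$,
$$\Gamma \simeq 1.361\times10^{-22}\,\sin^2(2\theta)\left(\frac{m_s}{\rm keV}\right)^5~{\rm s}^{-1}\,,$$
combined with the decay flux $\Phi = \Gamma D/(4\pi m_s)$ for the ROI-averaged D-factor $D$ defined above; inverting this for $\sin^2(2\theta)$ gives the expression in the code.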
# Visualize the data
Below we plot the data in the signal region; the dashed vertical line denotes the location of the putative signal line. Note in particular that the flux is similar to that in Fig. 2 of DRS20, indicating that the included observations are low-background.
```
fig = plt.figure(figsize=(10,8))
plt.errorbar(Energies,data,yerr=data_yerrs,xerr=(Energies[1]-Energies[0])/2.,
color="black",label="data",marker="o", fmt='none',capsize=4)
plt.axvline(EUXL,color="black",linestyle="dashed")
plt.xlim(EUXL-0.25,EUXL+0.25)
plt.ylim(7.9e-2,0.1)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"SR Flux [cts/s/keV]",fontsize=22)
plt.show()
```
# Statistical analysis
Now, let's perform a rigorous statistical analysis, using profile likelihood. As we operate in the large counts limit for the stacked data, we can perform a simple $\chi^2$ analysis rather than a full joint likelihood analysis as used by default in Dessert et al. 2020.
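Explicitly, for data $d_i$ with uncertainties $\sigma_i$ in energy bins $i$, a background model $B_i(\vec{\theta})$, and the signal template $S_i$ with normalization $A$, the statistic minimized below is
$$\chi^2(A,\vec{\theta}) = \sum_i \frac{\left[d_i - B_i(\vec{\theta}) - A\, S_i\right]^2}{\sigma_i^2}\,,$$
and the evidence for a line is quantified by the difference between $\chi^2$ minimized with and without the signal term.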
```
## Define the functions we will use
class chi2:
""" A set offunctions for calculation the chisq associated with different hypotheses
"""
def __init__(self,ens,dat,err,null_mod,sig_template):
self._ens = ens
self._dat = dat
self._err = err
self._null_mod = null_mod
self._sig_template = sig_template
self._A_sig = 0.0
def chi2(self,x):
null_mod = self._null_mod(self._ens,x[1:])
sig_mod = self._sig_template*x[0]
return np.sum((self._dat - null_mod - sig_mod)**2/self._err**2)
def chi2_null(self,x):
null_mod = self._null_mod(self._ens,x)
return np.sum((self._dat - null_mod)**2/self._err**2)
def chi2_fixed_signal(self,x):
null_mod = self._null_mod(self._ens,x)
sig_mod = self._sig_template*self._A_sig
return np.sum((self._dat - null_mod - sig_mod)**2/self._err**2)
def fix_signal_strength(self,A_sig):
self._A_sig = A_sig
```
## Fit within $E_{\rm UXL} \pm 0.25$ keV
First, we will fit the models from $[E_{\rm UXL}-0.25,\,E_{\rm UXL}+0.25]$ keV. Later in this notebook, we broaden this range to 3.0 to 4.0 keV. For the default $E_{\rm UXL} = 3.48$ keV, this corresponds to $3.23~{\rm keV} < E < 3.73~{\rm keV}$.
To begin with then, let's reduce the dataset to this restricted range.
```
whs_reduced = np.where((Energies >= EUXL-0.25) & (Energies <= EUXL+0.25))[0]
Energies_reduced = Energies[whs_reduced]
data_reduced = data[whs_reduced]
data_yerrs_reduced = data_yerrs[whs_reduced]
data_bkg_reduced = data_bkg[whs_reduced]
data_yerrs_bkg_reduced = data_yerrs_bkg[whs_reduced]
mod_UXL_reduced = mod_UXL[whs_reduced]
```
Let's fit this data with the background only hypothesis and consider the quality of fit.
## A polynomial background model
Here we model the continuum background as a quadratic. In addition, we add degrees of freedom associated with the possible background lines at 3.3 keV and 3.7 keV.
```
arg_3p3 = np.argmin((Es_line-3.32)**2)
mod_3p3 = mods[arg_3p3]
arg_3p7 = np.argmin((Es_line-3.68)**2)
mod_3p7 = mods[arg_3p7]
def mod_poly_two_lines(ens,x):
"An extended background model to include two additional lines"
A, B, C, S1, S2 = x
return A+B*ens + C*ens**2 + S1*mod_3p3[whs_reduced] + S2*mod_3p7[whs_reduced]
chi2_instance = chi2(Energies_reduced,data_reduced,data_yerrs_reduced,mod_poly_two_lines,mod_UXL_reduced)
mn_null_line = minimize(chi2_instance.chi2_null,np.array([0.282,-0.098, 0.011,0.1,0.1]),method='Nelder-Mead')
mn_line = minimize(chi2_instance.chi2,np.array([1.e-2,mn_null_line.x[0],mn_null_line.x[1],mn_null_line.x[2],mn_null_line.x[3],mn_null_line.x[4]]),method='Nelder-Mead',options={'fatol':1e-10,'xatol':1e-10,'adaptive':True})
print "The Delta chi^2 between signal and null model is:", mn_null_line.fun - mn_line.fun
print "The chi^2/DOF of the null-model fit is:", mn_null_line.fun/(len(whs_reduced)-5.)
print "Expected 68% containment for the chi^2/DOF:", np.array(chi2_scipy.interval(0.68,len(whs_reduced)-5.))/float(len(whs_reduced)-5.)
print "Expected 99% containment for the chi^2/DOF:", np.array(chi2_scipy.interval(0.99,len(whs_reduced)-5.))/float(len(whs_reduced)-5.)
```
The null model is a good fit to the data, and the best-fit signal strength is still consistent with zero at 1$\sigma$.
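The local significance can be estimated from the $\Delta \chi^2$ above as roughly $\sqrt{\Delta\chi^2}$ for this single-parameter signal model (an approximation based on Wilks' theorem; the snippet below is our addition):
```
# Approximate local significance of the putative line from the Delta chi^2 above (Wilks' theorem)
delta_chi2 = mn_null_line.fun - mn_line.fun
print("Approximate local significance: {:.2f} sigma".format(np.sqrt(max(delta_chi2, 0.))))
```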
Next we plot the best-fit signal and background models. In particular, we see that the two are almost identical, emphasizing the lack of preference for a new emission line at 3.48 keV in this dataset.
```
fig = plt.figure(figsize=(10,8))
plt.errorbar(Energies,data,yerr=data_yerrs,xerr=(Energies[1]-Energies[0])/2.,
color="black",label="data",marker="o", fmt='none',capsize=4)
plt.plot(Energies_reduced,mod_poly_two_lines(Energies_reduced,mn_null_line.x),'k-',label =r"Null model")
plt.plot(Energies_reduced,mod_poly_two_lines(Energies_reduced,mn_line.x[1:])+mn_line.x[0]*mod_UXL_reduced,
'r-',label =r"Signal model")
plt.axvline(EUXL,color="black",linestyle="dashed")
plt.xlim(EUXL-0.25,EUXL+0.25)
plt.ylim(0.08,0.1)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"SR Flux [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.show()
```
Finally let's compute the associated limit via profile likelihood.
```
A_sig_array = np.linspace(mn_line.x[0],0.05,100)
chi2_sig_array = np.zeros(len(A_sig_array))
bf = mn_line.x[1:]
for i in range(len(A_sig_array)):
chi2_instance.fix_signal_strength(A_sig_array[i])
mn_profile = minimize(chi2_instance.chi2_fixed_signal,bf,method='Nelder-Mead',
options={'fatol':1e-10,'xatol':1e-10,'adaptive':True})
bf = mn_profile.x
chi2_sig_array[i] = mn_profile.fun
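# Delta chi^2 = 2.71 above the best fit corresponds to a one-sided 95% upper limit
# for a single signal parameter (chi^2 distribution with one degree of freedom)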
amin = np.argmin((chi2_sig_array-chi2_sig_array[0] - 2.71)**2)
limit_signal_strength = A_sig_array[amin]
print "The 95% upper limit on the signal flux is", limit_signal_strength, "cts/cm^2/s/sr"
print "This corresponds to a limit on sin^2(2theta) of", return_sin_theta_lim(EUXL,limit_signal_strength,D_signal)
```
## Power law background model
Now let's try a power law for the continuum background model (along with the two lines), as done in BMRS. Given that the stacked data is a sum of power laws, we would not expect it to be a power law itself, although in our relatively clean dataset we find a power law to be a reasonable description.
```
def mod_power_two_lines(ens,x):
"An extended background model to include two additional lines"
A, n, S1, S2 = x
return A*ens**n + S1*mod_3p3[whs_reduced] + S2*mod_3p7[whs_reduced]
chi2_instance = chi2(Energies_reduced,data_reduced,data_yerrs_reduced,mod_power_two_lines,mod_UXL_reduced)
mn_null_line = minimize(chi2_instance.chi2_null,np.array([0.18244131, -0.58714693, 0.02237754, 0.01157593]),method='Nelder-Mead')
mn_line = minimize(chi2_instance.chi2,np.array([1.e-2,mn_null_line.x[0],mn_null_line.x[1],mn_null_line.x[2],mn_null_line.x[3]]),method='Nelder-Mead',options={'fatol':1e-10,'xatol':1e-10,'adaptive':True})
print "The Delta chi^2 between signal and null model is:", mn_null_line.fun - mn_line.fun
print "The chi^2/DOF of the null-model fit is:", mn_null_line.fun/(len(whs_reduced)-4.)
fig = plt.figure(figsize=(10,8))
plt.errorbar(Energies,data,yerr=data_yerrs,xerr=(Energies[1]-Energies[0])/2.,
color="black",label="data",marker="o", fmt='none',capsize=4)
plt.plot(Energies_reduced,mod_power_two_lines(Energies_reduced,mn_null_line.x),'k-',label =r"Null model")
plt.plot(Energies_reduced,mod_power_two_lines(Energies_reduced,mn_line.x[1:])+mn_line.x[0]*mod_UXL_reduced,
'r-',label =r"Signal model")
plt.axvline(EUXL,color="black",linestyle="dashed")
plt.xlim(EUXL-0.25,EUXL+0.25)
plt.ylim(0.08,0.1)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"SR Flux [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.show()
A_sig_array = np.linspace(mn_line.x[0],0.05,100)
chi2_sig_array = np.zeros(len(A_sig_array))
bf = mn_line.x[1:]
for i in range(len(A_sig_array)):
chi2_instance.fix_signal_strength(A_sig_array[i])
mn_profile = minimize(chi2_instance.chi2_fixed_signal,bf,method='Nelder-Mead',
options={'fatol':1e-10,'xatol':1e-10,'adaptive':True})
bf = mn_profile.x
chi2_sig_array[i] = mn_profile.fun
amin = np.argmin((chi2_sig_array-chi2_sig_array[0] - 2.71)**2)
limit_signal_strength = A_sig_array[amin]
print "The 95% upper limit on the signal flux is", limit_signal_strength, "cts/cm^2/s/sr"
print "This corresponds to a limit on sin^2(2theta) of", return_sin_theta_lim(EUXL,limit_signal_strength,D_signal)
```
The power-law continuum background does not substantively change the results: we still find no evidence for a line. Note this is the same procedure as the BMRS test color-coded red in their Fig. 1 and Tab. 1. In that analysis, they found marginal 1.3$\sigma$ evidence for a line, whereas on our cleaner dataset we find no evidence.
**NB:** As an aside, BMRS also perform an analysis, color-coded green in their Fig. 1 and Tab. 1, in which they fix the 3.3 keV and 3.7 keV emission lines to their best fit fluxes in the fit. They claim that DRS20, in our Supplementary Material Sec 2.7, also fixed the fluxes of these lines. This statement is incorrect.
# Departing from the narrow window
We now fit the same dataset over the 3-4 keV range.
Our procedure is as follows. First, we update the dataset to cover the wider energy range. Then we define a new background model incorporating the additional background lines at 3.1 and 3.9 keV. Finally, we repeat our default $\chi^2$ fit procedure. Note that we continue to use a power-law continuum background model here. As such, the following analysis is a repetition of the BMRS magenta color-coded analysis on this reduced and low-background dataset. In that magenta analysis, they claim a 4.0$\sigma$ detection of a line at 3.48 keV. Let us see what we obtain when we include only the observations passing our quality cuts.
```
whs_reduced = np.where((Energies >= 3.0) & (Energies <= 4.0))[0]
Energies_reduced = Energies[whs_reduced]
data_reduced = data[whs_reduced]
data_yerrs_reduced = data_yerrs[whs_reduced]
data_bkg_reduced = data_bkg[whs_reduced]
data_yerrs_bkg_reduced = data_yerrs_bkg[whs_reduced]
mod_UXL_reduced = mod_UXL[whs_reduced]
arg_3p1 = np.argmin((Es_line-3.12)**2)
mod_3p1 = mods[arg_3p1]
arg_3p9 = np.argmin((Es_line-3.90)**2)
mod_3p9 = mods[arg_3p9]
arg_3p7 = np.argmin((Es_line-3.68)**2)
mod_3p7 = mods[arg_3p7]
arg_3p3 = np.argmin((Es_line-3.32)**2)
mod_3p3 = mods[arg_3p3]
def mod_power_four_lines(ens,x):
A, n,S1, S2, S3, S4 = x
return A*ens**n + S1*mod_3p3[whs_reduced] + S2*mod_3p7[whs_reduced]+ S3*mod_3p1[whs_reduced] + S4*mod_3p9[whs_reduced]
chi2_instance = chi2(Energies_reduced,data_reduced,data_yerrs_reduced,mod_power_four_lines,mod_UXL_reduced)
x0 = np.array([0.18088868 ,-0.58201284 , 0.02472505 , 0.01364361 , 0.08959867,
0.03220519])
bounds = np.array([[1e-6,5],[-3,0],[0,0.5],[0,0.5],[0,0.5],[0,0.5]])
mn_null = dual_annealing(chi2_instance.chi2_null,x0=x0,bounds=bounds,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=500)
boundss = np.array([[-0.5,0.5],[1e-6,5],[-3,0],[0,0.1],[0,0.1],[0,0.1],[0,0.2]])
x0s=np.array([1.e-2,mn_null.x[0],mn_null.x[1],mn_null.x[2],mn_null.x[3],mn_null.x[4],mn_null.x[5]])
mn = dual_annealing(chi2_instance.chi2,x0=x0s,bounds=boundss,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=500)
print "Best fit background parameters:", mn_null.x
print "Best fit signal+background parameters:", mn.x
print "The Delta chi^2 between signal and null model is:", mn_null.fun - mn.fun
print "The chi^2/DOF of the null-model fit is:", mn_null.fun/(len(whs_reduced)-6.)
print "NB: the best-fit signal strength in this case is", mn.x[0], "cts/cm$^2$/s/sr"
```
We find no evidence for a 3.5 keV line when we expand the energy window. Although the best-fit signal strength is positive, the $\Delta \chi^2 \sim 0.03$, which corresponds to an entirely negligible significance.
Let's have a look at the best-fit signal and background models in this case. There are subtle differences between the two, but no excess appears at 3.48 keV.
Additionally, we define a fixed signal to plot on top of the data for illustration. The default signal parameters here correspond to a 2$\sigma$ downward fluctuation of the signal reported in [Cappelluti et al. ApJ 2018](https://iopscience.iop.org/article/10.3847/1538-4357/aaaa68/meta) from observations of the Chandra Deep Fields. Note that even with this conservative downward fluctuation, the signal is not a good fit to the data. This plot appears in our response to BMRS.
```
flux_ill = 4.8e-11 / return_sin_theta_lim(EUXL,1.,D_signal)
print "Flux [cts/cm^2/s/sr] and sin^(2theta) for illustration: ", flux_ill, return_sin_theta_lim(EUXL,flux_ill,D_signal)
chi2_instance.fix_signal_strength(flux_ill)
mn_f = dual_annealing(chi2_instance.chi2_fixed_signal,x0=x0,bounds=bounds,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=500)
print "Delta chi^2 between fixed signal and null:", mn_null.fun-mn_f.fun
def avg_data(data,n):
return np.mean(data.reshape(-1, n), axis=1)
fig = plt.figure(figsize=(10,8))
plt.errorbar(avg_data(Energies,6),avg_data(data,6),yerr=np.sqrt(6*avg_data(data_yerrs**2,6))/6.,xerr=6*(Energies[1]-Energies[0])/2.,
color="black",marker="o", fmt='none',capsize=4)
plt.plot(Energies_reduced,mod_power_four_lines(Energies_reduced,mn_null.x),
'k-',label =r"Null P.L. model")
plt.plot(Energies_reduced,mod_power_four_lines(Energies_reduced,mn.x[1:])+mn.x[0]*mod_UXL_reduced,
'r-',label =r"Best fit signal model")
plt.plot(Energies_reduced,mod_power_four_lines(Energies_reduced,mn_f.x)+chi2_instance._A_sig*mod_UXL_reduced,
'r--',label =r"$\sin^2(2\theta) = 4.8 \times 10^{-11}$")
plt.xlim(3,4)
plt.ylim(0.08,0.1)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"Flux [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.show()
```
**NB:** In the plot above we averaged the data solely for presentation purposes; no averaging was performed in the analysis.
Finally, we compute the limit in this case using the by now familiar procedure.
```
A_sig_array = np.linspace(mn.x[0],0.05,100)
chi2_sig_array = np.zeros(len(A_sig_array))
bf = mn.x[1:]
for i in range(len(A_sig_array)):
chi2_instance.fix_signal_strength(A_sig_array[i])
mn_profile = minimize(chi2_instance.chi2_fixed_signal,bf,method='Nelder-Mead')
bf = mn_profile.x
chi2_sig_array[i] = mn_profile.fun
amin = np.argmin((chi2_sig_array-chi2_sig_array[0] - 2.71)**2)
limit_signal_strength = A_sig_array[amin]
print "The 95% upper limit on the signal flux is", limit_signal_strength, "cts/cm^2/s/sr"
print "This corresponds to a limit on sin^2(2theta) of", return_sin_theta_lim(EUXL,limit_signal_strength,D_signal)
```
## Now with a polynomial background
Here we repeat the earlier analysis but with a polynomial background model, as used in the stacked analysis in DRS20 Supplementary Material Sec. 2.9.
```
whs_reduced = np.where((Energies >= 3.0) & (Energies <= 4.0))[0]
Energies_reduced = Energies[whs_reduced]
data_reduced = data[whs_reduced]
data_yerrs_reduced = data_yerrs[whs_reduced]
data_bkg_reduced = data_bkg[whs_reduced]
data_yerrs_bkg_reduced = data_yerrs_bkg[whs_reduced]
mod_UXL_reduced = mod_UXL[whs_reduced]
arg_3p1 = np.argmin((Es_line-3.12)**2) #3.12 #should really be 3.128
mod_3p1 = mods[arg_3p1]
arg_3p9 = np.argmin((Es_line-3.90)**2)
mod_3p9 = mods[arg_3p9]
arg_3p7 = np.argmin((Es_line-3.68)**2)
mod_3p7 = mods[arg_3p7]
arg_3p3 = np.argmin((Es_line-3.32)**2)
mod_3p3 = mods[arg_3p3]
def mod_poly_four_lines(ens,x):
A, B, C,S1, S2, S3, S4 = x
return A+B*ens + C*ens**2 + S1*mod_3p3[whs_reduced] + S2*mod_3p7[whs_reduced]+ S3*mod_3p1[whs_reduced] + S4*mod_3p9[whs_reduced]
chi2_instance = chi2(Energies_reduced,data_reduced,data_yerrs_reduced,mod_poly_four_lines,mod_UXL_reduced)
x0 = np.array([ 0.2015824 , -0.05098609 , 0.0052141 , 0.02854594 , 0.01742288,
0.08976637 , 0.029351 ])
bounds = np.array([[-1,1],[-0.5,0.5],[-0.1,0.1],[0,0.2],[0,0.2],[0,0.2],[0,0.2]])
mn_null = dual_annealing(chi2_instance.chi2_null,x0=x0,bounds=bounds,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=3000)
boundss = np.array([[-0.5,0.5],[-1,1],[-0.5,0.5],[-0.1,0.1],[0,0.2],[0,0.2],[0,0.2],[0,0.2]])
x0s=np.array([1.e-2,mn_null.x[0],mn_null.x[1],mn_null.x[2],mn_null.x[3],mn_null.x[4],mn_null.x[5],mn_null.x[6]])
mn = dual_annealing(chi2_instance.chi2,x0=x0s,bounds=boundss,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=3000)
print "Best fit background parameters:", mn_null.x
print "Best fit signal+background parameters:", mn.x
print "The Delta chi^2 between signal and null model is:", mn_null.fun - mn.fun
print "The chi^2/DOF of the null-model fit is:", mn_null.fun/(len(whs_reduced)-7.)
print "NB: the best-fit signal strength in this case is:", mn.x[0], "cts/cm$^2$/s/sr"
fig = plt.figure(figsize=(10,8))
plt.errorbar(avg_data(Energies,6),avg_data(data,6),yerr=np.sqrt(6*avg_data(data_yerrs**2,6))/6.,xerr=6*(Energies[1]-Energies[0])/2.,
color="black",marker="o", fmt='none',capsize=4)
plt.plot(Energies_reduced,mod_poly_four_lines(Energies_reduced,mn_null.x),
'k-',label =r"Null P.L. model")
plt.plot(Energies_reduced,mod_poly_four_lines(Energies_reduced,mn.x[1:])+mn.x[0]*mod_UXL_reduced,
'r-',label =r"Best fit signal model")
plt.xlim(3,4)
plt.ylim(0.08,0.1)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"Flux [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.show()
A_sig_array = np.linspace(mn.x[0],0.05,100)
chi2_sig_array = np.zeros(len(A_sig_array))
bf = mn.x[1:]
for i in range(len(A_sig_array)):
chi2_instance.fix_signal_strength(A_sig_array[i])
mn_profile = minimize(chi2_instance.chi2_fixed_signal,bf,method='Nelder-Mead',
options={'fatol':1e-10,'xatol':1e-10,'adaptive':True})
bf = mn_profile.x
chi2_sig_array[i] = mn_profile.fun
amin = np.argmin((chi2_sig_array-chi2_sig_array[0] - 2.71)**2)
limit_signal_strength = A_sig_array[amin]
print "The 95% upper limit on the signal flux is", limit_signal_strength, "cts/cm^2/s/sr"
print "This corresponds to a limit on sin^2(2theta) of", return_sin_theta_lim(EUXL,limit_signal_strength,D_signal)
```
This change to the background continuum model does not change any conclusions. The 3.5 keV line is in tension with these limits.
## Subtract the background data
Now, we subtract off the data taken far away from the Galactic Center. We use a folded power law for the background continuum, under the assumption that the residual flux in the signal region should be astrophysical.
```
# A folded powerlaw function
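# "Folded" means the power law is passed through the detector response matrix det_res,
# so the continuum is subject to the same instrumental response as the forward-modeled signal line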
def folded_PL(A,n):
mod_F = np.matmul(det_res,A*Energies**n)
return mod_F
def mod_folded_power_four_lines(ens,x):
A, n,S1, S2, S3, S4 = x
return folded_PL(A,n)[whs_reduced] + S1*mod_3p3[whs_reduced] + S2*mod_3p7[whs_reduced]+ S3*mod_3p1[whs_reduced] + S4*mod_3p9[whs_reduced]
chi2_instance = chi2(Energies_reduced,data_reduced- data_bkg[whs_reduced],np.sqrt(data_yerrs_reduced**2+data_yerrs_bkg_reduced**2),mod_folded_power_four_lines,mod_UXL_reduced)
x0 = np.array([1.80533176e-02, -5.18514882e-01, 9.80776897e-03, 1.45353856e-04, 6.39560515e-02, 1.84053386e-02])
bounds = np.array([[0.0,0.1],[-2,0],[0,0.1],[0,0.2],[0,0.2],[0,0.2]])
mn_null = dual_annealing(chi2_instance.chi2_null,x0=x0,bounds=bounds,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=1000)
boundss = np.array([[-0.5,0.5],[0.0,0.1],[-2,0],[0,0.1],[0,0.2],[0,0.2],[0,0.2]])
x0s=np.array([1.e-2,mn_null.x[0],mn_null.x[1],mn_null.x[2],mn_null.x[3],mn_null.x[4],mn_null.x[5]])
mn = dual_annealing(chi2_instance.chi2,x0=x0s,bounds=boundss,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=1000)
print "Best fit background parameters:", mn_null.x
print "Best fit signal+background parameters:", mn.x
print "The Delta chi^2 between signal and null model is:", mn_null.fun - mn.fun
print "The chi^2/DOF of the null-model fit is:", mn_null.fun/(len(whs_reduced)-6.)
print "NB: the best-fit signal strength in this case is:", mn.x[0], "cts/cm$^2$/s/sr"
fig = plt.figure(figsize=(10,6))
plt.errorbar(avg_data(Energies,6),avg_data(data-data_bkg,6),yerr=np.sqrt(6*avg_data(data_yerrs**2+data_yerrs_bkg**2,6))/6.,xerr=6*(Energies[1]-Energies[0])/2.,
color="black",marker="o", fmt='none',capsize=4) #label="data"
plt.plot(Energies_reduced,mod_folded_power_four_lines(Energies_reduced,mn_null.x),
'k-',label =r"Null model")
plt.plot(Energies_reduced,mod_folded_power_four_lines(Energies_reduced,mn.x[1:])+mn.x[0]*mod_UXL_reduced,
'r-',label =r"Best fit signal model")
plt.xlim(3,4)
plt.ylim(0.006,0.015)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"SR Flux [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.show()
A_sig_array = np.linspace(mn.x[0],0.05,100)
chi2_sig_array = np.zeros(len(A_sig_array))
bf = mn.x[1:]
for i in range(len(A_sig_array)):
chi2_instance.fix_signal_strength(A_sig_array[i])
mn_profile = minimize(chi2_instance.chi2_fixed_signal,bf,method='Nelder-Mead')
bf = mn_profile.x
chi2_sig_array[i] = mn_profile.fun
amin = np.argmin((chi2_sig_array-chi2_sig_array[0] - 2.71)**2)
limit_signal_strength = A_sig_array[amin]
print "The 95% upper limit on the signal flux is", limit_signal_strength, "cts/cm^2/s/sr"
print "This corresponds to a limit on sin^2(2theta) of", return_sin_theta_lim(EUXL,limit_signal_strength,D_signal-D_bkg)
```
In this version of the analysis, too, we see no evidence for a 3.5 keV line, and we obtain limits comparable to those from the stacked analyses in the previous sections.
# Include the Quiescent Particle Background (QPB)
Now we will perform a joint likelihood analysis including the QPB data. The QPB data is complicated because it is correlated from observation to observation, so summing the data leads to correlated uncertainties. To account for this, we estimate the uncertainties on the QPB data in a data-driven way by fixing the normalization of the $\chi^2$ function such that the best-fit power law gives the expected $\chi^2/{\rm DOF}$. We note that this is just an approximation, which is not necessary within the context of the full joint likelihood framework.
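Concretely, if $\tilde{\chi}^2(x)$ denotes the unnormalized sum of squared residuals of the power law to the QPB data and $x_{\rm bf}$ its minimum, the code below rescales it as
$$\chi^2_{\rm QPB}(x) = \tilde{\chi}^2(x)\,\frac{N_{\rm bins}-2}{\tilde{\chi}^2(x_{\rm bf})}\,,$$
so that the best-fit power law attains $\chi^2_{\rm QPB}/{\rm DOF} = 1$ by construction.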
```
# We are going to fix a powerlaw to the QPB data and then renormalize the chi^2 function
def PL(A,n,ens):
return A*ens**n
def chi2_QPB_UN(x):
A,n = x
mod = PL(A,n,Energies_reduced)
return np.sum((mod-QPB[whs_reduced])**2)
mn_QPB = minimize(chi2_QPB_UN,[0.084,-0.20],method="Nelder-Mead")
bf_QPB=mn_QPB.x
chi2_not_reduced = chi2_QPB_UN(bf_QPB)
# The function below has the expected normalization
chi2_QPB = lambda x: chi2_QPB_UN(x)/chi2_not_reduced*((len(QPB[whs_reduced])-2.))
fig = plt.figure(figsize=(10,8))
plt.scatter(Energies_reduced,QPB[whs_reduced],marker="o",color="black")
plt.plot(Energies_reduced,PL(bf_QPB[0],bf_QPB[1],Energies_reduced),'r-',label="best-fit P.L.")
plt.xlim(3,4)
plt.ylim(0.04,0.065)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"QPB [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.show()
def mod_2power_four_lines(ens,x):
AQPB, nQPB,A, n,S1, S2, S3, S4 = x
return PL(AQPB,nQPB,ens)+ folded_PL(A,n)[whs_reduced] + S1*mod_3p3[whs_reduced] + S2*mod_3p7[whs_reduced]+ S3*mod_3p1[whs_reduced] + S4*mod_3p9[whs_reduced]
chi2_instance = chi2(Energies_reduced,data_reduced,data_yerrs_reduced,mod_2power_four_lines,mod_UXL_reduced)
x0 = np.array([0.07377512 ,-0.28001362 , 0.15844243, -1.07912658 , 0.02877547,
0.01134023 , 0.08755627 , 0.03134949])
bounds = np.array([[0.75*bf_QPB[0],1.25*bf_QPB[0]],[-1,0],[0.0001,2.0],[-3,0],[0,0.1],[0,0.1],[0,0.1],[0,0.1]])
# Below is the joint likelihood for the null model
def joint_chi2(x):
return chi2_QPB(x[:2])+chi2_instance.chi2_null(x)
mn_null = dual_annealing(joint_chi2,x0=x0,bounds=bounds,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=1000)
# Below is the joint likelihood for the signal model
def joint_chi2_sig(x):
return chi2_QPB(x[1:3])+chi2_instance.chi2(x)
boundss = np.array([[-0.5,0.5],[0.5*bf_QPB[0],2*bf_QPB[0]],[-1,0],[0.0001,2.0],[-3,0],[0,0.1],[0,0.1],[0,0.1],[0,0.1]])
x0s=np.array([1.e-2,mn_null.x[0],mn_null.x[1],mn_null.x[2],mn_null.x[3],mn_null.x[4],mn_null.x[5],mn_null.x[6],mn_null.x[7]])
mn = dual_annealing(joint_chi2_sig,x0=x0s,bounds=boundss,local_search_options={"method": "Nelder-Mead"},seed=1234,maxiter=1000)
print "The Delta chi^2 between signal and null model is:", mn_null.fun - mn.fun
print "NB: the best-fit signal strength in this case is:", mn.x[0], "cts/cm$^2$/s/sr"
fig = plt.figure(figsize=(10,8))
plt.errorbar(avg_data(Energies,6),avg_data(data,6),yerr=np.sqrt(6*avg_data(data_yerrs**2,6))/6.,xerr=6*(Energies[1]-Energies[0])/2.,
color="black",marker="o", fmt='none',capsize=4) #label="data"
plt.plot(Energies_reduced,mod_2power_four_lines(Energies_reduced,mn.x[1:])+mn.x[0]*mod_UXL_reduced,
'r-',label =r"Best fit signal model")
x0 = np.array([bf_QPB[0],bf_QPB[1], 0.064218, -0.4306988 , 0.02542355 , 0.01451921 , 0.09027154, 0.03331636])
plt.plot(Energies_reduced,mod_2power_four_lines(Energies_reduced,mn_null.x),
'k-',label =r"Null P.L. model")
plt.xlim(3,4)
plt.ylim(0.08,0.1)
plt.xticks(fontsize=22)
plt.yticks(fontsize=22)
plt.xlabel(r"$E$ [keV]",fontsize=22)
plt.ylabel(r"Flux [cts/s/keV]",fontsize=22)
plt.legend(fontsize=22)
plt.show()
A_sig_array = np.linspace(mn.x[0],0.05,100)
chi2_sig_array = np.zeros(len(A_sig_array))
bf = mn.x[1:]
for i in range(len(A_sig_array)):
chi2_instance.fix_signal_strength(A_sig_array[i])
mn_profile = minimize(chi2_instance.chi2_fixed_signal,bf,method='Nelder-Mead')
bf = mn_profile.x
chi2_sig_array[i] = mn_profile.fun
amin = np.argmin((chi2_sig_array-chi2_sig_array[0] - 2.71)**2)
limit_signal_strength = A_sig_array[amin]
print "The 95% upper limit on the signal flux is", limit_signal_strength, "cts/cm^2/s/sr"
print "This corresponds to a limit on sin^2(2theta) of", return_sin_theta_lim(EUXL,limit_signal_strength,D_signal)
```
Finally, including the QPB in our analysis does not significantly change the results.
# Summary
To summarize, we see no evidence of a 3.5 keV line in any of our analysis variations here. We obtain the following 95% upper limits on $\sin^2(2\theta)$ for $E_{\rm UXL} = 3.48$ keV:
* Quadratic background fit within $E_{\rm UXL} \pm 0.25$ keV: $2.35 \times 10^{-11}$
* Power law background fit within $E_{\rm UXL} \pm 0.25$ keV: $1.82 \times 10^{-11}$
* Power law background fit from 3 to 4 keV: $1.34 \times 10^{-11}$
* Quadratic background fit from 3 to 4 keV: $2.45 \times 10^{-11}$
* Power law background fit on background-subtracted data from 3 to 4 keV: $1.87 \times 10^{-11}$
* Power law background fit with joint (X-ray + QPB) likelihood from 3 to 4 keV: $1.68 \times 10^{-11}$
Although these limits are much weaker than our fiducial limit presented in DRS20, they still strongly constrain the 3.5 keV line.
# Multi-ConvNet Sentiment Classifier
In this notebook, we concatenate the outputs of *multiple, parallel convolutional layers* to classify IMDB movie reviews by their sentiment.
#### Load dependencies
```
import tensorflow
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model # new!
from tensorflow.keras.layers import Input, concatenate # new!
from tensorflow.keras.layers import Dense, Dropout, Embedding, SpatialDropout1D, Conv1D, GlobalMaxPooling1D
from tensorflow.keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
```
#### Set hyperparameters
```
# output directory name:
output_dir = 'model_output/multiconv'
# training:
epochs = 4
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 5000
max_review_length = 400
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# convolutional layer architecture:
n_conv_1 = n_conv_2 = n_conv_3 = 256
k_conv_1 = 3
k_conv_2 = 2
k_conv_3 = 4
# dense layer architecture:
n_dense = 256
dropout = 0.2
```
#### Load data
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words)
```
#### Preprocess data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
```
#### Design neural network architecture
```
input_layer = Input(shape=(max_review_length,),
dtype='int16', name='input')
# embedding:
embedding_layer = Embedding(n_unique_words, n_dim,
name='embedding')(input_layer)
drop_embed_layer = SpatialDropout1D(drop_embed,
name='drop_embed')(embedding_layer)
# three parallel convolutional streams:
conv_1 = Conv1D(n_conv_1, k_conv_1,
activation='relu', name='conv_1')(drop_embed_layer)
maxp_1 = GlobalMaxPooling1D(name='maxp_1')(conv_1)
conv_2 = Conv1D(n_conv_2, k_conv_2,
activation='relu', name='conv_2')(drop_embed_layer)
maxp_2 = GlobalMaxPooling1D(name='maxp_2')(conv_2)
conv_3 = Conv1D(n_conv_3, k_conv_3,
activation='relu', name='conv_3')(drop_embed_layer)
maxp_3 = GlobalMaxPooling1D(name='maxp_3')(conv_3)
# concatenate the activations from the three streams:
concat = concatenate([maxp_1, maxp_2, maxp_3])
# dense hidden layers:
dense_layer = Dense(n_dense,
activation='relu', name='dense')(concat)
drop_dense_layer = Dropout(dropout, name='drop_dense')(dense_layer)
dense_2 = Dense(int(n_dense/4),
activation='relu', name='dense_2')(drop_dense_layer)
dropout_2 = Dropout(dropout, name='drop_dense_2')(dense_2)
# sigmoid output layer:
predictions = Dense(1, activation='sigmoid', name='output')(dropout_2)
# create model:
model = Model(input_layer, predictions)
model.summary()
```
#### Configure model
```
model.compile(loss='binary_crossentropy', optimizer='nadam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
```
#### Train!
```
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+"/weights.02.hdf5")
y_hat = model.predict(x_valid)
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
"{:0.2f}".format(roc_auc_score(y_valid, y_hat)*100.0)
```
# Factory Planning
Level: Intermediate
## Objective and Prerequisites
This model, together with Factory Planning II, is an example of a production planning problem. In production planning problems, one must choose which products to make, how much of each to make, and which resources to use, in order to maximize profit or minimize cost while satisfying a set of constraints. Such problems are common across a wide range of manufacturing settings.
### What You Will Learn
In this particular example, we will model and solve a production mix problem: in each period, we can manufacture a range of products. Each product requires a different amount of time on each machine and yields a different profit. The goal is to create an optimal multi-period production plan that maximizes profit. Due to maintenance, some machines are unavailable in certain periods. Due to market limitations, the monthly sales of each product are capped, and storage capacity is also limited.
In Factory Planning II, we’ll add more complexity to this example; the month in which each machine is down for maintenance will be chosen as a part of the optimized plan.
More information on this type of model can be found in example #3 of the fifth edition of Model Building in Mathematical Programming by H. P. Williams on pages 255-256 and 300-302.
This modeling example is at the intermediate level, where we assume that you know Python and are familiar with the Gurobi Python API. In addition, you should have some knowledge about building mathematical optimization models.
**Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*.
---
## Problem Description
A factory makes seven products (Prod 1 to Prod 7) using a range of machines including:
- Four grinders
- Two vertical drills
- Three horizontal drills
- One borer
- One planer
Each product has a defined profit contribution per unit sold (defined as the sales price per unit minus the cost of raw materials). In addition, the manufacturing of each product requires a certain amount of time on each machine (in hours). The contribution and manufacturing time value are shown below. A dash indicates that the manufacturing process for the given product does not require that machine.
| <i></i> | PROD1 | PROD2 | PROD3 | PROD4 | PROD5 | PROD6 | PROD7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Profit | 10 | 6 | 8 | 4 | 11 | 9 | 3 |
| Grinding | 0.5 | 0.7 | - | - | 0.3 | 0.2 | 0.5 |
| Vertical Drilling | 0.1 | 0.2 | - | 0.3 | - | 0.6 | - |
| Horizontal Drilling | 0.2 | - | 0.8 | - | - | - | 0.6 |
| Boring | 0.05 | 0.03 | - | 0.07 | 0.1 | - | 0.08 |
| Planing | - | - | 0.01 | - | 0.05 | - | 0.05 |
In each of the six months covered by this model, one or more of the machines is scheduled to be down for maintenance and as a result will not be available to use for production that month. The maintenance schedule is as follows:
| Month | Machine |
| --- | --- |
| January | One grinder |
| February | Two horizontal drills |
| March | One borer |
| April | One vertical drill |
| May | One grinder and one vertical drill |
| June | One horizontal drill |
There are limitations on how many of each product can be sold in a given month. These limits are shown below:
| Month | PROD1 | PROD2 | PROD3 | PROD4 | PROD5 | PROD6 | PROD7 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| January | 500 | 1000 | 300 | 300 | 800 | 200 | 100 |
| February | 600 | 500 | 200 | 0 | 400 | 300 | 150 |
| March | 300 | 600 | 0 | 0 | 500 | 400 | 100 |
| April | 200 | 300 | 400 | 500 | 200 | 0 | 100 |
| May | 0 | 100 | 500 | 100 | 1000 | 300 | 0 |
| June | 500 | 500 | 100 | 300 | 1100 | 500 | 60 |
Up to 100 units of each product may be stored in inventory at a cost of $\$0.50$ per unit per month. At the start of January, there is no product inventory. However, by the end of June, there should be 50 units of each product in inventory.
The factory produces products six days a week using two eight-hour shifts per day. It may be assumed that each month consists of 24 working days. Also, for the purposes of this model, there are no production sequencing issues that need to be taken into account.
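As a quick check of the machine capacity used later in the code: 2 shifts/day $\times$ 8 hours/shift $\times$ 24 working days/month $= 384$ hours available per machine per month, which is the `hours_per_month` parameter.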
What should the production plan look like? Also, is it possible to recommend any price increases and determine the value of acquiring any new machines?
This problem is based on a larger model built for the Cornish engineering company of Holman Brothers.
---
## Model Formulation
### Sets and Indices
$t \in \text{Months}=\{\text{Jan},\text{Feb},\text{Mar},\text{Apr},\text{May},\text{Jun}\}$: Set of months.
$p \in \text{Products}=\{1,2,\dots,7\}$: Set of products.
$m \in \text{Machines}=\{\text{Grinder},\text{VertDrill},\text{horiDrill},\text{Borer},\text{Planer}\}$: Set of machines.
### Parameters
$\text{hours_per_month} \in \mathbb{R}^+$: Time (in hours/month) available at any machine on a monthly basis. It results from multiplying the number of working days (24 days) by the number of shifts per day (2) by the duration of a shift (8 hours).
$\text{max_inventory} \in \mathbb{N}$: Maximum number of units of a single product type that can be stored in inventory at any given month.
$\text{holding_cost} \in \mathbb{R}^+$: Monthly cost (in USD/unit/month) of keeping in inventory a unit of any product type.
$\text{store_target} \in \mathbb{N}$: Number of units of each product type to keep in inventory at the end of the planning horizon.
$\text{profit}_p \in \mathbb{R}^+$: Profit (in USD/unit) of product $p$.
$\text{installed}_m \in \mathbb{N}$: Number of machines of type $m$ installed in the factory.
$\text{down}_{t,m} \in \mathbb{N}$: Number of machines of type $m$ scheduled for maintenance at month $t$.
$\text{time_req}_{m,p} \in \mathbb{R}^+$: Time (in hours/unit) needed on machine $m$ to manufacture one unit of product $p$.
$\text{max_sales}_{t,p} \in \mathbb{N}$: Maximum number of units of product $p$ that can be sold at month $t$.
### Decision Variables
$\text{make}_{t,p} \in \mathbb{R}^+$: Number of units of product $p$ to manufacture at month $t$.
$\text{store}_{t,p} \in [0, \text{max_inventory}] \subset \mathbb{R}^+$: Number of units of product $p$ to store at month $t$.
$\text{sell}_{t,p} \in [0, \text{max_sales}_{t,p}] \subset \mathbb{R}^+$: Number of units of product $p$ to sell at month $t$.
**Assumption:** We can produce fractional units.
### Objective Function
- **Profit:** Maximize the total profit (in USD) of the planning horizon.
\begin{equation}
\text{Maximize} \quad Z = \sum_{t \in \text{Months}}\sum_{p \in \text{Products}}
(\text{profit}_p*\text{sell}_{t,p} - \text{holding_cost}*\text{store}_{t,p})
\tag{0}
\end{equation}
### Constraints
- **Initial Balance:** For each product $p$, the number of units produced should be equal to the number of units sold plus the number stored (in units of product).
\begin{equation}
\text{make}_{\text{Jan},p} = \text{sell}_{\text{Jan},p} + \text{store}_{\text{Jan},p} \quad \forall p \in \text{Products}
\tag{1}
\end{equation}
- **Balance:** For each product $p$, the number of units produced in month $t$ and the ones previously stored should be equal to the number of units sold and stored in that month (in units of product).
\begin{equation}
\text{store}_{t-1,p} + \text{make}_{t,p} = \text{sell}_{t,p} + \text{store}_{t,p} \quad \forall (t,p) \in \text{Months} \setminus \{\text{Jan}\} \times \text{Products}
\tag{2}
\end{equation}
- **Inventory Target:** The number of units of product $p$ kept in inventory at the end of the planning horizon should hit the target (in units of product).
\begin{equation}
\text{store}_{\text{Jun},p} = \text{store_target} \quad \forall p \in \text{Products}
\tag{3}
\end{equation}
- **Machine Capacity:** Total time used to manufacture any product at machine type $m$ cannot exceed its monthly capacity (in hours).
\begin{equation}
\sum_{p \in \text{Products}}\text{time_req}_{m,p}*\text{make}_{t,p} \leq \text{hours_per_month}*(\text{installed}_m - \text{down}_{t,m}) \quad \forall (t,m) \in \text{Months} \times \text{Machines}
\tag{4}
\end{equation}
---
## Python Implementation
We import the Gurobi Python Module and other Python libraries.
```
import gurobipy as gp
import numpy as np
import pandas as pd
from gurobipy import GRB
# tested with Python 3.7.0 & Gurobi 9.0
```
### Input Data
We define all the input data of the model.
```
# Parameters
products = ["Prod1", "Prod2", "Prod3", "Prod4", "Prod5", "Prod6", "Prod7"]
machines = ["grinder", "vertDrill", "horiDrill", "borer", "planer"]
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
profit = {"Prod1":10, "Prod2":6, "Prod3":8, "Prod4":4, "Prod5":11, "Prod6":9, "Prod7":3}
time_req = {
"grinder": { "Prod1": 0.5, "Prod2": 0.7, "Prod5": 0.3,
"Prod6": 0.2, "Prod7": 0.5 },
"vertDrill": { "Prod1": 0.1, "Prod2": 0.2, "Prod4": 0.3,
"Prod6": 0.6 },
"horiDrill": { "Prod1": 0.2, "Prod3": 0.8, "Prod7": 0.6 },
"borer": { "Prod1": 0.05,"Prod2": 0.03,"Prod4": 0.07,
"Prod5": 0.1, "Prod7": 0.08 },
"planer": { "Prod3": 0.01,"Prod5": 0.05,"Prod7": 0.05 }
}
# number of machines down
down = {("Jan","grinder"): 1, ("Feb", "horiDrill"): 2, ("Mar", "borer"): 1,
("Apr", "vertDrill"): 1, ("May", "grinder"): 1, ("May", "vertDrill"): 1,
("Jun", "planer"): 1, ("Jun", "horiDrill"): 1}
# number of each machine available
installed = {"grinder":4, "vertDrill":2, "horiDrill":3, "borer":1, "planer":1}
# market limitation of sells
max_sales = {
("Jan", "Prod1") : 500,
("Jan", "Prod2") : 1000,
("Jan", "Prod3") : 300,
("Jan", "Prod4") : 300,
("Jan", "Prod5") : 800,
("Jan", "Prod6") : 200,
("Jan", "Prod7") : 100,
("Feb", "Prod1") : 600,
("Feb", "Prod2") : 500,
("Feb", "Prod3") : 200,
("Feb", "Prod4") : 0,
("Feb", "Prod5") : 400,
("Feb", "Prod6") : 300,
("Feb", "Prod7") : 150,
("Mar", "Prod1") : 300,
("Mar", "Prod2") : 600,
("Mar", "Prod3") : 0,
("Mar", "Prod4") : 0,
("Mar", "Prod5") : 500,
("Mar", "Prod6") : 400,
("Mar", "Prod7") : 100,
("Apr", "Prod1") : 200,
("Apr", "Prod2") : 300,
("Apr", "Prod3") : 400,
("Apr", "Prod4") : 500,
("Apr", "Prod5") : 200,
("Apr", "Prod6") : 0,
("Apr", "Prod7") : 100,
("May", "Prod1") : 0,
("May", "Prod2") : 100,
("May", "Prod3") : 500,
("May", "Prod4") : 100,
("May", "Prod5") : 1000,
("May", "Prod6") : 300,
("May", "Prod7") : 0,
("Jun", "Prod1") : 500,
("Jun", "Prod2") : 500,
("Jun", "Prod3") : 100,
("Jun", "Prod4") : 300,
("Jun", "Prod5") : 1100,
("Jun", "Prod6") : 500,
("Jun", "Prod7") : 60,
}
holding_cost = 0.5
max_inventory = 100
store_target = 50
hours_per_month = 2*8*24
```
## Model Deployment
We create a model and the variables. For each product (seven kinds of products) and each time period (month), we will create variables for the amount of which products get manufactured, held, and sold. In each month, there is an upper limit on the amount of each product that can be sold. This is due to market limitations.
```
factory = gp.Model('Factory Planning I')
make = factory.addVars(months, products, name="Make") # quantity manufactured
store = factory.addVars(months, products, ub=max_inventory, name="Store") # quantity stored
sell = factory.addVars(months, products, ub=max_sales, name="Sell") # quantity sold
```
Next, we insert the constraints. The balance constraints ensure that the amount of product that is in storage in the prior month plus the amount that gets manufactured equals the amount that is sold and held for each product in the current month. This ensures that all products in the model are manufactured in some month. The initial storage is empty.
```
#1. Initial Balance
Balance0 = factory.addConstrs((make[months[0], product] == sell[months[0], product]
+ store[months[0], product] for product in products), name="Initial_Balance")
#2. Balance
Balance = factory.addConstrs((store[months[months.index(month) -1], product] +
make[month, product] == sell[month, product] + store[month, product]
for product in products for month in months
if month != months[0]), name="Balance")
```
The Inventory Target constraints force that at the end of the last month the storage contains the specified amount of each product.
```
#3. Inventory Target
TargetInv = factory.addConstrs((store[months[-1], product] == store_target for product in products), name="End_Balance")
```
The capacity constraints ensure that, for each month, the time all products require on a certain kind of machine is less than or equal to the available hours for that type of machine in that month multiplied by the number of available machines in that period. Each product requires some machine hours on different machines. Each machine is down in one or more months due to maintenance, so the number and type of available machines varies per month. There can be multiple machines per machine type.
```
#4. Machine Capacity
MachineCap = factory.addConstrs((gp.quicksum(time_req[machine][product] * make[month, product]
for product in time_req[machine])
<= hours_per_month * (installed[machine] - down.get((month, machine), 0))
for machine in machines for month in months),
name = "Capacity")
```
The objective is to maximize the profit of the company, which consists of
the profit for each product minus the cost for storing the unsold products. This can be stated as:
```
#0. Objective Function
obj = gp.quicksum(profit[product] * sell[month, product] - holding_cost * store[month, product]
for month in months for product in products)
factory.setObjective(obj, GRB.MAXIMIZE)
```
Next, we start the optimization and Gurobi finds the optimal solution.
```
factory.optimize()
```
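Before reading off the solution, it can be useful to confirm that the solver actually proved optimality. A minimal check (our addition) might look like the following:
```
# Confirm that Gurobi reached a provably optimal solution before extracting the plan
if factory.Status == GRB.OPTIMAL:
    print(f"Optimal total profit: {factory.ObjVal:,.2f}")
else:
    print(f"Optimization stopped with status code {factory.Status}")
```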
---
## Analysis
The result of the optimization model shows that the maximum profit we can achieve is $\$93,715.18$.
Let's see the solution that achieves that optimal result.
### Production Plan
This plan determines the amount of each product to make at each period of the planning horizon. For example, in February we make 700 units of product Prod1.
```
rows = months.copy()
columns = products.copy()
make_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)
for month, product in make.keys():
if (abs(make[month, product].x) > 1e-6):
make_plan.loc[month, product] = np.round(make[month, product].x, 1)
make_plan
```
### Sales Plan
This plan defines the amount of each product to sell at each period of the planning horizon. For example, in February we sell 600 units of product Prod1.
```
rows = months.copy()
columns = products.copy()
sell_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)
for month, product in sell.keys():
if (abs(sell[month, product].x) > 1e-6):
sell_plan.loc[month, product] = np.round(sell[month, product].x, 1)
sell_plan
```
### Inventory Plan
This plan reflects the amount of product in inventory at the end of each period of the planning horizon. For example, at the end of February we have 100 units of Prod1 in inventory.
```
rows = months.copy()
columns = products.copy()
store_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)
for month, product in store.keys():
if (abs(store[month, product].x) > 1e-6):
store_plan.loc[month, product] = np.round(store[month, product].x, 1)
store_plan
```
**Note:** If you want to write your solution to a file, rather than print it to the terminal, you can use the model.write() command. An example implementation is:
`factory.write("factory-planning-1-output.sol")`
---
## References
H. Paul Williams, Model Building in Mathematical Programming, fifth edition.
Copyright © 2020 Gurobi Optimization, LLC
### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)
# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
school_data_complete
```
## District Summary
* Calculate the total number of schools
* Calculate the total number of students
* Calculate the total budget
* Calculate the average math score
* Calculate the average reading score
* Calculate the percentage of students with a passing math score (70 or greater)
* Calculate the percentage of students with a passing reading score (70 or greater)
* Calculate the percentage of students who passed math **and** reading (% Overall Passing)
* Create a dataframe to hold the above results
* Optional: give the displayed data cleaner formatting
```
total_school = len(school_data_complete["school_name"].unique())
total_student = sum(school_data_complete["size"].unique())
student_pass_math = school_data_complete.loc[school_data_complete["math_score"] >= 70]
student_pass_reading = school_data_complete.loc[school_data_complete["reading_score"] >= 70]
student_pass_math_and_reading = school_data_complete.loc[(school_data_complete["math_score"] >= 70) & (school_data_complete["reading_score"] >= 70)]
student_pass_math_percentage = student_pass_math["Student ID"].count()/total_student
student_pass_reading_percentage = student_pass_reading["Student ID"].count()/total_student
student_pass_math_and_reading_percentage = student_pass_math_and_reading["Student ID"].count()/total_student
district_summary_df = pd.DataFrame({
"Total Schools": [total_school],
"Total Students": [f"{total_student:,}"],
"Total Budget": [f"${sum(school_data_complete['budget'].unique()):,}"],
"Average Math Score":[f"${school_data_complete['math_score'].mean():.2f}"],
"Average Reading Score":[f"${school_data_complete['reading_score'].mean():.2f}"],
"% Passing Math":[f"{student_pass_math_percentage:.2%}"],
"% Passing Reading":[f"{student_pass_reading_percentage:.2%}"],
"% Overall Passing":[f"{student_pass_math_and_reading_percentage:.2%}"]
})
district_summary_df
```
## School Summary
* Create an overview table that summarizes key metrics about each school, including:
* School Name
* School Type
* Total Students
* Total School Budget
* Per Student Budget
* Average Math Score
* Average Reading Score
* % Passing Math
* % Passing Reading
* % Overall Passing (The percentage of students that passed math **and** reading.)
* Create a dataframe to hold the above results
```
group_by_school_data = school_data_complete.groupby(["school_name","type"])
group_by_school_pass_math = student_pass_math.groupby(["school_name","type"])
group_by_school_pass_reading = student_pass_reading.groupby(["school_name","type"])
group_by_school_pass_math_and_reading = student_pass_math_and_reading.groupby(["school_name","type"])
school_summary_df = pd.DataFrame({
"Total Students": group_by_school_data["school_name"].count(),
"Total School Budget": group_by_school_data['budget'].mean(),
"Per Student Budget": group_by_school_data["budget"].mean()/group_by_school_data["school_name"].count(),
"Average Math Score": group_by_school_data["math_score"].mean(),
"Average Reading Score": group_by_school_data["reading_score"].mean(),
"% Passing Math": group_by_school_pass_math["school_name"].count()/group_by_school_data["school_name"].count(),
"% Passing Reading": group_by_school_pass_reading["school_name"].count()/group_by_school_data["school_name"].count(),
"% Overall Passing": group_by_school_pass_math_and_reading["school_name"].count()/group_by_school_data["school_name"].count()
})
school_summary = school_summary_df.copy()
school_summary_df["Total School Budget"] = school_summary_df["Total School Budget"].map("${:,}".format)
school_summary_df["Per Student Budget"] = school_summary_df["Per Student Budget"].map("${:,.0f}".format)
school_summary_df["Average Math Score"] = school_summary_df["Average Math Score"].map("{:,.2f}".format)
school_summary_df["Average Reading Score"] = school_summary_df["Average Reading Score"].map("{:,.2f}".format)
school_summary_df["% Passing Math"] = school_summary_df["% Passing Math"].map("{:.2%}".format)
school_summary_df["% Passing Reading"] = school_summary_df["% Passing Reading"].map("{:,.2%}".format)
school_summary_df["% Overall Passing"] = school_summary_df["% Overall Passing"].map("{:,.2%}".format)
school_summary_df
```
## Top Performing Schools (By % Overall Passing)
* Sort and display the top five performing schools by % overall passing.
```
group_by_school_data_sorted = school_summary_df.sort_values("% Overall Passing", ascending=False)
group_by_school_data_sorted.head(5)
```
## Bottom Performing Schools (By % Overall Passing)
* Sort and display the five worst-performing schools by % overall passing.
```
group_by_school_data_sorted = school_summary_df.sort_values("% Overall Passing")
group_by_school_data_sorted.head(5)
```
## Math Scores by Grade
* Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school.
* Create a pandas series for each grade. Hint: use a conditional statement.
* Group each series by school
* Combine the series into a dataframe
* Optional: give the displayed data cleaner formatting
```
# math_score_by_grade = school_data_complete.groupby(["school_name","grade"])
# math_score_by_grade["math_score"].mean()
math_score_by_grade_9 = school_data_complete.loc[school_data_complete["grade"] == "9th"]
math_score_by_grade_9_school = math_score_by_grade_9.groupby("school_name")
math_score_by_grade_10 = school_data_complete.loc[school_data_complete["grade"] == "10th"]
math_score_by_grade_10_school = math_score_by_grade_10.groupby("school_name")
math_score_by_grade_11 = school_data_complete.loc[school_data_complete["grade"] == "11th"]
math_score_by_grade_11_school = math_score_by_grade_11.groupby("school_name")
math_score_by_grade_12 = school_data_complete.loc[school_data_complete["grade"] == "12th"]
math_score_by_grade_12_school = math_score_by_grade_12.groupby("school_name")
math_score_by_grade_df = pd.DataFrame({
"9th": math_score_by_grade_9_school["math_score"].mean().map("{:.2f}".format),
"10th":math_score_by_grade_10_school["math_score"].mean().map("{:.2f}".format),
"11th": math_score_by_grade_11_school["math_score"].mean().map("{:.2f}".format),
"12th":math_score_by_grade_12_school["math_score"].mean().map("{:.2f}".format)
})
math_score_by_grade_df
```
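The same per-grade table can also be built in one step with `pivot_table`; this is an optional sketch rather than part of the original solution (it assumes the `school_data_complete` dataframe used above):
```
# One-step alternative: average math score per school and grade via pivot_table
math_by_grade_pivot = school_data_complete.pivot_table(
    index="school_name",   # one row per school
    columns="grade",       # one column per grade level
    values="math_score",
    aggfunc="mean",
)
# Put the grade columns in a natural order and round for display
math_by_grade_pivot = math_by_grade_pivot[["9th", "10th", "11th", "12th"]].round(2)
math_by_grade_pivot
```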
## Reading Score by Grade
* Perform the same operations as above for reading scores
```
# reading_score_by_grade = school_data_complete.groupby(["school_name","grade"])
# reading_score_by_grade["reading_score"].mean()
reading_score_by_grade_9 = school_data_complete.loc[school_data_complete["grade"] == "9th"]
reading_score_by_grade_9_school = reading_score_by_grade_9.groupby("school_name")
reading_score_by_grade_10 = school_data_complete.loc[school_data_complete["grade"] == "10th"]
reading_score_by_grade_10_school = reading_score_by_grade_10.groupby("school_name")
reading_score_by_grade_11 = school_data_complete.loc[school_data_complete["grade"] == "11th"]
reading_score_by_grade_11_school = reading_score_by_grade_11.groupby("school_name")
reading_score_by_grade_12 = school_data_complete.loc[school_data_complete["grade"] == "12th"]
reading_score_by_grade_12_school = reading_score_by_grade_12.groupby("school_name")
reading_score_by_grade_df = pd.DataFrame({
"9th": reading_score_by_grade_9_school["reading_score"].mean().map("{:.2f}".format),
"10th":reading_score_by_grade_10_school["reading_score"].mean().map("{:.2f}".format),
"11th":reading_score_by_grade_11_school["reading_score"].mean().map("{:.2f}".format),
"12th":reading_score_by_grade_12_school["reading_score"].mean().map("{:.2f}".format)
})
reading_score_by_grade_df
```
## Scores by School Spending
* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:
* Average Math Score
* Average Reading Score
* % Passing Math
* % Passing Reading
  * % Overall Passing (the percentage of students that passed math **and** reading)
```
score_by_spending = school_summary.copy()
bins=[0,585,630,645,680]
labels=["<$585","$585-630","$630-645","$645-680"]
score_by_spending["Spending Ranges (Per Student)"] = pd.cut(score_by_spending["Per Student Budget"],bins,labels=labels, include_lowest=True)
score_by_spending
score_by_spending_group = score_by_spending.groupby(["Spending Ranges (Per Student)"])
score_by_spending_df = pd.DataFrame({
    "Average Math Score": score_by_spending_group["Average Math Score"].mean().map("{:.2f}".format),
    "Average Reading Score": score_by_spending_group["Average Reading Score"].mean().map("{:.2f}".format),
    "% Passing Math": score_by_spending_group["% Passing Math"].mean().map("{:.2%}".format),
    "% Passing Reading": score_by_spending_group["% Passing Reading"].mean().map("{:.2%}".format),
    "% Overall Passing": score_by_spending_group["% Overall Passing"].mean().map("{:.2%}".format)
})
score_by_spending_df
```
## Scores by School Size
* Perform the same operations as above, based on school size.
```
score_by_school_size = school_summary.copy()
bins = [0, 1000, 2000, 5000]
labels = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
# Bin by enrollment; name the column "School Size" (not "School Type") so it isn't confused with Charter/District
score_by_school_size["School Size"] = pd.cut(score_by_school_size["Total Students"], bins, labels=labels)
score_by_school_size_group = score_by_school_size.groupby(["School Size"])
score_by_school_size_df = pd.DataFrame({
    "Average Math Score": score_by_school_size_group["Average Math Score"].mean().map("{:.2f}".format),
    "Average Reading Score": score_by_school_size_group["Average Reading Score"].mean().map("{:.2f}".format),
    "% Passing Math": score_by_school_size_group["% Passing Math"].mean().map("{:.2%}".format),
    "% Passing Reading": score_by_school_size_group["% Passing Reading"].mean().map("{:.2%}".format),
    "% Overall Passing": score_by_school_size_group["% Overall Passing"].mean().map("{:.2%}".format)
})
score_by_school_size_df
```
## Scores by School Type
* Perform the same operations as above, based on school type
```
score_by_school_type = school_summary.copy()
score_by_school_type_group = score_by_school_type.groupby(["type"])
score_by_school_type_df = pd.DataFrame({
    "Average Math Score": score_by_school_type_group["Average Math Score"].mean().map("{:.2f}".format),
    "Average Reading Score": score_by_school_type_group["Average Reading Score"].mean().map("{:.2f}".format),
    "% Passing Math": score_by_school_type_group["% Passing Math"].mean().map("{:.2%}".format),
    "% Passing Reading": score_by_school_type_group["% Passing Reading"].mean().map("{:.2%}".format),
    "% Overall Passing": score_by_school_type_group["% Overall Passing"].mean().map("{:.2%}".format)
})
score_by_school_type_df
```
| github_jupyter |
# Lesson 2: Computer Vision Fundamentals
## Submission, Markus Schwickert, 2018-02-22
### Photos
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import os
%matplotlib inline
#reading in an image
k1=0 # select here which of the images in the directory you want to process (0-5)
test_images=os.listdir("test_images/")
print ('test_images/'+test_images[k1])
image = mpimg.imread('test_images/'+test_images[k1])
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# Grab the x and y size and make a copy of the image
ysize = image.shape[0]
xsize = image.shape[1]
# Note: always make a copy rather than simply using "="
color_select = np.copy(image)
# Define our color selection criteria
# These thresholds keep bright (mostly white/yellow) pixels and black out everything else;
# tweak them if the lane markings are not picked up cleanly
red_threshold = 180
green_threshold = 180
blue_threshold = 100
rgb_threshold = [red_threshold, green_threshold, blue_threshold]
# Identify pixels below the threshold
thresholds = (image[:,:,0] < rgb_threshold[0]) \
| (image[:,:,1] < rgb_threshold[1]) \
| (image[:,:,2] < rgb_threshold[2])
color_select[thresholds] = [0,0,0]
# Display the image
plt.imshow(color_select)
plt.show()
gray = cv2.cvtColor(color_select, cv2.COLOR_RGB2GRAY) #grayscale conversion
plt.imshow(gray, cmap='gray')
# Define a polygon region of interest
# Keep in mind the origin (x=0, y=0) is in the upper left in image processing
left_bottom = [0, ysize]
right_bottom = [xsize, ysize]
fp1 = [450, 320]
fp2 = [490, 320]
mask = np.zeros_like(gray)
ignore_mask_color = 255
# This time we are defining a four sided polygon to mask
vertices = np.array([[left_bottom, fp1, fp2, right_bottom]], dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_mask_color)
grayROI = cv2.bitwise_and(gray, mask)
# Display the image
plt.imshow(grayROI, cmap='gray')
# Canny edge detection
# Define a kernel size for Gaussian smoothing / blurring
# Note: this step is optional as cv2.Canny() applies a 5x5 Gaussian internally
kernel_size = 5
blur_gray = cv2.GaussianBlur(grayROI,(kernel_size, kernel_size), 0)
# Define parameters for Canny and run it
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
# Display the image
plt.imshow(edges, cmap='Greys_r')
# Hough Transformation
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1
theta = np.pi/180
threshold = 1
min_line_length = 16
max_line_gap = 20
line_image = np.copy(image)*0 #creating a blank to draw lines on
# Run Hough on edge detected image
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
# Iterate over the output "lines" and draw lines on the blank
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),10)
# Create a "color" binary image to combine with line image
#color_edges = np.dstack((edges, edges, edges))
# Draw the lines on the edge image
print (test_images[k1])
combo = cv2.addWeighted(image, 0.8, line_image, 1, 0)
plt.imshow(combo)
mpimg.imsave('MS_images/'+test_images[k1], combo)
```
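The cells above process one image at a time via the `k1` index. As an optional extension (not part of the original submission), the same steps can be wrapped in a helper and applied to every file in `test_images/`; the `find_lane_lines` name and the reuse of the parameter values above are assumptions for this sketch:
```
def find_lane_lines(image):
    """Apply the color-threshold / ROI / Canny / Hough steps above to one RGB image."""
    ysize, xsize = image.shape[0], image.shape[1]
    color_select = np.copy(image)
    thresholds = (image[:, :, 0] < 180) | (image[:, :, 1] < 180) | (image[:, :, 2] < 100)
    color_select[thresholds] = [0, 0, 0]
    gray = cv2.cvtColor(color_select, cv2.COLOR_RGB2GRAY)
    vertices = np.array([[[0, ysize], [450, 320], [490, 320], [xsize, ysize]]], dtype=np.int32)
    mask = np.zeros_like(gray)
    cv2.fillPoly(mask, vertices, 255)
    gray_roi = cv2.bitwise_and(gray, mask)
    blur_gray = cv2.GaussianBlur(gray_roi, (5, 5), 0)
    edges = cv2.Canny(blur_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi/180, 1, np.array([]), 16, 20)
    line_image = np.copy(image) * 0
    if lines is not None:
        for line in lines:
            for x1, y1, x2, y2 in line:
                cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return cv2.addWeighted(image, 0.8, line_image, 1, 0)

# Process every test image with the same settings and save the results
for name in os.listdir("test_images/"):
    combo = find_lane_lines(mpimg.imread("test_images/" + name))
    mpimg.imsave("MS_images/" + name, combo)
```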
| github_jupyter |
```
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import seaborn as sns
import numpy as np
import matplotlib.dates as mdates
import datetime
#sns.set(color_codes=True)
import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
import statistics as st
sns.set_style('whitegrid', {'axes.linewidth' : 0.5})
from statsmodels.distributions.empirical_distribution import ECDF
import scipy
import gc
from helpers import *
today_str = dt.datetime.today().strftime('%y%m%d')
def curve(startx, starty, endx, endy):
    # Smooth S-shaped ribbon between two points (used for the Sankey bands below)
    x1 = np.linspace(0,(endx-startx),100)
    x2 = x1+startx
    x = x1/(endx-startx)
    # y = (endy-starty)*(6*x**5-15*x**4+10*x**3)+starty   # smoothstep variant (unused; overridden below)
    y = (endy-starty)*(-20*x**7+70*x**6-84*x**5+35*x**4)+starty  # smootherstep interpolation
    return x2, y
curative = pd.read_csv('~/Box/covid_CDPH/2021.07.06 Master Set Data Only_Deidentified.csv', encoding= 'unicode_escape')
curative['patient_symptom_date'] = pd.to_datetime(curative['patient_symptom_date'], errors='coerce')
curative['collection_time'] = pd.to_datetime(curative['collection_time'], errors='coerce')
curative['days'] = (pd.to_datetime(curative['collection_time'], utc=True) - pd.to_datetime(curative['patient_symptom_date'], utc=True)).dt.days
idph = pd.read_csv('~/Box/covid_IDPH/sentinel_surveillance/210706_SS_epic.csv', encoding= 'unicode_escape')
idph['test_date'] = pd.to_datetime(idph['test_date'])
idph['test_time'] = pd.to_datetime(idph['test_time'])
idph['date_symptoms_start'] = pd.to_datetime(idph['date_symptoms_start'])
idph['days'] = (idph['test_date'] - idph['date_symptoms_start']).dt.days
ss_cond = (idph['days'] <= 4) & (idph['days'] >= 0)
pos_cond = (idph['result'] == 'DETECTED') | (idph['result'] == 'POSITIVE') | (idph['result'] == 'Detected')
chi_cond = (idph['test_site_city'] == 'CHICAGO')
zips = pd.read_csv('./data/Chicago_ZIP_codes.txt', header=None)[0].values
idph['chicago'] = idph['pat_zip_code'].apply(lambda x: zip_in_zips(x, zips))
curative['chicago'] = curative['patient_city'] == 'Chicago'
curative_time_frame_cond = (curative['collection_time'] >= pd.to_datetime('9-27-20')) & (curative['collection_time'] <= pd.to_datetime('6-13-21'))
curative_ss = (curative['days'] >= 0) & (curative['days'] <= 4)
curative_symptom = curative['patient_is_symptomatic']
idph_time_frame_cond = (idph['test_date'] >= pd.to_datetime('9-27-20')) & (idph['test_date'] <= pd.to_datetime('6-13-21'))
idph_ss = (idph['days'] >= 0) & (idph['days'] <= 4)
idph_symptom = idph['symptomatic_per_cdc'] == 'Yes'
idph_chicago_site = (idph['test_site'] == 'IDPH COMMUNITY TESTING AUBURN GRESHAM') | (idph['test_site'] == 'IDPH AUBURN GRESHAM COMMUNITY TESTING') | (idph['test_site'] == 'IDPH HARWOOD HEIGHTS COMMUNITY TESTING')
idph_count = np.sum(idph_time_frame_cond & idph_ss & idph['chicago'] & idph_chicago_site)
curative_count = np.sum(curative_time_frame_cond & curative_ss & curative['chicago'])
pos_cond_curative = curative['test_result'] == 'POSITIVE'
curative['positive'] = pos_cond_curative
chi_idph = (idph['test_site_city'] == 'Chicago') | (idph['test_site_city'] == 'CHICAGO')
pos_cond_idph = (idph['result'] == 'DETECTED') | (idph['result'] == 'POSITIVE') | (idph['result'] == 'Detected')
idph['positive'] = pos_cond_idph
print(idph_count)
print(curative_count)
print('Tests collected at sentinel sites in study period: ')
sentinel_sites_total = len(curative[curative_time_frame_cond]) + len(idph[idph_time_frame_cond & idph_chicago_site])
print(sentinel_sites_total)
print('with Chicago residence: ')
chicago_residents = len(curative[curative_time_frame_cond & curative['chicago']]) + \
len(idph[idph_time_frame_cond & idph_chicago_site & idph['chicago']])
print(chicago_residents)
print('with valid symptom date: ')
with_symptom_date = len(curative[curative_time_frame_cond & curative['chicago']].dropna(subset=['days'])) + \
len(idph[idph_time_frame_cond & idph_chicago_site & idph['chicago']].dropna(subset=['days']))
print(with_symptom_date)
print('symptom date 4 or fewer days before test: ')
tot_ss = len(curative[curative_time_frame_cond & curative['chicago'] & curative_ss].dropna(subset=['days'])) + \
len(idph[idph_time_frame_cond & idph_chicago_site & idph['chicago'] & idph_ss].dropna(subset=['days']))
print(tot_ss)
print('and positive: ')
tot_sc = len(curative[curative_time_frame_cond & curative['chicago'] & curative_ss & pos_cond_curative].dropna(subset=['days'])) + \
len(idph[idph_time_frame_cond & idph_chicago_site & idph['chicago'] & idph_ss & pos_cond_idph].dropna(subset=['days']))
print(tot_sc)
h = 10
w = 8
fig = plt.figure(figsize=(w, h))
figh = h-0
figw = w-0
ax = fig.add_axes([0,0,figw/w,figh/h])
stop_location = np.arange(0,5,1)
line_width = 0.05
#ax.set_xlim([-0.05,1.05])
h_padding = 0.15
v_padding = 0.2
line_width = 0.2
line_height = 4.5
midpoint = (v_padding + line_height)/2
tot_height = sentinel_sites_total
ax.fill_between([stop_location[0], stop_location[0]+line_width],
[midpoint+line_height/2]*2,
[midpoint-line_height/2]*2,
color='gold', zorder=15)
#ax.text(x=stop_location[0]+line_width/1.75,
# y=midpoint, s="specimens collected at sentinel sites in study period n = " + "{:,}".format(sentinel_sites_total),
# ha='center', va='center',
# rotation=90, zorder=16, color='k', fontsize=14)
splits = [chicago_residents, with_symptom_date, tot_ss, tot_sc]
d = tot_height
splits_array = np.array(splits)/d
d_t = 1
d_ts = d
d_top = midpoint+line_height/2
d_bot = midpoint-line_height/2
d_x = stop_location[0]
# midpoint = figh/2
include_color_array = ['gold']*(len(splits)-1) + ['blue']
exclude_color_array = ['crimson']*(len(splits)-1) + ['blue']
for s, l_l, s1, include_color, exclude_color in zip(splits_array,
stop_location[1:],
splits,
include_color_array,
exclude_color_array):
t_line = line_height*d_t + v_padding
ax.fill_between([l_l, l_l+line_width],
[midpoint+t_line/2]*2,
[midpoint+t_line/2-line_height*s]*2,
color=include_color, zorder=13)
ax.fill_between([l_l, l_l+line_width],
[midpoint-t_line/2]*2,
[midpoint-t_line/2+line_height*(d_t-s)]*2,
color=exclude_color)
a1 = curve(d_x+line_width, d_bot,
l_l, midpoint-t_line/2)
a2 = curve(d_x+line_width, d_bot+line_height*(d_t-s),
l_l, midpoint-t_line/2+line_height*(d_t-s))
ax.fill_between(a1[0], a1[1], a2[1], color=exclude_color, alpha=0.25, linewidth=0)
ax.text((d_x+l_l+line_width)/2,
midpoint+t_line/2-line_height*(s)/2,
"n = "+"{:,}".format(s1),
ha='center', va='center',
rotation=0, fontsize=14)
ax.text((d_x+l_l+line_width)/2,
midpoint-t_line/2+line_height*(d_t-s)/2,
"n = "+"{:,}".format(d_ts - s1),
ha='center', va='center',
rotation=0, fontsize=14)
a1 = curve(d_x+line_width, d_top,
l_l, midpoint+t_line/2)
a2 = curve(d_x+line_width, d_bot+line_height*(d_t-s),
l_l, midpoint+t_line/2-line_height*s)
ax.fill_between(a1[0], a1[1], a2[1], color=include_color, alpha=0.25, linewidth=0)
d_t = s
d_ts = s1
d_top = midpoint+t_line/2
d_bot = midpoint+t_line/2-line_height*s
d_x = l_l
midpoint = midpoint+t_line/2-line_height*s/2
ax.text(x=stop_location[1]+line_width+0.05, y=0.35, s='not Chicago resident',
ha='left', va='center', fontsize=14)
ax.text(x=stop_location[2]+line_width+0.05, y=2.5, s='no valid date of symptom onset',
ha='left', va='center', fontsize=14)
ax.text(x=stop_location[3]+line_width+0.05, y=4.5, s='symptom onset > 4 days\nbefore specimen collection',
ha='left', va='top', fontsize=14)
ax.text(x=stop_location[4]+line_width+0.05, y=5.02, s=" positive test → sentinel case",
ha='left', va='top', fontsize=14, weight='bold')
ax.text(x=stop_location[4]+line_width+0.05, y=4.75, s=" negative or inconclusive test",
ha='left', va='top', fontsize=14)
ax.text(x=stop_location[0]-0.1,
y=2.5, s="specimens collected at\ntesting sites in study period\nn = " + "{:,}".format(sentinel_sites_total),
ha='right', va='center',
rotation=0, zorder=16, color='k', fontsize=14)
ax.fill_between(x=[2.95, 4 + line_width+0.05], y1=4.55, y2=5.075,
color='black', alpha=0.1, edgecolor='black', linewidth=0, linestyle='dashed', zorder=0)
ax.text(x=3.6, y=5.11, s="sentinel samples",
ha='center', va='bottom', fontsize=14, weight='bold')
ax.grid(False)
ax.axis('off')
fig.savefig('sankey_diagram_' + today_str + '.png', dpi=200, bbox_inches='tight')
fig.savefig('sankey_diagram_' + today_str + '.pdf', bbox_inches='tight')
```
| github_jupyter |
```
import os, importlib, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%load_ext autoreload
# Common paths
BASE_PATH = os.path.join(os.getcwd(), "..", "..")
MODULE_PATH = os.path.join(BASE_PATH, "modules")
DS_PATH = os.path.join(BASE_PATH, "datasets")
sys.path.append(MODULE_PATH)
import mp.MomentPropagation as mp
importlib.reload(mp)
import data.mnist as mnist_loader
importlib.reload(mnist_loader)
import tensorflow as tf
from tensorflow import keras
import tensorflow.keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense, Softmax
gpus = tf.config.experimental.list_physical_devices("GPU")
cpus = tf.config.experimental.list_physical_devices("CPU")
if gpus:
try:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices("GPU")
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
print(e)
elif cpus:
try:
logical_cpus = tf.config.experimental.list_logical_devices("CPU")
print(len(cpus), "Physical CPU,", len(logical_cpus), "Logical CPU")
except RuntimeError as e:
print(e)
tfk = tf.keras
input_shape = (28, 28, 1)
model = Sequential([
Conv2D(128, 4,activation="relu", input_shape=input_shape),
MaxPool2D(),
Dropout(.2),
Conv2D(64, 3, activation="relu"),
MaxPool2D(),
Dropout(.2),
Flatten(),
Dense(512, activation="relu"),
Dense(256, activation="relu"),
Dense(128, activation="relu"),
Dense(1, activation="sigmoid")
])
model.build(input_shape)
model.summary()
%autoreload 2
import active_learning as active
importlib.reload(active)
import mp.MomentPropagation as mp
importlib.reload(mp)
mp = mp.MP()
mp_model = mp.create_MP_Model(model=model, use_mp=True, verbose=True)
import data.mnist as mnist_loader
importlib.reload(mnist_loader)
# Load Data
mnist_path = os.path.join(DS_PATH, "mnist")
inputs, targets = mnist_loader.load(mnist_path)
# Select only first and second class
selector = (targets==0) | (targets==1)
new_inputs = inputs[selector].astype("float32")/255.0
new_targets = targets[selector]
# Create splits
x_train, x_test, y_train, y_test = train_test_split(new_inputs, new_targets)
x_test, x_val, y_test, y_val = train_test_split(x_test, y_test)
new_inputs.shape
new_inputs[:, None, ...].shape
sample_input = np.random.randn(10, 28, 28, 1)
sample_input.shape
# pred_mp,var_mp = mp_model(sample_input)
mp_model
%autoreload 2
import bayesian
from bayesian import McDropout, MomentPropagation
mp_m = MomentPropagation(mp_model)
prediction = mp_m.predict(sample_input)
prediction
variance = mp_m.variance(prediction)
variance
expectation = mp_m.expectation(prediction)
expectation
%autoreload 2
from acl import ActiveLearning
from active_learning import TrainConfig, Config, Metrics, aggregates_per_key
import bayesian
from bayesian import McDropout, MomentPropagation
train_config = TrainConfig(
batch_size=2,
epochs=1
)
acq_config = Config(
name="std_mean",
pseudo=True
)
model_name = "mp"
acq_name = "max_entropy"
dp_model = McDropout(model)
mp_m = MomentPropagation(mp_model)
active_learning = ActiveLearning(
dp_model,
np.expand_dims(new_inputs, axis=-1), labels=new_targets,
train_config=train_config,
acq_name=acq_name
)
history = active_learning.start(step_size=40)
# Save history
METRICS_PATH = os.path.join(BASE_PATH, "metrics")
metrics = Metrics(METRICS_PATH, keys=["iteration", "train_time", "query_time", "loss"])
metrics.write()
# Compare mc dropout, moment propagation (max_entropy, bald)
history
%autoreload 2
metrics.write("test", history)
read_metrics = metrics.read("test")
read_metrics[:5]
pd.DataFrame(read_metrics)[["iteration", "loss"]]
```
| github_jupyter |
## Good review of numpy https://www.youtube.com/watch?v=GB9ByFAIAH4
## Numpy library - Remember to do pip install numpy
### Numpy provides support for math and logical operations on arrays
#### https://www.tutorialspoint.com/numpy/index.htm
### It supports many more data types than python
#### https://www.tutorialspoint.com/numpy/numpy_data_types.htm
### Only a single data type is allowed in any particular array
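A quick illustration of the single-data-type rule (a small sketch): when the input list mixes types, NumPy upcasts everything to one common dtype.
```
import numpy as np

ints = np.array([1, 2, 3])           # all integers
mixed = np.array([1, 2.5, 3])        # the ints are upcast to float64
strings = np.array([1, 'two', 3.0])  # everything becomes a string
print(ints.dtype, mixed.dtype, strings.dtype)
```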
```
import numpy as np

a = np.array([1,2,3,4])
print(id(a))
print(type(a))
b = np.array(a)
print(f'b = {id(b)}')
a = a + 1
a
```
# <img src='numpyArray.png' width ='400'>
```
# arange vs linspace - both generate a numpy array of numbers
import numpy as np
np.linspace(0,10,5) # specifies No. of values with 0 and 10 being first and last
np.arange(0, 10, 5) # specifies step size=5 starting at 0 up to but NOT including last
x = np.linspace(0,10,11) # generate 11 evenly spaced numbers from 0 to 10
x = x + 1 # operates on all elements of the array
type(x)
# generate points and use function to transform them
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,10,0.1)
y = np.sin(x)
plt.plot(x,y)
import numpy as np
import matplotlib.pyplot as plt
a = np.random.choice(np.linspace(0,10,10),100)
plt.hist(a,bins=np.arange(0,11,1))
np.linspace(0,10,11)
plt.hist(a,bins=np.arange(0,11,1),density=True)
# Use bins 1/2 wide - what does this plot mean?
plt.hist(a,bins=np.arange(0,11,0.5),density=True)
# Data as sampling from an unseen population
# Choose at random from 1 through 10
import numpy as np
import matplotlib.pyplot as plt
a = np.random.choice(np.arange(0,10),100)
a = np.random.random(100)*10.0
a
```
# Normal Distribution
$
\text{the normal distribution is given by} \\
$
$$
f(z)=\frac{1}{\sqrt{2 \pi}}e^{-\frac{(z)^2}{2}}
$$
$
\text{This can be rewritten in term of the mean and variance} \\
$
$$
f(x)=\frac{1}{\sigma \sqrt{2 \pi}}e^{-\frac{(x- \mu)^2}{2 \sigma^2}}
$$
The random variable $X$ described by the PDF is a normal variable that follows a normal distribution with mean $\mu$ and variance $\sigma^2$.
$
\text{Normal distribution notation is} \\
$
$$
X \sim N(\mu,\sigma^2) \\
$$
The total area under the PDF curve equals 1.
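As a quick numerical check of that statement, the standard normal PDF can be integrated over a wide range; a small sketch using `scipy.stats.norm` (which is also imported later in this notebook):
```
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Integrate the standard normal PDF; the area should come out very close to 1
area, err = integrate.quad(norm.pdf, -10, 10)
print(area)
```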
```
# Normal Data
a = np.random.normal(10,2,10)
plt.hist(a,bins=np.arange(5,16,1),density=True)
plt.scatter(np.arange(5,15,1),a)
plt.plot(a)
plt.hist(a,bins=np.arange(5,16,0.1), density=True)
plt.hist(a,bins=np.arange(5,16,1))
import numpy as np
import matplotlib.pyplot as plt
a = np.random.normal(0,2,200)
plt.hist(a, bins=np.arange(-5,5,1))
```
## Mean and Variance
$$
\mu = \frac{\sum(x)}{N}
$$
$$
\sigma^{2} =\sum{\frac{(x - \mu)^{2}}{N} }
$$
```
# IN CLASS - Generate a Population and calculate its mean and variance
import matplotlib.pyplot as plt
Npoints = 10
p = np.random.normal(0,10,Npoints*100)
def myMean(sample):
    N = len(sample)
    total = 0
    for x in sample:
        total = total + x
    return total/N   # divide the running sum (not the last element) by N
pmean = myMean(p)
print(f'mean= {pmean}')
def myVar(sample,mean):
tsample = sample - mean
var = sum(tsample * tsample)/len(sample)
return var
pvar = myVar(p, pmean)
print(f'Variance = {pvar}')
print(f'realVar = {np.var(p)}')   # compare with NumPy's population variance
import numpy as np
import scipy as scipy
import matplotlib.pyplot as plt
from scipy.stats import norm
plt.style.use('ggplot')
fig, ax = plt.subplots()
x= np.arange(34,40,0.01)
y = np.random.normal(x)
lines = ax.plot(x, norm.pdf(x,loc=37,scale=1))
ax.set_ylim(0,0.45) # range
ax.set_xlabel('x',fontsize=20) # set x label
ax.set_ylabel('pdf(x)',fontsize=20,rotation=90) # set y label
ax.xaxis.set_label_coords(0.55, -0.05) # x label coordinate
ax.yaxis.set_label_coords(-0.1, 0.5) # y label coordinate
px=np.arange(36,37,0.1)
plt.fill_between(px,norm.pdf(px,loc=37,scale=1),color='r',alpha=0.5)
plt.show()
a = np.random.normal(10,1,20)
a
```
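To sanity-check the hand-written `myMean` and `myVar` above, NumPy's built-ins should give essentially the same numbers; a short sketch assuming the population `p` from the previous cell:
```
# Compare the hand-rolled statistics with NumPy's implementations
print('np.mean =', np.mean(p))
print('np.var  =', np.var(p))   # population variance (divides by N)
print('np.std  =', np.std(p))   # square root of the variance
```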
## Calculate the mean and subtract the mean from each data value
```
from matplotlib import collections as matcoll
Npoints = 20
x = np.arange(0,Npoints)
y = np.random.normal(loc=10, scale=2, size=Npoints )
lines = []
for i in range(Npoints):
pair=[(x[i],0), (x[i], y[i])]
lines.append(pair)
linecoll = matcoll.LineCollection(lines)
fig, ax = plt.subplots()
ax.add_collection(linecoll)
plt.scatter(x,y, marker='o', color='blue')
plt.xticks(x)
plt.ylim(0,40)
plt.show()
ylim=(0,10)
```
### Numpy 2D Arrays
## Multi-Dimensional Arrays
<img src='multiArray.png' width = 500>
```
import numpy as np
# Numpy 2_D Arrays
a = [0,1,2]
b = [3,4,5]
c = [6,7,8]
z = [a,
b,
c]
a = np.arange(0,9)
z = a.reshape(3,3)
z
z[2,2]
z[0:3:2,0:3:2]
# Exercise - Produce an 8x8 checkerboard of 1s and 0s
import numpy as np
import seaborn as sns
from matplotlib.colors import ListedColormap as lc
Z = np.zeros((8,8),dtype=int)
Z[1::2,::2] = 1
Z[::2,1::2] = 1
print(Z)
sns.heatmap(Z, annot=True,linewidths=5,cbar=False)
import seaborn as sns
sns.heatmap(Z, annot=True,linewidths=5,cbar=False)
# IN CLASS - use the above formula to plot the normal distribution over x = -4 to 4
# takee mean = 0, and sigma = 1
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-4,4,100)
y = (np.exp(-(x*x)/2))/np.sqrt(2*np.pi)
plt.plot(x,y)
import scipy.integrate as integrate
result = integrate.quad(lambda x: (np.exp(-(x*x)/2))/np.sqrt(2*np.pi), -5, 5)  # area under the standard normal pdf (~1)
result
```
| github_jupyter |
<a href="https://colab.research.google.com/github/MattFinney/practical_data_science_in_python/blob/main/Session_2_Practical_Data_Science.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Practical Data Science in Python
## Unsupervised Learning: Classifying Spotify Tracks by Genre with $k$-Means Clustering
Authors: Matthew Finney, Paulina Toro Isaza
#### Run this First! (Function Definitions)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_palette('Set1')
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from IPython.display import Audio, Image, clear_output
rs = 123
np.random.seed(rs)
def pca_plot(df, classes=None):
# Scale data for PCA
scaled_df = StandardScaler().fit_transform(df)
# Fit the PCA and extract the first two components
pca_results = PCA().fit_transform(scaled_df)
pca1_scores = pca_results[:,0]
pca2_scores = pca_results[:,1]
# Sort the legend labels
if classes is None:
hue_order = None
n_classes = 0
elif str(classes[0]).isnumeric():
classes = ['Cluster {}'.format(x) for x in classes]
hue_order = sorted(np.unique(classes))
n_classes = np.max(np.unique(classes).shape)
else:
hue_order = sorted(np.unique(classes))
n_classes = np.max(np.unique(classes).shape)
# Plot the first two principal components
plt.figure(figsize=(8.5,8.5))
plt.grid()
sns.scatterplot(pca1_scores, pca2_scores, s=50, hue=classes,
hue_order=hue_order, palette='Set1')
plt.xlabel("Principal Component {}".format(1))
plt.ylabel("Principal Component {}".format(2))
plt.title('Principal Component Plot')
plt.show()
def tracklist_player(track_list, df, header="Track Player"):
action = ''
for track in track_list:
print('{}\nTrack Name: {}\nArtist Name(s): {}'.format(header, df.loc[track,'name'],df.loc[track,'artist']))
try:
display(Image(df.loc[track,'cover_url'], format='jpeg', height=150))
except:
print('No cover art available')
try:
display(Audio(df.loc[track,'preview_url']+'.mp3', autoplay=True))
except:
print('No audio preview available')
print('Press <Enter> for the next track or q then <Enter> to quit: ')
action = input()
clear_output()
if action=='q':
break
print('No more clusters. Goodbye!')
def play_cluster_tracks(track_df, cluster_column="best_cluster"):
for cluster in sorted(track_df[cluster_column].unique()):
# Get the tracks in the cluster, and shuffle them for variety
tracks_list = track_df[track_df[cluster_column] == cluster].index.values
np.random.shuffle(tracks_list)
# Instantiate a tracklist player
tracklist_player(tracks_list, df=track_df, header='{}'.format(cluster))
# Load Track DataFrame
path = 'https://raw.githubusercontent.com/MattFinney/practical_data_science_in_python/main/spotify_track_data.csv'
tracks_df = pd.read_csv(path)
# Columns from the track dataframe which are relevant for our analysis
audio_feature_cols = ['danceability', 'energy', 'key', 'loudness', 'mode',
'speechiness', 'acousticness', 'instrumentalness',
'liveness', 'valence', 'tempo', 'duration_ms',
'time_signature']
# Show the first five rows of our dataframe
tracks_df.head()
```
## Recap from Session 1
In our earlier session, we started working with a dataset of Spotify tracks. We explored the variables in the dataset, and determined that audio features - like danceability, accousticness, and tempo - vary across the songs in our dataset and might help us to thoughtfully group the tracks into different playlists. We then used Principal Component Analysis (PCA), a dimensionality reduction technique, to visualize the variation in songs.
We'll pick up where we left off, with the PCA plot from last time. If you're just joining us for Session 2, don't fret! Attending Session 1 is NOT a prerequisite to learn and have fun in Session 2 today!
```
# Plot the principal component analysis results
pca_plot(tracks_df[audio_feature_cols])
```
## Today: Classification using $k$-Means Clustering
Our Principal Component Analysis in the first session helped us to visualize the variation of track audio features in just two dimensions. Looking at the scatterplot of the first two principal components above, we can see that there are a few different groups of tracks. But how do we mathematically separate the tracks into these meaningful groups?
One way to separate the tracks into meaningful groups based on similar audio features is to use clustering. Clustering is a machine learning technique that is very powerful for identifying patterns in unlabeled data where the ground truth is not known.
### What is $k$-Means Clustering?
$k$-Means Clustering is one of the most popular clustering algorithms. The algorithm assigns each data point to a cluster using four main steps.
**Step 1: Initialize the Clusters**\
Based on the user's desired number of clusters $k$, the algorithm randomly chooses a centroid for each cluster. In this example, we choose a $k=3$, therefore the algorithm randomly picks 3 centroids.

**Step 2: Assign Each Data Point**\
The algorithm assigns each point to the closest centroid to get $k$ initial clusters.

**Step 3: Recompute the Cluster Centers**\
For every cluster, the algorithm recomputes the centroid by taking the average of all points in the cluster. The changes in centroids are shown below by arrows.

**Step 4: Reassign the Points**\
Since the centroids change, the algorithm then re-assigns the points to the closest centroid. The image below shows the new clusters after re-assignment.

The algorithm repeats the calculation of centroids and assignment of points until points stop changing clusters. When clustering large datasets, you stop the algorithm before reaching convergence, using other criteria instead.
*Note: Some content in this section was [adapted](https://creativecommons.org/licenses/by/4.0/) from Google's free [Clustering in Machine Learning](https://developers.google.com/machine-learning/clustering) course. The course is a great resource if you want to explore clustering in more detail!*
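To make the four steps concrete, here is a bare-bones NumPy sketch of the same loop. The `tiny_kmeans` helper and the toy points are made up for illustration (it skips details such as empty clusters and convergence checks); the workshop itself uses scikit-learn's `KMeans` in the next cell.
```
import numpy as np

def tiny_kmeans(X, k, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialize the clusters by picking k random points as centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each data point to its closest centroid
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its assigned points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: the next iteration re-assigns points to the new centroids
    return labels, centroids

# Toy example with two obvious groups
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, centroids = tiny_kmeans(X, k=2)
print(labels)
print(centroids)
```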
### Cluster the Spotify Tracks using their Audio Features
Now, we will use the `sklearn.cluster.KMeans` Python library to apply the $k$-means algorithm to our `tracks_df` data. Based on our visual inspection of the PCA plot, let's start with a guess k=3 to get 3 clusters.
```
initial_k = ____
# Scale the data, so that the units of features don't impact feature importance
scaled_df = StandardScaler().fit_transform(tracks_df[audio_feature_cols])
# Cluster the data using the k means algorithm
initial_cluster_results = ______(n_clusters=initial_k, n_init=25, random_state=rs).fit(scaled_df)
```
Now, let's print the cluster results. Notice that we're given a number (0 or 1) for each observation in our data set. This number is the id of the cluster assigned to each track.
```
# Print the cluster results
print(initial_cluster_results._______)
```
And let's save the cluster results in our `tracks_df` dataframe as a column named `initial_cluster` so we can access them later.
```
# Save the cluster labels in our dataframe
tracks_df[______________] = ['Cluster ' + str(i) for i in __________.______]
```
Let's plot the PCA plot and color each observation based on the assigned cluster to visualize our $k$-means results.
```
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['initial_cluster'])
```
Does it look like our $k$-means algorithm correctly separated the tracks into clusters? Does each color map to a distinct group of points?
### How do our clusters of songs differ?
One way we can evaluate our clusters is by looking how the distribution of each data feature varies by cluster. In our case, let's check to see if tracks in the different clusters tend to have different values of energy, loudness, or speechiness.
```
# Plot the distribution of audio features by cluster
g = sns.pairplot(tracks_df, hue="initial_cluster",
vars=['danceability', 'energy', 'loudness', 'speechiness', 'tempo'],
hue_order=sorted(tracks_df.initial_cluster.unique()), palette='Set1')
g.fig.suptitle('Distribution of Audio Features by Cluster', y=1.05)
plt.show()
```
### Experiment with different values of $k$
Use the slider to select different values of $k$, then run the cell below to see how the choice of the number of clusters affects our results.
```
trial_k = 10 #@param {type:"slider", min:1, max:10, step:1}
# Cluster the data using the k means algorithm
trial_cluster_results = KMeans(n_clusters=trial_k, n_init=25, random_state=rs).fit(scaled_df)
# Save the cluster labels in our dataframe
tracks_df['trial_cluster'] = ['Cluster ' + str(i) for i in trial_cluster_results.labels_]
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['trial_cluster'])
# Plot the distribution of audio features by cluster
g = sns.pairplot(tracks_df, hue="trial_cluster",
vars=['danceability', 'energy', 'loudness', 'speechiness', 'tempo'],
hue_order=sorted(tracks_df.trial_cluster.unique()), palette='Set1')
g.fig.suptitle('Distribution of Audio Features by Cluster', y=1.05)
plt.show()
```
### Which value of $k$ works best for our data?
You may have noticed that the $k$-means algorithm requires you to choose $k$ and decide the number of clusters before you run the algorithm. But how do we know which value of $k$ is the best fit for our data?
One approach is to track the total distance from points to their cluster centroid as we increase the number of clusters, $k$. Usually, the total distance decreases as we increase $k$, but we reach a value of $k$ where increasing $k$ only marginally decreases the total distance. An elbow plot helps us to find that value of $k$; it's the value of $k$ where the slope of the line in the elbow plot crosses the threshold of slope $=-1$. When you plot distance vs $k$, this point often looks like an "elbow".
Let's build an elbow plot to select the value of $k$ that will give us the highest quality clusters that best explain the variation in our data.
```
# Calculate the Total Distance for each value of k between 1 and 10
scores = []
k_list = np.arange(____,____)
for i in k_list:
fit_k = _____(n_clusters=i, n_init=5, random_state=rs).fit(scaled_df)
scores.append(fit_k.inertia_)
# Plot this in an elbow plot
plt.figure(figsize=(11,8.5))
sns.lineplot(______, ______)
plt.xlabel('Number of clusters $k$')
plt.ylabel('Total Point to Centroid Distance')
plt.grid()
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
```
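If you want to check your work after filling in the blanks above, one possible completion is shown below (it reuses `KMeans`, `scaled_df`, `rs`, `np`, `sns` and `plt` from earlier cells):
```
# One possible way to fill in the blanks above
scores = []
k_list = np.arange(1, 11)   # try k = 1 .. 10
for i in k_list:
    fit_k = KMeans(n_clusters=i, n_init=5, random_state=rs).fit(scaled_df)
    scores.append(fit_k.inertia_)   # total point-to-centroid distance

plt.figure(figsize=(11, 8.5))
sns.lineplot(x=k_list, y=scores)
plt.xlabel('Number of clusters $k$')
plt.ylabel('Total Point to Centroid Distance')
plt.grid()
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
```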
Do you see the "elbow"? At what value of $k$ does it occur?
### Evaluate the results of our clustering algorithm for the best $k$
Use the slider below to choose the "best" $k$ that you determined from looking at the elbow plot. Evaluate the results in the PCA plot. Does this look like a good value of $k$ to separate the data into meaningful clusters?
```
best_k = 1 #@param {type:"slider", min:1, max:10, step:1}
# Cluster the data using the k means algorithm
best_cluster_results = KMeans(n_clusters=best_k, n_init=25, random_state=rs).fit(scaled_df)
# Save the cluster labels in our dataframe
tracks_df['best_cluster'] = ['Cluster ' + str(i) for i in best_cluster_results.labels_]
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['best_cluster'])
```
## How did we do?
In addition to the mathematical ways to validate the selection of the best $k$ parameter for our model and the quality of our resulting clusters, there's another very important way to evaluate our results: listening to the tracks!
Let's listen to the tracks in each cluster! What do you notice about the attributes that tracks in each cluster have in common? What do you notice about how the clusters are different? What makes each cluster unique?
```
play_cluster_tracks(tracks_df, cluster_column='best_cluster')
```
## Wrap Up and Next Session
That's a wrap! Now that you've learned some practical skills in data science, please join us tomorrow afternoon for the third and final session in our series, where we'll talk about how to continue your studies and/or pursue a career in Data Science!
**Making Your Next Professional Play in Data Science**\
Friday, October 2 | 11:30am - 12:45pm PT\
[https://sched.co/dtqZ](https://sched.co/dtqZ)
| github_jupyter |
# What is Python?
### Python is a high-level, general-purpose programming language that is widely used in technical and non-technical fields alike. It is an interpreted, object-oriented language with dynamic semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic binding, make it very attractive for rapid application development, and also suitable as a scripting or "glue" language for connecting existing components. Python's simple, easy-to-learn syntax emphasizes readability, which lowers the cost of program maintenance. Python supports modules and packages, which encourages modular code reuse.
```
print('hello world')
```
## A Brief History of Python
### In 1989, to pass the Christmas holidays, Guido van Rossum began writing a compiler for the Python language. The name Python comes from Guido's favorite TV series, "Monty Python's Flying Circus". He hoped the new language would fulfill his vision of a full-featured, easy-to-learn, extensible language sitting between C and the shell.
### Python was invented by the Dutch programmer Guido van Rossum in 1989; the first public release appeared in 1991
### Granddaddy of Python web frameworks, Zope 1 was released in 1999
### Python 1.0 - January 1994: added lambda, map, filter and reduce.
### Python 2.0 - October 16, 2000: added garbage collection, forming the basis of today's Python language framework
### Python 2.4 - November 30, 2004: the same year Django, currently the most popular web framework, was born
### Python 2.5 - September 19, 2006
### Python 2.6 - October 1, 2008
### Python 2.7 - July 3, 2010
### Python 3.0 - December 3, 2008
### Python 3.1 - June 27, 2009
### Python 3.2 - February 20, 2011
### Python 3.3 - September 29, 2012
### Python 3.4 - March 16, 2014
### Python 3.5 - September 13, 2015
### Python 3.6 - December 23, 2016
### Python 3.7 - June 15, 2018
## Python's main application areas include:
### Cloud computing: the hottest language in cloud computing, with OpenStack as a typical application
### Web development: many excellent web frameworks; many large sites such as YouTube, Dropbox and Douban are built with Python; typical web frameworks include Django
### Scientific computing and artificial intelligence: typical libraries include NumPy, SciPy, Matplotlib and pandas
### System administration and operations: a basic language for operations and maintenance engineers
### Finance: quantitative trading and financial analysis; in financial engineering Python is not only the most widely used language, its importance also grows year by year.

## Python in use at some companies:
### Google: Google App Engine, code.google.com, the Google crawler, Google Ads and other projects make extensive use of Python.
### CIA: the website of the US Central Intelligence Agency was developed in Python
### NASA: NASA makes extensive use of Python for data analysis and computation
### YouTube: the world's largest video site, YouTube, was developed in Python.
### Dropbox: the largest online cloud-storage service in the US, implemented entirely in Python, handling a billion file uploads and downloads every day.
### Instagram: the largest photo-sharing social network in the US, with more than 30 million photos shared per day, developed entirely in Python
### Facebook: a large number of its base libraries are implemented in Python
### Red Hat: the Yum package manager in the world's most popular Linux distribution is developed in Python
### Douban: almost all of the company's business is developed in Python.
### Zhihu: China's largest Q&A community, developed in Python (Quora is its overseas counterpart)
### Beyond these, companies such as Sohu, Kingsoft, Tencent, Shanda, NetEase, Baidu, Alibaba, Taobao, Tudou, Sina and Guokr also use Python for all kinds of tasks.
## Python has the following characteristics:
### 1. Open source: Python and most of its available libraries and tools are open source, usually under fairly flexible and open licenses.
### 2. Multi-paradigm: Python supports different programming and implementation paradigms, such as object-oriented and imperative/functional or procedural programming.
### 3. Multi-purpose: Python can be used for rapid, interactive code development as well as for building large applications; it can handle low-level system operations as well as high-level analytics tasks.
### 4. Cross-platform: Python is available on all major operating systems, such as Windows, Linux and macOS; it is used to build both desktop and web applications.
### 5. Slow execution speed: relative to C and C++.
## Commonly Used Python Standard Libraries
### The math module gives access to the underlying C library functions for floating-point math:
```
import math
print(math.pi)
print(math.log(1024, 2))
```
### The random module provides tools for generating random numbers.
```
import random
print(random.choice(['apple', 'pear', 'banana']))
print(random.random())
```
### The datetime module supplies both simple and complex ways of manipulating dates and times.
```
from datetime import date
now = date.today()
birthday = date(1999, 8, 20)
age = now - birthday
print(age.days)
```
### NumPy is the fundamental package for high-performance scientific computing and data analysis.
### pandas brings together a large set of libraries and standard data models, providing the tools needed to work efficiently with large datasets.
### statsmodels is a Python module that provides classes and functions for estimating many different statistical models, as well as for running statistical tests and exploring statistical data.
### matplotlib is a library for plotting data, very useful for data scientists and analysts.
### More at https://docs.python.org/zh-cn/3/library/
# Infrastructure Tools
## Installing Anaconda
https://www.anaconda.com/products/individual
## Using Spyder
## Creating and Using GitHub
### GitHub is a hosting platform for open-source and private software projects; it is named GitHub because it supports only Git as the version-control format. GitHub officially launched on April 10, 2008. Besides Git repository hosting and a basic web management interface, it offers subscriptions, discussion groups, text rendering, an online file editor, collaboration graphs (reports), code-snippet sharing (Gist) and more. It has over 3.5 million registered users and hosts a very large number of repositories, including well-known open-source projects such as Ruby on Rails, jQuery and Python. GitHub paid out $166,000 in bug bounties last year. In June 2018, GitHub was acquired by Microsoft for US$7.5 billion. https://github.com/
# Basic Python Syntax
```
print ("Hello, Python!")
```
## Lines and Indentation
### Python's most distinctive feature is the use of indentation to define code blocks.
### The amount of indentation is flexible, but all statements within a block must be indented by the same amount; this is strictly enforced.
### The following example uses an indentation of four spaces:
```
if 1>2:
print ("True")
else:
print ("False")
if True:
print ("Answer")
print ("True")
else:
print ("Answer")
# Without consistent indentation, this line raises an error at runtime
print ("False")
```
## Multi-line Statements
### In Python, a new line normally marks the end of a statement.
### However, a backslash ( \ ) can be used to split one statement across several lines, as shown below:
```
total = 1 + \
2 + \
3
print(total)
```
### Statements that contain [], {} or () brackets do not need the line-continuation character. For example:
```
days = ['Monday', 'Tuesday', 'Wednesday',
'Thursday', 'Friday']
print(days)
```
## Python Quotes
### Python can use single quotes ( ' ), double quotes ( " ) and triple quotes ( ''' or """ ) to denote strings, as long as the opening and closing quotes are of the same type.
### Triple-quoted strings can span multiple lines; they are a shorthand for writing multi-line text, are commonly used for docstrings, and act as comments at specific places in a file.
```
word = 'word'
sentence = "这是一个句子。"
paragraph = """这是一个段落。
包含了多个语句"""
print(word)
print(sentence)
print(paragraph)
```
## Python Comments
### Single-line comments in Python start with #.
```
# first comment
print ("Hello, Python!") # second comment
```
# Python Variable Types
## Standard Data Types
### Data stored in memory can be of many different types.
### For example, a person's age can be stored as a number, while their name can be stored as characters.
### Python defines some standard types for storing various kinds of data.
### Python has five standard data types:
### Numbers
### String
### List
### Tuple
### Dictionary
## Python Numbers
### Python supports three different numeric types:
### int (signed integers)
### float (floating-point numbers)
### complex (complex numbers)
```
int1 = 1
float2 = 2.0
complex3 = 1+2j
print(type(int1),type(float2),type(complex3))
```
## Python Strings
### A string is a sequence of characters made up of digits, letters and underscores.
```
st = '123asd_'
st1 = st[0:3]
print(st)
print(st1)
```
## Python Lists
### The list is the most frequently used data type in Python.
### Lists can implement most collection-style data structures. They can hold characters, numbers, strings, and even other lists (nesting).
### Lists are written with [ ] and are Python's most general compound data type.
```
list1 = [ 'runoob', 786 , 2.23, 'john', 70.2 ]
tinylist = [123, 'john']
print (list1)            # print the complete list
print (list1[0])         # print the first element of the list
print (list1[1:3])       # print the second and third elements
print (list1[2:])        # print all elements from the third to the end
print (tinylist * 2)     # print the list twice
print (list1 + tinylist) # print the concatenated lists
list1[0] = 0
print(list1)
```
## Python Tuples
### A tuple is another data type, similar to a list.
### Tuples are written with ( ), with elements separated by commas. Unlike lists, tuples cannot be reassigned after creation; they behave like read-only lists.
```
tuple1 = ( 'runoob', 786 , 2.23, 'john', 70.2 )
tinytuple = (123, 'john')
print(tuple1[0])
print(tuple1+tinytuple)
```
## Python Dictionaries
### Apart from lists, the dictionary is the most flexible built-in data type in Python. A list is an ordered collection of objects, whereas a dictionary is an unordered collection.
### The difference between the two is that dictionary elements are accessed by key rather than by position.
### Dictionaries are written with "{ }" and consist of keys and the values they map to.
```
dict1 = {}
dict1['one'] = "This is one"
tinydict = {'name': 'john','code':6734, 'dept': 'sales'}
print(tinydict['name'])
print(dict1)
print(tinydict.keys())
print(tinydict.values())
```
# Python Operators
## Python Arithmetic Operators
```
a = 21
b = 10
c = 0
c = a + b
print ("The value of c is:", c)
c = a - b
print ("The value of c is:", c)
c = a * b
print ("The value of c is:", c)
c = a / b
print ("The value of c is:", c)
c = a % b  # remainder
print ("The value of c is:", c)
# change variables a, b and c
a = 2
b = 3
c = a**b
print ("The value of c is:", c)
a = 10
b = 5
c = a//b  # floor division
print ("The value of c is:", c)
```
## Python Comparison Operators
```
a = 21
b = 10
c = 0
if a == b :
    print ("a equals b")
else:
    print ("a is not equal to b")
if a != b :
    print ("a is not equal to b")
else:
    print ("a equals b")
if a < b :
    print ("a is less than b")
else:
    print ("a is greater than or equal to b")
if a > b :
    print ("a is greater than b")
else:
    print ("a is less than or equal to b")
# change the values of a and b
a = 5
b = 20
if a <= b :
    print ("a is less than or equal to b")
else:
    print ("a is greater than b")
if b >= a :
    print ("b is greater than or equal to a")
else:
    print ("b is less than a")
```
## Python Logical Operators
```
a = True
b = False
if a and b :
    print ("both a and b are true")
else:
    print ("at least one of a and b is not true")
if a or b :
    print ("a and b are both true, or one of them is true")
else:
    print ("neither a nor b is true")
if not( a and b ):
    print ("a and b are both false, or one of them is false")
else:
    print ("a and b are both true")
```
## Python Assignment Operators
```
a = 21
b = 10
c = 0
c = a + b
print ("The value of c is:", c)
c += a
print ("The value of c is:", c)
c *= a
print ("The value of c is:", c)
c /= a
print ("The value of c is:", c)
c = 2
c %= a
print ("The value of c is:", c)
c **= a
print ("The value of c is:", c)
c //= a
print ("The value of c is:", c)
```
# Python Conditional Statements
### Python conditional statements decide which block of code to execute based on the result (True or False) of one or more expressions.
```
flag = False
name = 'luren'
if name == 'python':         # check whether the variable equals 'python'
    flag = True              # set the flag to True when the condition holds
    print ('welcome boss')   # and print a welcome message
else:
    print (name)             # otherwise print the variable's value
num = 5
if num == 3:                 # check the value of num
    print ('boss')
elif num == 2:
    print ('user')
elif num == 1:
    print ('worker')
elif num < 0:                # printed when the value is less than zero
    print ('error')
else:
    print ('roadman')        # printed when none of the conditions hold
num = 9
if num >= 0 and num <= 10:   # check whether the value is between 0 and 10
    print ('hello')
num = 10
if num < 0 or num > 10:      # check whether the value is less than 0 or greater than 10
    print ('hello')
else:
    print ('undefine')
num = 8
# check whether the value is in 0~5 or 10~15
if (num >= 0 and num <= 5) or (num >= 10 and num <= 15):
    print ('hello')
else:
    print ('undefine')
```
# Python Loops
## Python provides for loops and while loops
## The Python while Loop
### In Python, the while statement executes a block of code repeatedly as long as a condition holds, which is useful for tasks that have to be repeated.
```
count = 0
while (count < 9):
print ('The count is:', count)
count = count + 1
print ("Good bye!")
count = 0
while count < 5:
print (count, " is less than 5")
count = count + 1
else:
print (count, " is not less than 5")
```
## The Python for Loop
```
fruits = ['banana', 'apple', 'mango']
for index in range(len(fruits)):
    print ('Current fruit :', fruits[index])
print ("Good bye!")
```
## Nested Loops
```
num=[];
i=2
for i in range(2,100):
j=2
for j in range(2,i):
if(i%j==0):
break
else:
num.append(i)
print(num)
print ("Good bye!")
```
## The Python break Statement
```
for letter in 'Python':
if letter == 'h':
break
    print ('Current letter :', letter)
```
## The Python continue Statement
### The continue statement skips the rest of the current iteration, whereas break exits the whole loop.
```
for letter in 'Python':
if letter == 'h':
continue
    print ('Current letter :', letter)
```
## The Python pass Statement
### pass is a null statement; it exists to keep the program structure complete.
### pass does nothing and is generally used as a placeholder.
```
# print each letter of "Python"
for letter in 'Python':
    if letter == 'h':
        pass
        print ('This is the pass block')
    print ('Current letter :', letter)
print ("Good bye!")
```
# A Python Application Example (Lianjia Second-hand Housing Data Analysis)
## 1. Using information on some of Shanghai's second-hand homes, observe and analyze from multiple angles which factors are related to housing prices, and the proportions of homes in different conditions
## 2. First preprocess the data, build a model to predict housing prices, and feed parameters into it to make a prediction
Note: the data comes from a CSDN download, 上海链家二手房.csv; because of a file-reading issue it was renamed sh.csv
## 1. Import the data and do some simple preprocessing
```
# import the packages we need
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from IPython.display import display
sns.set_style({'font.sans-serif':['simhei','Arial']})
%matplotlib inline
shanghai=pd.read_csv('sh.csv')  # read in the existing data
shanghai.head(n=1)  # show the first row to check that the data loaded correctly
```
### Every column has dtype object, which is inconvenient to work with; some columns need their units stripped and their values converted to int or float
### Some columns are redundant, such as house_img, and need to be deleted
### Some columns, such as house_desc, contain several pieces of information that need to be extracted and handled separately
```
shanghai.describe()
# check for missing values
shanghai.info()
#np.isnan(shanghai).any()
# data cleaning: drop rows that contain NaN
shanghai.dropna(inplace=True)
df=shanghai.copy()
house_desc=df['house_desc']
house_desc[0]
```
### house_desc contains the room/hall layout, the floor area, the floor level and the orientation; each needs to be pulled out into its own column, which we do below
```
df['layout']=df['house_desc'].map(lambda x:x.split('|')[0])
df['area']=df['house_desc'].map(lambda x:x.split('|')[1])
df['temp']=df['house_desc'].map(lambda x:x.split('|')[2])
#df['Dirextion']=df['house_desc'].map(lambda x:x.split('|')[3])
df['floor']=df['temp'].map(lambda x:x.split('/')[0])
df.head(n=1)
```
### Some columns carry units, which gets in the way of later processing; strip the units and convert the data types to float or int
```
df['area']=df['area'].apply(lambda x:x.rstrip('平'))
df['singel_price']=df['singel_price'].apply(lambda x:x.rstrip('元/平'))
df['singel_price']=df['singel_price'].apply(lambda x:x.lstrip('单价'))
df['district']=df['district'].apply(lambda x:x.rstrip('二手房'))
df['house_time']=df['house_time'].apply(lambda x:str(x))
df['house_time']=df['house_time'].apply(lambda x:x.rstrip('年建'))
df.head(n=1)
```
### Drop some columns we don't need, as well as house_desc and temp
```
del df['house_img']
del df['s_cate_href']
del df['house_desc']
del df['zone_href']
del df['house_href']
del df['temp']
```
### Compute the price per square meter from the total price and the floor area
### Extract keywords from the house_title description: if it mentions 交通便利 (convenient transport) or 地铁 (metro), the listing is treated as having convenient transport, otherwise not
```
df.head(n=1)
df['singel_price']=df['singel_price'].apply(lambda x:float(x))
df['area']=df['area'].apply(lambda x:float(x))
df.head(n=1)
df.head(n=1)
df['house_title']=df['house_title'].apply(lambda x:str(x))
df['trafic']=df['house_title'].apply(lambda x:'交通便利' if x.find("交通便利")>=0 or x.find("地铁")>=0 else "交通不便" )
df.head(n=1)
```
## 2. Using the columns above, visualize how housing prices relate to factors such as district, floor area and floor level
```
df_house_count = df.groupby('district')['house_price'].count().sort_values(ascending=False).to_frame().reset_index()
df_house_mean = df.groupby('district')['singel_price'].mean().sort_values(ascending=False).to_frame().reset_index()
f, [ax1,ax2,ax3] = plt.subplots(3,1,figsize=(20,15))
sns.barplot(x='district', y='singel_price', palette="Reds_d", data=df_house_mean, ax=ax1)
ax1.set_title('上海各大区二手房每平米单价对比',fontsize=15)
ax1.set_xlabel('区域')
ax1.set_ylabel('每平米单价')
sns.countplot(df['district'], ax=ax2)
sns.boxplot(x='district', y='house_price', data=df, ax=ax3)
ax3.set_title('上海各大区二手房房屋总价',fontsize=15)
ax3.set_xlabel('区域')
ax3.set_ylabel('房屋总价')
plt.show()
```
### The three charts above show how unit price, listing count and total price relate to district.
#### The first chart shows that unit price depends on the district, with Huangpu and Jing'an the most expensive; this relates to each district's level of development, transport convenience and distance from the city center
#### The second chart directly shows the number of second-hand homes in each district, with Pudong having the most
#### The third chart shows that Shanghai second-hand home prices are mostly around ten million yuan, with few above twenty million
```
f, [ax1,ax2] = plt.subplots(1, 2, figsize=(15, 5))
# Distribution of second-hand home floor areas
sns.distplot(df['area'], bins=20, ax=ax1, color='r')
sns.kdeplot(df['area'], shade=True, ax=ax1)
# Relationship between floor area and total price
sns.regplot(x='area', y='house_price', data=df, ax=ax2)
plt.show()
```
### The first chart (left) shows that most second-hand homes are between 60 and 200 square meters, with about 100 square meters being the most common
### The second chart shows that total price is roughly proportional to floor area, which matches common sense
```
areas=[len(df[df.area<100]),len(df[(df.area>100)&(df.area<200)]),len(df[df.area>200])]
labels=['area<100' , '100<area<200','area>200']
plt.pie(areas,labels= labels,autopct='%0f%%',shadow=True)
plt.show()
# draw a pie chart
```
### Splitting floor area into three tiers (under 100, between 100 and 200, and above 200 square meters), we find that 69% of the homes are under 100 square meters, 25% are between 100 and 200, and only 4% are above 200
```
df.loc[df['area']>1000]
# inspect samples with area > 1000; there is only one
f, ax1= plt.subplots(figsize=(20,20))
sns.countplot(y='layout', data=df, ax=ax1)
ax1.set_title('房屋户型',fontsize=15)
ax1.set_xlabel('数量')
ax1.set_ylabel('户型')
f, ax2= plt.subplots(figsize=(20,20))
sns.barplot(y='layout', x='house_price', data=df, ax=ax2)
plt.show()
```
### The two charts above show the number of listings and the prices for each layout
#### The first chart shows that 2-room-1-hall is the most common, with 2-room-2-hall and 3-room-2-hall also frequent; these are the mainstream layouts
#### The second chart shows that prices rise as the number of rooms and halls increases, but the ratio of rooms to halls has to be sensible
```
a1=0
a2=0
for x in df['trafic']:
if x=='交通便利':
a1=a1+1
else:
a2=a2+1
sizes=[a1,a2]
labels=['交通便利' , '交通不便']
plt.pie(sizes,labels= labels,autopct='%0f%%',shadow=True)
plt.show()
```
#### The chart above shows the transport-convenience breakdown of Shanghai second-hand homes: about 61% are labeled as having inconvenient transport and 38% as convenient. Since transport convenience was extracted only from the listing descriptions, the true share of homes with convenient transport is likely higher
```
f, [ax1,ax2] = plt.subplots(1, 2, figsize=(20, 10))
sns.countplot(df['trafic'], ax=ax1)
ax1.set_title('交通是否便利数量对比',fontsize=15)
ax1.set_xlabel('交通是否便利')
ax1.set_ylabel('数量')
sns.barplot(x='trafic', y='house_price', data=df, ax=ax2)
ax2.set_title('交通是否便利房价对比',fontsize=15)
ax2.set_xlabel('交通是否便利')
ax2.set_ylabel('总价')
plt.show()
```
### The chart on the left shows the number of second-hand homes with convenient and inconvenient transport, consistent with the pie chart above
### The chart on the right shows the relationship between transport convenience and price: homes with convenient transport are more expensive
```
f, ax1= plt.subplots(figsize=(20,5))
sns.countplot(x='floor', data=df, ax=ax1)
ax1.set_title('楼层',fontsize=15)
ax1.set_xlabel('楼层数')
ax1.set_ylabel('数量')
f, ax2 = plt.subplots(figsize=(20, 5))
sns.barplot(x='floor', y='house_price', data=df, ax=ax2)
ax2.set_title('楼层',fontsize=15)
ax2.set_xlabel('楼层数')
ax2.set_ylabel('总价')
plt.show()
```
#### Floor level (low, middle, high, basement) versus listing count and price. Most listings are on high, middle or low floors
## 3. Build a simple Shanghai second-hand housing price prediction model from the data
### Do some more simple preprocessing: split the layout column into rooms (室) and halls (厅)
```
df[['室','厅']] = df['layout'].str.extract(r'(\d+)室(\d+)厅')
df['室'] = df['室'].astype(float)
df['厅'] = df['厅'].astype(float)
del df['layout']
df.head()
df.dropna(inplace=True)
df.info()
df.columns
```
### Drop information we don't need, such as the free-text descriptions of each house
```
del df['house_title']
del df['house_detail']
del df['s_cate']
from sklearn.linear_model import LinearRegression
linear = LinearRegression()
area=df['area']
price=df['house_price']
area = np.array(area).reshape(-1,1)  # note: newer versions of sklearn expect 2-D arrays, so reshape before fitting
price = np.array(price).reshape(-1,1)
# fit the model
model = linear.fit(area,price)
# print the intercept and regression coefficient
print(model.intercept_, model.coef_)
linear_p = model.predict(area)
plt.figure(figsize=(12,6))
plt.scatter(area,price)
plt.plot(area,linear_p,'red')
plt.xlabel("area")
plt.ylabel("price")
plt.show()
```
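As a quick usage sketch of the fitted model above, we can plug a few floor areas (in square meters) into `model.predict` and read off the predicted total prices; the chosen example areas are arbitrary:
```
# Predict total prices for a few example floor areas with the fitted linear model
areas_to_check = np.array([[50], [100], [200]])   # square meters, 2-D array as sklearn expects
predicted_prices = model.predict(areas_to_check)
for a, p in zip(areas_to_check.ravel(), predicted_prices.ravel()):
    print(f'area = {a} m^2 -> predicted total price = {p:.1f}')
```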
#### Above, a simple linear regression is used to predict prices: the red line is the predicted price and the blue points are the actual values. For areas below 1000 square meters the actual values cluster closely around the prediction
# Note!
## When programming in Jupyter Notebook, first check whether the notebook is readable and writable

## If it shows read-only, open a terminal (CMD) and run sudo chmod -R 777 filename to grant permissions on the folder, then reopen Jupyter Notebook so the file can be saved.
| github_jupyter |
<a href="https://colab.research.google.com/github/huan/concise-chit-chat/blob/master/Concise_Chit_Chat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Concise Chit Chat
GitHub Repository: <https://github.com/huan/concise-chit-chat>
## Code TODO:
1. create a DataLoader class for dataset preprocess. (Use tf.data.Dataset inside?)
1. Create a PyPI package for easy load cornell movie curpos dataset(?)
1. Use PyPI module `embeddings` to load `GLOVES`, or use tfhub to load `GLOVES`?
1. How to do a `clip_norm`(or set `clip_value`) in Keras with Eager mode but without `tf.contrib`?
1. Better name for variables & functions
1. Code clean
1. Encapsulate all layers to Model Class:
1. ChitChatEncoder
1. ChitChatDecoder
1. ChitChatModel
1. Re-style to follow the book
1. ...?
## Book Todo
1. Outlines
1. What's seq2seq
1. What's word embedding
1.
1. Split code into snips
1. Write for snips
1. Content cleaning and optimizing
1. ...?
## Other
1. `keras.callbacks.TensorBoard` instead of `tf.contrib.summary`?
- `model.fit(callbacks=[TensorBoard(...)])`
1. download url? - http://old.pep.com.cn/gzsx/jszx_1/czsxtbjxzy/qrzptgjzxjc/dzkb/dscl/
### config.py
```
'''doc'''
# GO for start of the sentence
# DONE for end of the sentence
GO = '\b'
DONE = '\a'
# max words per sentence
MAX_LEN = 20
```
### data_loader.py
```
'''
data loader
'''
import gzip
import re
from typing import (
# Any,
List,
Tuple,
)
import tensorflow as tf
import numpy as np
# from .config import (
# GO,
# DONE,
# MAX_LEN,
# )
DATASET_URL = 'https://github.com/huan/concise-chit-chat/releases/download/v0.0.1/dataset.txt.gz'
DATASET_FILE_NAME = 'concise-chit-chat-dataset.txt.gz'
class DataLoader():
'''data loader'''
def __init__(self) -> None:
print('DataLoader', 'downloading dataset from:', DATASET_URL)
dataset_file = tf.keras.utils.get_file(
DATASET_FILE_NAME,
origin=DATASET_URL,
)
print('DataLoader', 'loading dataset from:', dataset_file)
# dataset_file = './data/dataset.txt.gz'
# with open(path, encoding='iso-8859-1') as f:
with gzip.open(dataset_file, 'rt') as f:
self.raw_text = f.read().lower()
self.queries, self.responses \
= self.__parse_raw_text(self.raw_text)
self.size = len(self.queries)
def get_batch(
self,
batch_size=32,
) -> Tuple[List[List[str]], List[List[str]]]:
'''get batch'''
# print('corpus_list', self.corpus)
batch_indices = np.random.choice(
len(self.queries),
size=batch_size,
)
batch_queries = self.queries[batch_indices]
batch_responses = self.responses[batch_indices]
return batch_queries, batch_responses
def __parse_raw_text(
self,
raw_text: str
) -> Tuple[List[List[str]], List[List[str]]]:
'''doc'''
query_list = []
response_list = []
for line in raw_text.strip('\n').split('\n'):
query, response = line.split('\t')
query, response = self.preprocess(query), self.preprocess(response)
query_list.append('{} {} {}'.format(GO, query, DONE))
response_list.append('{} {} {}'.format(GO, response, DONE))
return np.array(query_list), np.array(response_list)
def preprocess(self, text: str) -> str:
'''doc'''
new_text = text
new_text = re.sub('[^a-zA-Z0-9 .,?!]', ' ', new_text)
new_text = re.sub(' +', ' ', new_text)
new_text = re.sub(
'([\w]+)([,;.?!#&-\'\"-]+)([\w]+)?',
r'\1 \2 \3',
new_text,
)
if len(new_text.split()) > MAX_LEN:
new_text = (' ').join(new_text.split()[:MAX_LEN])
match = re.search('[.?!]', new_text)
if match is not None:
idx = match.start()
new_text = new_text[:idx+1]
new_text = new_text.strip().lower()
return new_text
```
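A minimal usage sketch for the loader above (it assumes the config cell defining GO, DONE and MAX_LEN has been run, and it downloads the dataset on first use, so it needs network access):
```
# Quick smoke test of the DataLoader
data_loader = DataLoader()
print('dataset size:', data_loader.size)

queries, responses = data_loader.get_batch(batch_size=4)
for q, r in zip(queries, responses):
    print('Q:', q)
    print('A:', r)
```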
### vocabulary.py
```
'''doc'''
import re
from typing import (
List,
)
import tensorflow as tf
# from .config import (
# DONE,
# GO,
# MAX_LEN,
# )
class Vocabulary:
'''voc'''
def __init__(self, text: str) -> None:
self.tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
self.tokenizer.fit_on_texts(
[GO, DONE] + re.split(
r'[\s\t\n]',
text,
)
)
# additional 1 for the index 0
self.size = 1 + len(self.tokenizer.word_index.keys())
def texts_to_padded_sequences(
self,
text_list: List[List[str]]
) -> tf.Tensor:
'''doc'''
sequence_list = self.tokenizer.texts_to_sequences(text_list)
padded_sequences = tf.keras.preprocessing.sequence.pad_sequences(
sequence_list,
maxlen=MAX_LEN,
padding='post',
truncating='post',
)
return padded_sequences
    def padded_sequences_to_texts(self, sequence: List[int]) -> str:
        return 'tbw'  # placeholder stub; see the sketch after this cell
```
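The `padded_sequences_to_texts` method above is only a placeholder stub. A minimal sketch of one possible implementation (the helper name is hypothetical; it assumes the Keras tokenizer's `index_word` mapping and that index 0 is reserved for padding):
```
def padded_sequence_to_text(tokenizer, sequence) -> str:
    '''convert one padded index sequence back into a space-joined text (sketch)'''
    words = [
        tokenizer.index_word[index]
        for index in sequence
        if index != 0  # index 0 is the padding index and has no word
    ]
    return ' '.join(words)
```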
### model.py
```
'''doc'''
import tensorflow as tf
import numpy as np
from typing import (
List,
)
# from .vocabulary import Vocabulary
# from .config import (
# DONE,
# GO,
# MAX_LENGTH,
# )
EMBEDDING_DIM = 300
LATENT_UNIT_NUM = 500
class ChitEncoder(tf.keras.Model):
'''encoder'''
def __init__(
self,
) -> None:
super().__init__()
self.lstm_encoder = tf.keras.layers.CuDNNLSTM(
units=LATENT_UNIT_NUM,
return_state=True,
)
def call(
self,
inputs: tf.Tensor, # shape: [batch_size, max_len, embedding_dim]
training=None,
mask=None,
) -> tf.Tensor:
_, *state = self.lstm_encoder(inputs)
return state # shape: ([latent_unit_num], [latent_unit_num])
class ChatDecoder(tf.keras.Model):
'''decoder'''
def __init__(
self,
voc_size: int,
) -> None:
super().__init__()
self.lstm_decoder = tf.keras.layers.CuDNNLSTM(
units=LATENT_UNIT_NUM,
return_sequences=True,
return_state=True,
)
self.dense = tf.keras.layers.Dense(
units=voc_size,
)
self.time_distributed_dense = tf.keras.layers.TimeDistributed(
self.dense
)
self.initial_state = None
def set_state(self, state=None):
'''doc'''
# import pdb; pdb.set_trace()
self.initial_state = state
def call(
self,
inputs: tf.Tensor, # shape: [batch_size, None, embedding_dim]
training=False,
mask=None,
) -> tf.Tensor:
'''chat decoder call'''
# batch_size = tf.shape(inputs)[0]
# max_len = tf.shape(inputs)[0]
# outputs = tf.zeros(shape=(
# batch_size, # batch_size
# max_len, # max time step
# LATENT_UNIT_NUM, # dimention of hidden state
# ))
# import pdb; pdb.set_trace()
outputs, *states = self.lstm_decoder(inputs, initial_state=self.initial_state)
self.initial_state = states
outputs = self.time_distributed_dense(outputs)
return outputs
class ChitChat(tf.keras.Model):
'''doc'''
def __init__(
self,
vocabulary: Vocabulary,
) -> None:
super().__init__()
self.word_index = vocabulary.tokenizer.word_index
self.index_word = vocabulary.tokenizer.index_word
self.voc_size = vocabulary.size
# [batch_size, max_len] -> [batch_size, max_len, voc_size]
self.embedding = tf.keras.layers.Embedding(
input_dim=self.voc_size,
output_dim=EMBEDDING_DIM,
mask_zero=True,
)
self.encoder = ChitEncoder()
# shape: [batch_size, state]
self.decoder = ChatDecoder(self.voc_size)
# shape: [batch_size, max_len, voc_size]
def call(
self,
inputs: List[List[int]], # shape: [batch_size, max_len]
teacher_forcing_targets: List[List[int]]=None, # shape: [batch_size, max_len]
training=None,
mask=None,
) -> tf.Tensor: # shape: [batch_size, max_len, embedding_dim]
'''call'''
batch_size = tf.shape(inputs)[0]
inputs_embedding = self.embedding(tf.convert_to_tensor(inputs))
state = self.encoder(inputs_embedding)
self.decoder.set_state(state)
if training:
teacher_forcing_targets = tf.convert_to_tensor(teacher_forcing_targets)
teacher_forcing_embeddings = self.embedding(teacher_forcing_targets)
# outputs[:, 0, :].assign([self.__go_embedding()] * batch_size)
batch_go_embedding = tf.ones([batch_size, 1, 1]) * [self.__go_embedding()]
batch_go_one_hot = tf.ones([batch_size, 1, 1]) * [tf.one_hot(self.word_index[GO], self.voc_size)]
outputs = batch_go_one_hot
output = self.decoder(batch_go_embedding)
for t in range(1, MAX_LEN):
outputs = tf.concat([outputs, output], 1)
if training:
target = teacher_forcing_embeddings[:, t, :]
decoder_input = tf.expand_dims(target, axis=1)
else:
decoder_input = self.__indice_to_embedding(tf.argmax(output))
output = self.decoder(decoder_input)
return outputs
def predict(self, inputs: List[int], temperature=1.) -> List[int]:
'''doc'''
outputs = self([inputs])
outputs = tf.squeeze(outputs)
word_list = []
for t in range(1, MAX_LEN):
output = outputs[t]
indice = self.__logit_to_indice(output, temperature=temperature)
word = self.index_word[indice]
if indice == self.word_index[DONE]:
break
word_list.append(word)
return ' '.join(word_list)
def __go_embedding(self) -> tf.Tensor:
return self.embedding(
tf.convert_to_tensor(self.word_index[GO]))
def __logit_to_indice(
self,
inputs,
temperature=1.,
) -> int:
'''
[vocabulary_size]
convert one hot encoding to indice with temperature
'''
inputs = tf.squeeze(inputs)
prob = tf.nn.softmax(inputs / temperature).numpy()
indice = np.random.choice(self.voc_size, p=prob)
return indice
def __indice_to_embedding(self, indice: int) -> tf.Tensor:
tensor = tf.convert_to_tensor([[indice]])
return self.embedding(tensor)
```
### Train
### Tensor Board
[Quick guide to run TensorBoard in Google Colab](https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/)
`tensorboard` vs `tensorboard/` ?
```
LOG_DIR = '/content/data/tensorboard/'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
# Install
! npm install -g localtunnel
# Tunnel port 6006 (TensorBoard assumed running)
get_ipython().system_raw('lt --port 6006 >> url.txt 2>&1 &')
# Get url
! cat url.txt
'''train'''
import tensorflow as tf
# from chit_chat import (
# ChitChat,
# DataLoader,
# Vocabulary,
# )
tf.enable_eager_execution()
data_loader = DataLoader()
vocabulary = Vocabulary(data_loader.raw_text)
chitchat = ChitChat(vocabulary=vocabulary)
def loss(model, x, y) -> tf.Tensor:
'''doc'''
weights = tf.cast(
tf.not_equal(y, 0),
tf.float32,
)
prediction = model(
inputs=x,
teacher_forcing_targets=y,
training=True,
)
    # implement the following contrib function in a loop?
# https://stackoverflow.com/a/41135778/1123955
# https://stackoverflow.com/q/48025004/1123955
return tf.contrib.seq2seq.sequence_loss(
prediction,
tf.convert_to_tensor(y),
weights,
)
def grad(model, inputs, targets):
'''doc'''
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, model.variables)
def train() -> int:
'''doc'''
learning_rate = 1e-3
num_batches = 8000
batch_size = 128
print('Dataset size: {}, Vocabulary size: {}'.format(
data_loader.size,
vocabulary.size,
))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
root = tf.train.Checkpoint(
optimizer=optimizer,
model=chitchat,
optimizer_step=tf.train.get_or_create_global_step(),
)
root.restore(tf.train.latest_checkpoint('./data/save'))
print('checkpoint restored.')
writer = tf.contrib.summary.create_file_writer('./data/tensorboard')
writer.set_as_default()
global_step = tf.train.get_or_create_global_step()
for batch_index in range(num_batches):
global_step.assign_add(1)
queries, responses = data_loader.get_batch(batch_size)
encoder_inputs = vocabulary.texts_to_padded_sequences(queries)
decoder_outputs = vocabulary.texts_to_padded_sequences(responses)
grads = grad(chitchat, encoder_inputs, decoder_outputs)
optimizer.apply_gradients(
grads_and_vars=zip(grads, chitchat.variables)
)
if batch_index % 10 == 0:
print("batch %d: loss %f" % (batch_index, loss(
chitchat, encoder_inputs, decoder_outputs).numpy()))
root.save('./data/save/model.ckpt')
print('checkpoint saved.')
with tf.contrib.summary.record_summaries_every_n_global_steps(1):
# your model code goes here
tf.contrib.summary.scalar('loss', loss(
chitchat, encoder_inputs, decoder_outputs).numpy())
# print('summary had been written.')
return 0
def main() -> int:
'''doc'''
return train()
main()
#! rm -fvr data/tensorboard
# ! pwd
# ! rm -frv data/save
# ! rm -fr /content/data/tensorboard
# ! kill 2823
# ! kill -9 2823
# ! ps axf | grep lt
! cat url.txt
```
### chat.py
```
'''train'''
# import tensorflow as tf
# from chit_chat import (
# ChitChat,
# DataLoader,
# Vocabulary,
# DONE,
# GO,
# )
# tf.enable_eager_execution()
def main() -> int:
'''chat main'''
data_loader = DataLoader()
vocabulary = Vocabulary(data_loader.raw_text)
print('Dataset size: {}, Vocabulary size: {}'.format(
data_loader.size,
vocabulary.size,
))
chitchat = ChitChat(vocabulary)
checkpoint = tf.train.Checkpoint(model=chitchat)
checkpoint.restore(tf.train.latest_checkpoint('./data/save'))
print('checkpoint restored.')
return cli(chitchat, vocabulary=vocabulary, data_loader=data_loader)
def cli(chitchat: ChitChat, data_loader: DataLoader, vocabulary: Vocabulary):
'''command line interface'''
index_word = vocabulary.tokenizer.index_word
word_index = vocabulary.tokenizer.word_index
query = ''
while True:
try:
# Get input sentence
query = input('> ').lower()
# Check if it is quit case
if query == 'q' or query == 'quit':
break
# Normalize sentence
query = data_loader.preprocess(query)
query = '{} {} {}'.format(GO, query, DONE)
# Evaluate sentence
query_sequence = vocabulary.texts_to_padded_sequences([query])[0]
response_sequence = chitchat.predict(query_sequence, 1)
# Format and print response sentence
response_word_list = [
index_word[indice]
for indice in response_sequence
if indice != 0 and indice != word_index[DONE]
]
print('Bot:', ' '.join(response_word_list))
except KeyError:
print("Error: Encountered unknown word.")
main()
! cat /proc/cpuinfo
```
# Scraping General Team Characteristics
General characteristics for each team are obtained from the whoscored page.
The statistics are from the most recent season.

```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import Select
option = webdriver.ChromeOptions()
option.add_argument("--incognito")  # note: two ASCII hyphens, not an em dash
browser = webdriver.Chrome(executable_path='./chromedriver',
chrome_options=option)
team_links = [
'https://es.whoscored.com/Teams/326/Archive/Rusia-Russia',
'https://es.whoscored.com/Teams/967/Archive/Uruguay-Uruguay',
'https://es.whoscored.com/Teams/340/Archive/Portugal-Portugal',
'https://es.whoscored.com/Teams/338/Archive/Espa%C3%B1a-Spain',
'https://es.whoscored.com/Teams/1293/Archive/Ir%C3%A1n-Iran',
'https://es.whoscored.com/Teams/341/Archive/Francia-France',
'https://es.whoscored.com/Teams/328/Archive/Australia-Australia',
'https://es.whoscored.com/Teams/416/Archive/Per%C3%BA-Peru',
'https://es.whoscored.com/Teams/425/Archive/Dinamarca-Denmark',
'https://es.whoscored.com/Teams/346/Archive/Argentina-Argentina',
'https://es.whoscored.com/Teams/770/Archive/Islandia-Iceland',
'https://es.whoscored.com/Teams/337/Archive/Croacia-Croatia',
'https://es.whoscored.com/Teams/977/Archive/Nigeria-Nigeria',
'https://es.whoscored.com/Teams/409/Archive/Brasil-Brazil',
'https://es.whoscored.com/Teams/423/Archive/Suiza-Switzerland',
'https://es.whoscored.com/Teams/970/Archive/Costa-Rica-Costa-Rica',
'https://es.whoscored.com/Teams/336/Archive/Alemania-Germany',
'https://es.whoscored.com/Teams/972/Archive/M%C3%A9xico-Mexico',
'https://es.whoscored.com/Teams/344/Archive/Suecia-Sweden',
'https://es.whoscored.com/Teams/1159/Archive/Corea-Del-sur-South-Korea',
'https://es.whoscored.com/Teams/339/Archive/B%C3%A9lgica-Belgium',
'https://es.whoscored.com/Teams/959/Archive/T%C3%BAnez-Tunisia',
'https://es.whoscored.com/Teams/345/Archive/Inglaterra-England',
'https://es.whoscored.com/Teams/342/Archive/Polonia-Poland',
'https://es.whoscored.com/Teams/957/Archive/Senegal-Senegal',
'https://es.whoscored.com/Teams/408/Archive/Colombia-Colombia',
'https://es.whoscored.com/Teams/986/Archive/Japan-Japan'
]
def wait_browser(browser, load_xpath, timeout=20):
try:
WebDriverWait(browser, timeout).until(
EC.visibility_of_element_located(
(By.XPATH, load_xpath)))
except TimeoutException:
print("Timed out waiting for page to load")
browser.quit()
import pandas as pd
data = []
for team_link in team_links:
team_data = {}
browser.get(team_link)
wait_browser(browser, '//a[@class="team-link"]')
team_data['Equipo'] = browser.find_element_by_xpath('//a[@class="team-link"]').text
sidebox = browser.find_element_by_xpath('//div[@class="team-profile-side-box"]')
    team_data['rating'] = sidebox.find_element_by_xpath('.//div[@class="rating"]').text  # './/' keeps the search relative to sidebox
stats_container = browser.find_element_by_xpath('//div[@class="stats-container"]')
labels = stats_container.find_elements_by_tag_name('dt')
values = stats_container.find_elements_by_tag_name('dd')
for l, v in zip(labels, values):
team_data[l.text] = v.text
data.append(team_data)
df = pd.DataFrame(data)
df.head()
df
df.to_csv("características_equipos.csv", index=False)
```
# Processing the team data
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv("características_equipos.csv")
df['rating'] = df['rating'].apply(lambda x: x.replace(',', '.')).astype(float)
x = range(len(df['rating']))
df = df.sort_values('rating')
plt.barh(x, df['rating'])
plt.yticks(x, df['Equipo'])
```
```
from IPython.core.debugger import set_trace
import numpy as np
import import_ipynb
from environment import *
def heuristicLWR(num_jobs, num_mc, machines, durations):
machines_ = np.array(machines)
tmp = np.zeros((num_jobs,num_mc+1), dtype=int)
tmp[:,:-1] = machines_
machines_ = tmp
durations_ = np.array(durations)
tmp = np.zeros((num_jobs,num_mc+1), dtype=int)
tmp[:,:-1] = durations_
durations_ = tmp
indices = np.zeros([num_jobs], dtype=int)
# Internal variables
previousTaskReadyTime = np.zeros([num_jobs], dtype=int)
machineReadyTime = np.zeros([num_mc], dtype=int)
placements = [[] for _ in range(num_mc)]
# While...
while(not np.array_equal(indices, np.ones([num_jobs], dtype=int)*num_mc)):
machines_Idx = machines_[range(num_jobs),indices]
durations_Idx = durations_[range(num_jobs),indices]
# 1: Check previous Task and machine availability
mask = np.zeros([num_jobs], dtype=bool)
for j in range(num_jobs):
if previousTaskReadyTime[j] == 0 and machineReadyTime[machines_Idx[j]] == 0 and indices[j]<num_mc:
mask[j] = True
        # 2: Competition: each machine takes the eligible job with the Least Work Remaining (LWR)
for m in range(num_mc):
job = None
remaining = 99999
for j in range(num_jobs):
tmp = np.sum(durations_[j][indices[j]:])
if machines_Idx[j] == m and tmp < remaining and mask[j]:
job = j
remaining = tmp
            if job is not None:
placements[m].append([job, indices[job]])
previousTaskReadyTime[job] += durations_Idx[job]
machineReadyTime[m] += durations_Idx[job]
indices[job] += 1
# time +1
previousTaskReadyTime = np.maximum(previousTaskReadyTime - 1 , np.zeros([num_jobs], dtype=int))
machineReadyTime = np.maximum(machineReadyTime - 1 , np.zeros([num_mc], dtype=int))
return placements
if __name__ == "__main__":
# Import environment
config = Config()
config.machine_profile = "xsmall_default"
config.job_profile = "xsmall_default"
config.reconfigure()
# Configure environment
env = Environment(config)
env.clear()
# Read problem instance
filename = "datasets/inference/dataset_xsmall.data"
with open(filename, "r") as file:
NB_JOBS, NB_MACHINES = [int(v) for v in file.readline().split()]
JOBS = [[int(v) for v in file.readline().split()] for i in range(NB_JOBS)]
#-----------------------------------------------------------------------------
# Prepare the data for modeling
#-----------------------------------------------------------------------------
# Build list of machines. MACHINES[j][s] = id of the machine for the operation s of the job j
machines = [[JOBS[j][2 * s] for s in range(NB_MACHINES)] for j in range(NB_JOBS)]
# Build list of durations. DURATION[j][s] = duration of the operation s of the job j
durations = [[JOBS[j][2 * s + 1] for s in range(NB_MACHINES)] for j in range(NB_JOBS)]
placements = heuristicLWR(NB_JOBS, NB_MACHINES, machines, durations)
env.step(machines, durations, placements)
print("Makespan: ", env.makespan)
env.plot(save=False)
```
<img src="../../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# _*Quantum K-Means algorithm*_
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
***
### Contributors
Shan Jin, Xi He, Xiaokai Hou, Li Sun, Dingding Wen, Shaojun Wu and Xiaoting Wang$^{1}$
1. Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China,Chengdu, China,610051
***
## Introduction
Clustering is a typical unsupervised learning task, mainly used to automatically group similar samples into the same category. In a clustering algorithm, samples are divided into categories according to the similarity between them; different similarity measures lead to different clustering results. The most commonly used similarity measure is the Euclidean distance.
What we want to show is the quantum K-Means algorithm. K-Means is a distance-based clustering algorithm that uses distance as the measure of similarity: the closer two objects are, the more similar they are. The algorithm assumes that a cluster is composed of objects that are close together, so compact and well-separated clusters are the ultimate target.
#### Experiment design
The implementation of the quantum K-Means algorithm mainly uses the swap test to compare the distances among the input data points. Select K points randomly from the N data points as centroids, measure the distance from each point to each centroid and assign the point to the class of the nearest centroid, recalculate the centroid of each class, and repeat the last two steps until the change in the centroids is at or below a specified threshold, at which point the algorithm ends. In our example, we selected 6 data points and 2 centroids and used the swap-test circuit to calculate the distances. Finally, we obtained two clusters of data points.
$|0\rangle$ is an auxiliary qubit; after the first $H$ gate it becomes $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. Then, controlled on the ancilla being $|1\rangle$, the circuit swaps the two vectors $|x\rangle$ and $|y\rangle$. Finally, we get the result at the right end of the circuit:
$$|0_{anc}\rangle |x\rangle |y\rangle \rightarrow \frac{1}{2}|0_{anc}\rangle(|xy\rangle + |yx\rangle) + \frac{1}{2}|1_{anc}\rangle(|xy\rangle - |yx\rangle)$$
If we measure the auxiliary qubit alone, the probability of finding it in the state $|1\rangle$ is:
$$P(|1_{anc}\rangle) = \frac{1}{2} - \frac{1}{2}|\langle x | y \rangle|^2$$
From this overlap, the Euclidean distance between the two (normalized) data points can be estimated as:
$$Euclidean \ distance = \sqrt{(2 - 2|\langle x | y \rangle|)}$$
So we can see that the probability of measuring $|1\rangle$ is positively correlated with the Euclidean distance.
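To make this relation concrete, here is a minimal sketch (not part of the original experiment) that turns the measured counts of the ancilla qubit into estimates of $|\langle x|y\rangle|^2$ and the Euclidean distance, using the two formulas above. It assumes Qiskit's little-endian bit ordering, so the ancilla measured into `cr[2]` is the third character from the right of each counts key:
```
from math import sqrt

def counts_to_distance(counts, shots=1024):
    '''Estimate the overlap and Euclidean distance from swap-test counts (sketch).'''
    # probability of finding the ancilla (classical bit cr[2]) in state |1>
    ones = sum(n for bitstring, n in counts.items() if bitstring[-3] == '1')
    p1 = ones / shots
    # P(1) = 1/2 - 1/2 * |<x|y>|^2  =>  |<x|y>|^2 = 1 - 2 * P(1)
    overlap_sq = max(0.0, 1.0 - 2.0 * p1)
    # Euclidean distance = sqrt(2 - 2 * |<x|y>|)
    distance = sqrt(max(0.0, 2.0 - 2.0 * sqrt(overlap_sq)))
    return overlap_sq, distance
```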
The schematic diagram of quantum K-Means is shown in the following picture.[[1]](#cite)
<img src="../images/k_means_circuit.png">
To make our algorithm runnable with Qiskit, we designed a more detailed circuit to implement it.
#### Quantum K-Means circuit
<img src="../images/k_means.png">
## Data points
<table border="1">
<tr>
<td>point num</td>
<td>theta</td>
<td>phi</td>
<td>lam</td>
<td>x</td>
<td>y</td>
</tr>
<tr>
<td>1</td>
<td>0.01</td>
<td>pi</td>
<td>pi</td>
<td>0.710633</td>
<td>0.703562</td>
</tr>
<tr>
<td>2</td>
<td>0.02</td>
<td>pi</td>
<td>pi</td>
<td>0.714142</td>
<td>0.7</td>
</tr>
<tr>
<td>3</td>
<td>0.03</td>
<td>pi</td>
<td>pi</td>
<td>0.717633</td>
<td>0.696421</td>
</tr>
<tr>
<td>4</td>
<td>0.04</td>
<td>pi</td>
<td>pi</td>
<td>0.721107</td>
<td>0.692824</td>
</tr>
<tr>
<td>5</td>
<td>0.05</td>
<td>pi</td>
<td>pi</td>
<td>0.724562</td>
<td>0.68921</td>
</tr>
<tr>
<td>6</td>
<td>1.31</td>
<td>pi</td>
<td>pi</td>
<td>0.886811</td>
<td>0.462132</td>
</tr>
<tr>
<td>7</td>
<td>1.32</td>
<td>pi</td>
<td>pi</td>
<td>0.889111</td>
<td>0.457692</td>
</tr>
<tr>
<td>8</td>
<td>1.33</td>
<td>pi</td>
<td>pi</td>
<td>0.891388</td>
<td>0.453241</td>
</tr>
<tr>
<td>9</td>
<td>1.34</td>
<td>pi</td>
<td>pi</td>
<td>0.893643</td>
<td>0.448779</td>
</tr>
<tr>
<td>10</td>
<td>1.35</td>
<td>pi</td>
<td>pi</td>
<td>0.895876</td>
<td>0.444305</td>
</tr>
</table>
## Quantum K-Means algorithm program
```
# import math lib
from math import pi
# import Qiskit
from qiskit import Aer, IBMQ, execute
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
# To use local qasm simulator
backend = Aer.get_backend('qasm_simulator')
```
In this section we import the Qiskit and math packages needed to implement the following code. We run our algorithm on the local `qasm_simulator`; if you need to run it on a real quantum computer, please remove the "#" in front of "import Qconfig".
```
theta_list = [0.01, 0.02, 0.03, 0.04, 0.05, 1.31, 1.32, 1.33, 1.34, 1.35]
```
Here we import the number pi from the math library, because we need it for the u3 gates, and we define the list of theta parameters that will be used in the u3 gates. As above, if you want to run on a real quantum computer, remove the "#" symbols and configure your local Qconfig.py file.
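Before the full loop below, it may help to see the swap test for a single pair of angles in isolation. This is a sketch only (it assumes the imports and `backend` from the cell above, and builds a fresh circuit for just one pair):
```
def swap_test_counts(theta_1, theta_2, shots=1024):
    '''Build and run the swap-test circuit for one pair of data points (sketch).'''
    qr = QuantumRegister(5, name="qr")
    cr = ClassicalRegister(5, name="cr")
    qc = QuantumCircuit(qr, cr, name="k_means_pair")
    # encode the two data points on qr[1] and qr[4], prepare the ancilla qr[2]
    qc.h(qr[2])
    qc.h(qr[1])
    qc.h(qr[4])
    qc.u3(theta_1, pi, pi, qr[1])
    qc.u3(theta_2, pi, pi, qr[4])
    # controlled swap followed by the second Hadamard on the ancilla
    qc.cswap(qr[2], qr[1], qr[4])
    qc.h(qr[2])
    qc.measure(qr[2], cr[2])
    job = execute(qc, backend=backend, shots=shots)
    return job.result().get_counts()
```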
```
# create Quantum Register called "qr" with 5 qubits
qr = QuantumRegister(5, name="qr")
# create Classical Register called "cr" with 5 bits
cr = ClassicalRegister(5, name="cr")
# Creating Quantum Circuit called "qc" involving your Quantum Register "qr"
# and your Classical Register "cr"
qc = QuantumCircuit(qr, cr, name="k_means")
#Define a loop to compute the distance between each pair of points
for i in range(9):
for j in range(1,10-i):
        # Set the parameter theta for the two data points being compared
theta_1 = theta_list[i]
theta_2 = theta_list[i+j]
#Achieve the quantum circuit via qiskit
qc.h(qr[2])
qc.h(qr[1])
qc.h(qr[4])
qc.u3(theta_1, pi, pi, qr[1])
qc.u3(theta_2, pi, pi, qr[4])
qc.cswap(qr[2], qr[1], qr[4])
qc.h(qr[2])
qc.measure(qr[2], cr[2])
qc.reset(qr)
job = execute(qc, backend=backend, shots=1024)
result = job.result()
print(result)
print('theta_1:' + str(theta_1))
print('theta_2:' + str(theta_2))
# print( result.get_data(qc))
plot_histogram(result.get_counts())
```
Here we implement the distance-comparison loop and run the program. Considering the qubit coupling (control direction) of ibmqx4, we take quantum registers 1, 2 and 4 as our working registers; if you want to run this program on another device, please redesign the circuit structure so that the program runs correctly.
## Result analysis
In this program we take quantum registers 1, 2 and 4 as our working registers (to satisfy the constraints of ibmqx4). Registers 1 and 4 store the input data points, and register 2 is the control register that decides whether the swap operator is applied. To estimate the distance between every pair of data points, we run the K-Means circuit in a loop. In the end, we measure the control register: the probability of obtaining 1 indicates the distance between the two data points.
## Reference
<cite>[1].Quantum algorithms for supervised and unsupervised machine learning(*see open access: [ arXiv:1307.0411v2](https://arxiv.org/abs/1307.0411)*)</cite><a id='cite'></a>
```
from tqdm.notebook import tqdm
import math
import gym
import torch
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
from collections import deque
from active_rl.networks.dqn_atari import ENS_DQN
from active_rl.utils.memory import LabelledReplayMemory
from active_rl.utils.optimization import AMN_optimization_ensemble
from active_rl.environments.atari_wrappers import make_atari, wrap_deepmind
from active_rl.utils.atari_utils import fp, ActionSelector, evaluate
from active_rl.utils.acquisition_functions import ens_random
env_name = 'Breakout'
env_raw = make_atari('{}NoFrameskip-v4'.format(env_name))
env = wrap_deepmind(env_raw, frame_stack=False, episode_life=False, clip_rewards=True)
c, h, w = fp(env.reset()).shape
n_actions = env.action_space.n
BATCH_SIZE = 64
LR = 0.0000625
GAMMA = 0.99
EPS = 0.05
NUM_STEPS = 10000000
NOT_LABELLED_CAPACITY = 1000
LABELLED_CAPACITY = 100000
INITIAL_STEPS=NOT_LABELLED_CAPACITY
TRAINING_PER_LABEL = 20.
PERCENTAGE = 0.1
TRAINING_ITER = int(TRAINING_PER_LABEL * NOT_LABELLED_CAPACITY * PERCENTAGE)
NAME = f"AMN_ens_random_breakout_original_label_percentage_{PERCENTAGE}_batch_size_{PERCENTAGE * NOT_LABELLED_CAPACITY}"
device = 'cuda:0'
# AMN_net = MC_DQN(n_actions).to(device)
AMN_net = ENS_DQN(n_actions).to(device)
expert_net = torch.load("models/dqn_expert_breakout_model", map_location=device)
AMN_net.apply(AMN_net.init_weights)
expert_net.eval()
# optimizer = optim.Adam(AMN_net.parameters(), lr=LR, eps=1.5e-4)
optimizer = optim.Adam(AMN_net.parameters(), lr=LR, eps=1.5e-4)
memory = LabelledReplayMemory(NOT_LABELLED_CAPACITY, LABELLED_CAPACITY, [5,h,w], n_actions, ens_random, AMN_net, device=device)
action_selector = ActionSelector(EPS, EPS, AMN_net, 1, n_actions, device)
steps_done = 0
writer = SummaryWriter(f'runs/{NAME}')
q = deque(maxlen=5)
done=True
eps = 0
episode_len = 0
num_labels = 0
progressive = tqdm(range(NUM_STEPS), total=NUM_STEPS, ncols=400, leave=False, unit='b')
for step in progressive:
if done:
env.reset()
sum_reward = 0
episode_len = 0
img, _, _, _ = env.step(1) # BREAKOUT specific !!!
for i in range(10): # no-op
n_frame, _, _, _ = env.step(0)
n_frame = fp(n_frame)
q.append(n_frame)
# Select and perform an action
state = torch.cat(list(q))[1:].unsqueeze(0)
action, eps = action_selector.select_action(state)
n_frame, reward, done, info = env.step(action)
n_frame = fp(n_frame)
# 5 frame as memory
q.append(n_frame)
memory.push(torch.cat(list(q)).unsqueeze(0), action, reward, done) # here the n_frame means next frame from the previous time step
episode_len += 1
# Perform one step of the optimization (on the target network)
if step % NOT_LABELLED_CAPACITY == 0 and step > 0:
num_labels += memory.label_sample(percentage=PERCENTAGE,batch_size=BATCH_SIZE);
loss = 0
for _ in range(TRAINING_ITER):
loss += AMN_optimization_ensemble(AMN_net, expert_net, optimizer, memory, batch_size=BATCH_SIZE,
device=device)
loss /= TRAINING_ITER
writer.add_scalar('Performance/loss', loss, num_labels)
if step % 10000 == 0 and step > 0:
evaluated_reward = evaluate(step, AMN_net, device, env_raw, n_actions, eps=0.05, num_episode=15)
writer.add_scalar('Performance/reward_vs_label', evaluated_reward, num_labels)
writer.add_scalar('Performance/reward_vs_step', evaluated_reward, step)
evaluated_reward_expert = evaluate(step, expert_net, device, env_raw, n_actions, eps=0.05, num_episode=15)
writer.add_scalar('Performance/reward_expert_vs_step', evaluated_reward_expert, step)
```
# Seminar for Lecture 13 "VAE Vocoder"
In the lectures, we studied various approaches to creating vocoders. The problem of sound generation is solved by deep generative models. We've discussed autoregressive models that can be reduced to **MAF**. We've considered the inverse analogue of MAF – **IAF**. We've seen how **normalizing flows** can help us directly optimize likelihood without using autoregression. And we've also considered a vocoder built with the **GAN** paradigm.
At this seminar we will try to apply another popular generative model: the **variational autoencoder (VAE)**. We will try to build an encoder-decoder architecture with **MAF** as encoder and **IAF** as decoder. We will train this network by maximizing ELBO with a couple of additional losses (in vocoders, you can't do without them yet 🤷♂️).
⚠️ In this seminar, by **"MAF"** we mean not the generative model discussed in the lecture, but a network whose architecture is like MAF's and which accepts audio as input. So we won't be modelling the data distribution with our **"MAF"**.
```
# ! pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
# ! pip install numpy==1.17.5 matplotlib==3.3.3 tqdm==4.54.0
import torch
from torch import nn
from torch.nn import functional as F
from typing import Union
from math import log, pi, sqrt
from IPython.display import display, Audio
import numpy as np
import librosa
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
device = torch.device("cpu")
if False and torch.cuda.is_available():
print('GPU found! 🎉')
device = torch.device("cuda")
```
Introduce auxiliary modules:
1. causal convolution – a simple convolution with `kernel_size` and `dilation` hyper-parameters, but working in a causal way (it does not look into the future)
2. residual block – the main building block of the WaveNet architecture
Yes, WaveNet is everywhere. We could build MAF and IAF with any architecture, but WaveNet has established itself as a simple yet powerful one. We will use WaveNet conditioned on mel spectrograms, because we are building a vocoder.
```
class CausalConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
super(CausalConv, self).__init__()
self.padding = dilation * (kernel_size - 1)
self.conv = nn.Conv1d(
in_channels,
out_channels,
kernel_size,
padding=self.padding,
dilation=dilation)
self.conv = nn.utils.weight_norm(self.conv)
nn.init.kaiming_normal_(self.conv.weight)
def forward(self, x):
x = self.conv(x)
x = x[:, :, :-self.padding]
return x
class ResBlock(nn.Module):
def __init__(self, in_channels, out_channels, skip_channels, kernel_size, dilation, cin_channels):
super(ResBlock, self).__init__()
self.cin_channels = cin_channels
self.filter_conv = CausalConv(in_channels, out_channels, kernel_size, dilation)
self.gate_conv = CausalConv(in_channels, out_channels, kernel_size, dilation)
self.res_conv = nn.Conv1d(out_channels, in_channels, kernel_size=1)
self.skip_conv = nn.Conv1d(out_channels, skip_channels, kernel_size=1)
self.res_conv = nn.utils.weight_norm(self.res_conv)
self.skip_conv = nn.utils.weight_norm(self.skip_conv)
nn.init.kaiming_normal_(self.res_conv.weight)
nn.init.kaiming_normal_(self.skip_conv.weight)
self.filter_conv_c = nn.Conv1d(cin_channels, out_channels, kernel_size=1)
self.gate_conv_c = nn.Conv1d(cin_channels, out_channels, kernel_size=1)
self.filter_conv_c = nn.utils.weight_norm(self.filter_conv_c)
self.gate_conv_c = nn.utils.weight_norm(self.gate_conv_c)
nn.init.kaiming_normal_(self.filter_conv_c.weight)
nn.init.kaiming_normal_(self.gate_conv_c.weight)
def forward(self, x, c=None):
h_filter = self.filter_conv(x)
h_gate = self.gate_conv(x)
h_filter += self.filter_conv_c(c)
h_gate += self.gate_conv_c(c)
out = torch.tanh(h_filter) * torch.sigmoid(h_gate)
res = self.res_conv(out)
skip = self.skip_conv(out)
return (x + res) * sqrt(0.5), skip
```
For WaveNet it doesn't matter what it is used for: MAF or IAF - it all depends on our interpretation of the input and output variables.
Below is the WaveNet architecture that you are already familiar with from the last seminar. But this time, you will need to implement not inference but forward pass - and it's very simple 😉.
```
class WaveNet(nn.Module):
def __init__(self, params):
super(WaveNet, self). __init__()
self.front_conv = nn.Sequential(
CausalConv(1, params.residual_channels, params.front_kernel_size),
nn.ReLU())
self.res_blocks = nn.ModuleList()
for b in range(params.num_blocks):
for n in range(params.num_layers):
self.res_blocks.append(ResBlock(
in_channels=params.residual_channels,
out_channels=params.gate_channels,
skip_channels=params.skip_channels,
kernel_size=params.kernel_size,
dilation=2 ** n,
cin_channels=params.mel_channels))
self.final_conv = nn.Sequential(
nn.ReLU(),
nn.Conv1d(params.skip_channels, params.skip_channels, kernel_size=1),
nn.ReLU(),
nn.Conv1d(params.skip_channels, params.out_channels, kernel_size=1))
def forward(self, x, c):
# x: input tensor with signal or noise [B, 1, T]
# c: local conditioning [B, C_mel, T]
out = 0
################################################################################
x = self.front_conv(x)
for b in range(len(self.res_blocks)):
x, x_skip = self.res_blocks[b](x, c)
out = out + x_skip
out = self.final_conv(out)
################################################################################
return out
# check that works and gives expected output size
# full correctness we will check later, when the whole network will be assembled
class Params:
mel_channels: int = 80
num_blocks: int = 4
num_layers: int = 6
out_channels: int = 3
front_kernel_size: int = 2
residual_channels: int = 64
gate_channels: int = 64
skip_channels: int = 128
kernel_size: int = 2
net = WaveNet(Params()).to(device).eval()
with torch.no_grad():
z = torch.FloatTensor(5, 1, 4096).normal_().to(device)
c = torch.FloatTensor(5, 80, 4096).zero_().to(device)
assert list(net(z, c).size()) == [5, 3, 4096]
```
Excellent 👍! Now we are ready to get started on more complex and interesting things.
Do you remember our talks about vocoders built on IAF (Parallel WaveNet or the ClariNet vocoder)? We mentioned in passing that in an IAF we use not just one WaveNet (predicting mu and sigma), but a stack of WaveNets. Let's actually implement this stack, but first, a few formulas that will help you.
Consider transformations of random variable $z^{(0)} \sim \mathcal{N}(0, I)$:
$$z^{(0)} \rightarrow z^{(1)} \rightarrow \dots \rightarrow z^{(n)}.$$
Each transformation has the form:
$$ z^{(k)} = f^{(k)}(z^{(k-1)}) = z^{(k-1)} \cdot \sigma^{(k)} + \mu^{(k)},$$
where $\mu^{(k)}_t = \mu(z_{<t}^{(k-1)}; \theta_k)$ and $\sigma^{(k)}_t = \sigma(z_{<t}^{(k-1)}; \theta_k)$ – are shifting and scaling variables modeled by a Gaussan WaveNet.
It is easy to deduce that the whole transformation $f^{(k)} \circ \dots \circ f^{(2)} \circ f^{(1)}$ can be represented as $f^{(\mathrm{total})}(z) = z \cdot \sigma^{(\mathrm{total})} + \mu^{(\mathrm{total})}$, where
$$\sigma^{(\mathrm{total})} = \prod_{k=1}^n \sigma^{(k)}, ~ ~ ~ \mu^{(\mathrm{total})} = \sum_{k=1}^n \mu^{(k)} \prod_{j > k}^n \sigma^{(j)} $$
We will need $\mu^{(\mathrm{total})}$ and $\sigma^{(\mathrm{total})}$ later for estimating $p(\hat x | z)$.
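A quick numerical sanity check of these composition formulas, with two made-up affine steps and constant parameters (in the real model $\mu^{(k)}$ and $\sigma^{(k)}$ depend on $z^{(k-1)}_{<t}$, but the algebra is the same):
```
import torch

torch.manual_seed(0)
z = torch.randn(1, 1, 8)

# two made-up affine flow steps z -> z * sigma_k + mu_k
mu_1, sigma_1 = torch.randn(1, 1, 8), torch.rand(1, 1, 8) + 0.5
mu_2, sigma_2 = torch.randn(1, 1, 8), torch.rand(1, 1, 8) + 0.5

step_by_step = (z * sigma_1 + mu_1) * sigma_2 + mu_2

sigma_tot = sigma_1 * sigma_2          # product of the per-step scales
mu_tot = mu_1 * sigma_2 + mu_2         # each shift scaled by all later sigmas
collapsed = z * sigma_tot + mu_tot

assert torch.allclose(step_by_step, collapsed)
```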
You need to **implement** the `forward` method of the `WaveNetFlows` model.
📝 Notes:
1. WaveNet outputs tensor `output` of size `[B, 2, T]`, where `output[:, 0, :]` is $\mu$ and `output[:, 1, :]` is $\log \sigma$. We model logarithms of $\sigma$ insead of $\sigma$ for stable gradients.
2. Since we model $\mu(z_{<t}^{(k-1)}; \theta_k)$ and $\sigma(z_{<t}^{(k-1)}; \theta_k)$, their outputs have length `T - 1`. To keep the modelled noise variable at a constant length `T`, we pad them on the left (with zero).
3. $\mu^{(\mathrm{total})}$ and $\sigma^{(\mathrm{total})}$ will have length `T - 1`, because we do not pad the distribution parameters.
```
class WaveNetFlows(nn.Module):
def __init__(self, params):
super(WaveNetFlows, self).__init__()
self.device = params.device
self.iafs = nn.ModuleList()
for i in range(params.num_flows):
self.iafs.append(WaveNet(params))
def forward(self, z, c):
# z: random sample from standart distribution [B, 1, T]
# c: local conditioning for WaveNet [B, C_mel, T]
mu_tot, logs_tot = 0., 0.
################################################################################
mus, log_sigmas = [], []
for iaf in self.iafs:
out_i = iaf(z, c)
mu = out_i[:, 0, :-1].unsqueeze(1)
mu_padded = torch.cat([torch.zeros((*(z.shape[:-1]), 1), dtype=torch.float32).to(device), mu], axis=-1)
mus.append(mu)
log_sigma = out_i[:, 1, :-1].unsqueeze(1)
log_sigma_padded = torch.cat([torch.zeros((*(z.shape[:-1]), 1), dtype=torch.float32).to(device), log_sigma], axis=-1)
log_sigmas.append(log_sigma)
z = torch.exp(log_sigma_padded) * z + mu_padded
logs_tot = torch.sum(torch.stack(log_sigmas, axis=0), axis=0)
for i in range(len(self.iafs) - 1):
mu_tot += mus[i] * torch.exp(torch.sum(torch.stack(log_sigmas[i + 1:], axis=0), axis=0))
mu_tot += mus[-1]
################################################################################
return z, mu_tot, logs_tot
class Params:
num_flows: int = 4
mel_channels: int = 80
num_blocks: int = 1
num_layers: int = 5
out_channels: int = 2
front_kernel_size: int = 2
residual_channels: int = 64
gate_channels: int = 64
skip_channels: int = 64
kernel_size: int = 3
device = device
net = WaveNetFlows(Params()).to(device)
with torch.no_grad():
z = torch.FloatTensor(3, 1, 4096).normal_().to(device)
c = torch.FloatTensor(3, 80, 4096).zero_().to(device)
z_hat, mu, log_sigma = net(z, c)
assert list(z_hat.size()) == [3, 1, 4096] # same length as input
assert list(mu.size()) == [3, 1, 4096 - 1] # shorter by one sample
assert list(log_sigma.size()) == [3, 1, 4096 - 1] # shorted by one sample
```
If you are not familiar with the VAE framework, please take some time to understand it; this [blog post](https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/) is a good starting point.
In short, a VAE is a "modification" of the autoencoder, which consists of an encoder and a decoder. A VAE lets you sample from the data distribution $p(x)$ by first sampling $z \sim p(z)$ and then $x \sim p(x|z)$ via its decoder, where $p(z)$ is simple and known, e.g. $\mathcal{N}(0, I)$. The interesting part is that the model cannot be trained by directly maximizing the likelihood, because the marginal likelihood $p(x) = \int p(x|z)\,p(z)\,dz$ is intractable.
But we can maximize the Evidence Lower Bound (ELBO), which has the form:
$$\max_{\phi, \theta} \mathbb{E}_{q_{\phi}(z | x)} \log p_{\theta}(x | z) - \mathbb{D}_{KL}(q_{\phi}(z | x) || p(z))$$
where $p_{\theta}(x | z)$ is VAE decoder and $q_{\phi}(z | x)$ is VAE encoder. For more details please read mentioned blog post or any other materials on this theme.
In our case $q_{\phi}(z | x)$ is represented by the MAF WaveNet, and $p_{\theta}(x | z)$ by the IAF built from a stack of WaveNets. To be more precise, our decoder $p_{\theta}(x | z)$ is parametrised by the **one-step-ahead prediction** from the IAF.
🧑💻 **let's practice..**
We will start with the easy part: generation (or sampling).
**Implement** the `generate` method, which accepts a mel spectrogram as the conditioning tensor. Inside this method a random tensor from the standard distribution N(0, I) is sampled and then transformed into a tensor from the audio distribution via the `decoder`. In the cell below you will see code for loading the pretrained model and a mel spectrogram. Listen to the result – it should sound passable, but don't expect MOS 5.0. 😄
```
class WaveNetVAE(nn.Module):
def __init__(self, encoder_params, decoder_params):
super(WaveNetVAE, self).__init__()
assert encoder_params.device == decoder_params.device
self.device = encoder_params.device
self.mse_loss = torch.nn.MSELoss()
self.encoder = WaveNet(encoder_params)
self.decoder = WaveNetFlows(decoder_params)
self.log_eps = nn.Parameter(torch.zeros(1))
self.upsample_conv = nn.ModuleList()
for s in [16, 16]:
conv = nn.ConvTranspose2d(1, 1, (3, 2 * s), padding=(1, s // 2), stride=(1, s))
conv = nn.utils.weight_norm(conv)
nn.init.kaiming_normal_(conv.weight)
self.upsample_conv.append(conv)
self.upsample_conv.append(nn.LeakyReLU(0.4))
    def forward(self, x, c):
        # x: audio signal [B, 1, T]
        # c: mel spectrogram [B, C_mel, T / HOP_SIZE]
        loss_rec = 0
        loss_kl = 0
        loss_frame_rec = 0
        loss_frame_prior = 0
        ################################################################################
        c_up = self.upsample(c)
        mu_log_sigma = self.encoder(x, c_up)
        mu = mu_log_sigma[:, 0, :].unsqueeze(1)
        log_sigma = mu_log_sigma[:, 1, :].unsqueeze(1)
        # "whitened" posterior mean: (x_t - mu(x_<t)) / sigma(x_<t)
        mu_whitened = (x - mu) / torch.exp(log_sigma)
        # reparameterization trick: the posterior std is the global trainable eps,
        # not the encoder's per-timestep sigma
        eps = torch.randn_like(log_sigma).to(self.device)
        z = mu_whitened + torch.exp(self.log_eps) * eps
        x_rec, mu_tot, log_sigma_tot = self.decoder(z, c_up)
        x_prior = self.generate(c)
        # NB: librosa works on detached numpy arrays, so no gradients flow through the
        # two frame-level losses below (torch.stft would make them differentiable)
        x_rec_stft = np.abs(librosa.stft(x_rec.view(-1).detach().numpy()))
        x_stft = np.abs(librosa.stft(x.view(-1).detach().numpy()))
        x_prior_stft = np.abs(librosa.stft(x_prior.view(-1).detach().numpy()))
        loss_frame_rec = self.mse_loss(torch.FloatTensor(x_stft), torch.FloatTensor(x_rec_stft))
        loss_frame_prior = self.mse_loss(torch.FloatTensor(x_stft), torch.FloatTensor(x_prior_stft))
        # KL(q(z|x) || N(0, I)) with q_t = N(mu_whitened_t, eps)
        loss_kl = torch.sum(-self.log_eps + (1 / 2.0) * (torch.exp(self.log_eps) ** 2.0 - 1.0 + mu_whitened ** 2.0))
        # reconstruction term: negative log-likelihood of the ground-truth audio under the
        # one-step-ahead Gaussian predicted by the IAF (mu_tot / log_sigma_tot are one sample shorter)
        loss_rec = torch.sum(
            log_sigma_tot + 0.5 * log(2.0 * pi)
            + 0.5 * ((x[:, :, 1:] - mu_tot) / torch.exp(log_sigma_tot)) ** 2)
        ################################################################################
        alpha = 1e-9 # for annealing during training
        return loss_rec + alpha * loss_kl + loss_frame_rec + loss_frame_prior
def generate(self, c):
# c: mel spectrogram [B, 80, L] where L - number of mel frames
# outputs: audio [B, 1, L * HOP_SIZE]
################################################################################
c_up = self.upsample(c)
frames_number = c_up.shape[-1]
z = torch.randn(c_up.shape[0], 1, frames_number).to(self.device)
x_sample, _, _ = self.decoder(z, c_up)
################################################################################
return x_sample
def upsample(self, c):
c = c.unsqueeze(1) # [B, 1, C, L]
for f in self.upsample_conv:
c = f(c)
c = c.squeeze(1) # [B, C, T], where T = L * HOP_SIZE
return c
# saved checkpoint model has following architecture parameters
class ParamsMAF:
mel_channels: int = 80
num_blocks: int = 2
num_layers: int = 10
out_channels: int = 2
front_kernel_size: int = 32
residual_channels: int = 128
gate_channels: int = 256
skip_channels: int = 128
kernel_size: int = 2
device: str = device
class ParamsIAF:
num_flows: int = 6
mel_channels: int = 80
num_blocks: int = 1
num_layers: int = 10
out_channels: int = 2
front_kernel_size: int = 32
residual_channels: int = 64
gate_channels: int = 128
skip_channels: int = 64
kernel_size: int = 3
device: str = device
# load checkpoint
ckpt_path = 'data/checkpoint.pth'
net = WaveNetVAE(ParamsMAF(), ParamsIAF()).eval().to(device)
ckpt = torch.load(ckpt_path, map_location='cpu')
net.load_state_dict(ckpt['state_dict'])
# load original audio and it's mel
x = torch.load('data/x.pth').to(device)
c = torch.load('data/c.pth').to(device)
# generate audio from
with torch.no_grad():
x_prior = net.generate(c.unsqueeze(0)).squeeze()
display(Audio(x_prior.cpu(), rate=22050))
```
If it sounds plausible, **5 points** 🥉 are already yours 🎉! And here comes the most interesting and difficult part: the loss function implementation. The `forward` method will return the loss. But let's talk more precisely about our architecture and how it was trained.
The encoder of our model $q_{\phi}(z|x)$ is parameterized by a Gaussian autoregressive WaveNet, which maps the audio $x$ into a latent representation $z$ of the same length. Specifically, the Gaussian WaveNet (if we talk about a **real MAF**) models $x_t$ given the previous samples $x_{<t}$ as $x_t \sim \mathcal{N}(\mu(x_{<t}; \phi), \sigma(x_{<t}; \phi))$, where the mean $\mu(x_{<t}; \phi)$ and log-scale $\log \sigma(x_{<t}; \phi)$ are both predicted by the WaveNet.
Our **encoder** posterior is constructed as
$$q_{\phi}(z | x) = \prod_{t} q_{\phi}(z_t | x_{\leq t})$$
where
$$q_{\phi}(z_t | x_{\leq t}) = \mathcal{N}(\frac{x_t - \mu(x_{<t}; \phi)}{\sigma(x_{<t}; \phi)}, \varepsilon)$$
We apply the mean $\mu(x_{<t}; \phi)$ and scale $\sigma(x_{<t}; \phi)$ to "whiten" the posterior distribution. We also introduce a trainable scalar $\varepsilon > 0$ to decouple the global variation, which makes the optimization process easier.
Substituting our model's formulas into the $\mathbb{D}_{KL}$ expression gives:
$$\mathbb{D}_{KL}(q_{\phi}(z | x) || p(z)) = \sum_t \log\frac{1}{\varepsilon} + \frac{1}{2}(\varepsilon^2 - 1 + (\frac{x_t - \mu(x_{<t})}{\sigma(x_{<t})})^2)$$
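(This is just the closed-form KL divergence between two Gaussians, $\mathbb{D}_{KL}(\mathcal{N}(m, \varepsilon^2) \,\|\, \mathcal{N}(0, 1)) = -\log \varepsilon + \frac{1}{2}(\varepsilon^2 + m^2 - 1)$, applied per timestep with $m = \frac{x_t - \mu(x_{<t})}{\sigma(x_{<t})}$ and summed over $t$; note that $\log\frac{1}{\varepsilon} = -\log\varepsilon$.)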
**Implement** the calculation of `loss_kl` in the `forward` method as this KL divergence.
---
The other term in the ELBO formula can be interpreted as a reconstruction loss. It can be evaluated by sampling from $p_{\theta}(x | z)$, where $z$ comes from $q_{\phi}(z | \hat x)$ and $\hat x$ is our ground-truth audio. But sampling is not a differentiable operation! 🤔 We can apply the reparameterization trick!
**Implement** the calculation of `loss_rec` in the `forward` method as the reconstruction loss – the (negative) log-likelihood of the ground-truth sample $x$ under the distribution $p_{\theta}(x | \hat z)$ predicted by the IAF, where $\hat z \sim q_{\phi}(z | \hat x)$.
---
Vocoders that are not trained with MLE are still unable to train well without auxiliary losses. We studied many of them, but the STFT loss is our favourite!
**Implement** the calculation of `loss_frame_rec`: the MSE loss in the STFT domain between the original audio and its reconstruction.
---
We can go even further and calculate the STFT loss on a random sample from $p_\theta(x | z)$. Conditioning on the mel spectrogram allows us to do so.
**Implement** the calculation of `loss_frame_prior`: the MSE loss in the STFT domain between the original audio and a sample from the prior.
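One caveat before you start: if the spectrograms are computed with `librosa` on detached numpy arrays, no gradients flow through the frame-level terms. A minimal differentiable sketch using `torch.stft` is shown below; the `n_fft` and `hop_length` values are arbitrary assumptions, not the settings used for the provided checkpoint:
```
def stft_magnitude(x, n_fft=1024, hop_length=256):
    # x: [B, 1, T] -> magnitude spectrogram [B, n_fft // 2 + 1, frames], differentiable
    spec = torch.stft(
        x.squeeze(1), n_fft=n_fft, hop_length=hop_length,
        window=torch.hann_window(n_fft).to(x.device),
        return_complex=False)  # last dim holds (real, imag)
    return torch.sqrt(spec.pow(2).sum(-1) + 1e-9)

def frame_mse(x, x_hat):
    # MSE between magnitude spectrograms of the reference and generated audio
    return F.mse_loss(stft_magnitude(x_hat), stft_magnitude(x))
```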
```
net = WaveNetVAE(ParamsMAF(), ParamsIAF()).to(device).train()
x = x[:64 * 256]
c = c[:, :64]
net.zero_grad()
loss = net.forward(x.unsqueeze(0).unsqueeze(0), c.unsqueeze(0))
loss.backward()
print(f"Initial loss: {loss.item():.2f}")
ckpt = torch.load(ckpt_path, map_location='cpu')
net.load_state_dict(ckpt['state_dict'])
net.zero_grad()
loss = net.forward(x.unsqueeze(0).unsqueeze(0), c.unsqueeze(0))
loss.backward()
print(f"Optimized loss: {loss.item():.2f}")
```
If you correctly implemented losses and the backward pass works smoothly, **8 more points**🥈 are yours 🎉!
For **2 additional points** 🥇 please write a short essay (in Russian) about your thoughts on vocoders. Try to avoid obvious statements such as "a vocoder is a very important part of the TTS pipeline". We are interested in the insights you've gained from studying vocoders.
`YOUR TEXT HERE`
# Trigger Examples
Triggers allow the user to specify a set of actions that are triggered by the result of a boolean expression.
They provide flexibility to adapt what analysis and visualization actions are taken in situ. Triggers leverage Ascent's Query and Expression infrastructure. See Ascent's [Triggers](https://ascent.readthedocs.io/en/latest/Actions/Triggers.html) docs for deeper details on Triggers.
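The two actions files referenced below (`cycle_trigger_actions.yaml` and `entropy_trigger_actions.yaml`) are not shown in this notebook. Purely as an illustration (the actual contents of those files are an assumption here), a triggered actions list can be built the same way as any other Ascent actions tree, for example a scene that renders a pseudocolor plot of the `gyre` field; its YAML form is roughly what such a file would contain:
```
import conduit

trigger_actions = conduit.Node()

add_scenes = trigger_actions.append()
add_scenes["action"] = "add_scenes"
scenes = add_scenes["scenes"]
scenes["s1/plots/p1/type"] = "pseudocolor"
scenes["s1/plots/p1/field"] = "gyre"
scenes["s1/image_prefix"] = "cycle_trigger_out_"

# the printed YAML is what a trigger actions file would look like
print(trigger_actions.to_yaml())
```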
```
# cleanup any old results
!./cleanup.sh
# ascent + conduit imports
import conduit
import conduit.blueprint
import ascent
import numpy as np
# helpers we use to create tutorial data
from ascent_tutorial_jupyter_utils import img_display_width
from ascent_tutorial_jupyter_utils import tutorial_gyre_example
import matplotlib.pyplot as plt
```
## Trigger Example 1
### Using triggers to render when conditions occur
```
# Use triggers to render when conditions occur
a = ascent.Ascent()
a.open()
# setup actions
actions = conduit.Node()
# declare a question to ask
add_queries = actions.append()
add_queries["action"] = "add_queries"
# add our entropy query (q1)
queries = add_queries["queries"]
queries["q1/params/expression"] = "entropy(histogram(field('gyre'), num_bins=128))"
queries["q1/params/name"] = "entropy"
# declare triggers
add_triggers = actions.append()
add_triggers["action"] = "add_triggers"
triggers = add_triggers["triggers"]
# add a simple trigger (t1) that fires at cycle 500
triggers["t1/params/condition"] = "cycle() == 500"
triggers["t1/params/actions_file"] = "cycle_trigger_actions.yaml"
# add a trigger (t2) that fires when the change in entropy exceeds 0.5
# the history function allows you to access query results of previous
# cycles. relative_index indicates how far back in history to look.
# Looking at the plot of gyre entropy in the previous notebook, we see a jump
# in entropy at cycle 200, so we expect the trigger to fire at cycle 200
triggers["t2/params/condition"] = "entropy - history(entropy, relative_index = 1) > 0.5"
triggers["t2/params/actions_file"] = "entropy_trigger_actions.yaml"
# view our full actions tree
print(actions.to_yaml())
# gyre time varying params
nsteps = 10
time = 0.0
delta_time = 0.5
for step in range(nsteps):
# call helper that generates a double gyre time varying example mesh.
# gyre ref :https://shaddenlab.berkeley.edu/uploads/LCS-tutorial/examples.html
mesh = tutorial_gyre_example(time)
# update the example cycle
cycle = 100 + step * 100
mesh["state/cycle"] = cycle
print("time: {} cycle: {}".format(time,cycle))
# publish mesh to ascent
a.publish(mesh)
# execute the actions
a.execute(actions)
# update time
time = time + delta_time
# retrieve the info node that contains the trigger and query results
info = conduit.Node()
a.info(info)
# close ascent
a.close()
# we expect our cycle trigger to render only at cycle 500
! ls cycle_trigger*.png
# show the result image from the cycle trigger
ascent.jupyter.AscentImageSequenceViewer(["cycle_trigger_out_500.png"]).show()
# we expect our entropy trigger to render only at cycle 200
! ls entropy_trigger*.png
# show the result image from the entropy trigger
ascent.jupyter.AscentImageSequenceViewer(["entropy_trigger_out_200.png"]).show()
print(info["expressions"].to_yaml())
```
## These notebooks can be found at https://github.com/jaspajjr/pydata-visualisation if you want to follow along
https://matplotlib.org/users/intro.html
Matplotlib is a library for making 2D plots of arrays in Python.
* It has its origins in emulating MATLAB, but it can also be used in a Pythonic, object-oriented way.
* Easy stuff should be easy, difficult stuff should be possible
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
```
Everything in matplotlib is organized in a hierarchy. At the top of the hierarchy is the matplotlib “state-machine environment” which is provided by the matplotlib.pyplot module. At this level, simple functions are used to add plot elements (lines, images, text, etc.) to the current axes in the current figure.
Pyplot’s state-machine environment behaves similarly to MATLAB and should be most familiar to users with MATLAB experience.
The next level down in the hierarchy is the first level of the object-oriented interface, in which pyplot is used only for a few functions such as figure creation, and the user explicitly creates and keeps track of the figure and axes objects. At this level, the user uses pyplot to create figures, and through those figures, one or more axes objects can be created. These axes objects are then used for most plotting actions.
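A minimal sketch of the same tiny plot written both ways (nothing here depends on this notebook's data):
```
# state-machine (pyplot) style: functions implicitly act on the "current" figure/axes
plt.figure()
plt.plot([0, 1, 2], [0, 1, 4])
plt.title('pyplot style')

# object-oriented style: keep explicit references to the figure and axes objects
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_title('object-oriented style')
```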
## Scatter Plot
To start with let's do a really basic scatter plot:
```
plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 6, 8, 10])
x = [0, 1, 2, 3, 4, 5]
y = [0, 2, 4, 6, 8, 10]
plt.plot(x, y)
```
What if we don't want a line?
```
plt.plot([0, 1, 2, 3, 4, 5],
[0, 2, 5, 7, 8, 10],
marker='o',
linestyle='')
plt.xlabel('The X Axis')
plt.ylabel('The Y Axis')
plt.show();
```
#### Simple example from matplotlib
https://matplotlib.org/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py
```
def example_plot(ax, fontsize=12):
ax.plot([1, 2])
ax.locator_params(nbins=5)
ax.set_xlabel('x-label', fontsize=fontsize)
ax.set_ylabel('y-label', fontsize=fontsize)
ax.set_title('Title', fontsize=fontsize)
fig, ax = plt.subplots()
example_plot(ax, fontsize=24)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
# fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
ax1.plot([0, 1, 2, 3, 4, 5],
[0, 2, 5, 7, 8, 10])
ax2.plot([0, 1, 2, 3, 4, 5],
[0, 2, 4, 9, 16, 25])
ax3.plot([0, 1, 2, 3, 4, 5],
[0, 13, 18, 21, 23, 25])
ax4.plot([0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5])
plt.tight_layout()
```
## Date Plotting
```
import pandas_datareader as pdr
df = pdr.get_data_fred('GS10')
df = df.reset_index()
print(df.info())
df.head()
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot_date(df['DATE'], df['GS10'])
```
## Bar Plot
```
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
x_data = [0, 1, 2, 3, 4]
values = [20, 35, 30, 35, 27]
ax.bar(x_data, values)
ax.set_xticks(x_data)
ax.set_xticklabels(('A', 'B', 'C', 'D', 'E'));  # trailing ';' just suppresses the printed return value
```
## Matplotlib basics
http://pbpython.com/effective-matplotlib.html
### Behind the scenes
* matplotlib.backend_bases.FigureCanvas is the area onto which the figure is drawn
* matplotlib.backend_bases.Renderer is the object which knows how to draw on the FigureCanvas
* matplotlib.artist.Artist is the object that knows how to use a renderer to paint onto the canvas
The typical user will spend 95% of their time working with the Artists.
https://matplotlib.org/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py
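A small sketch showing that everything on a figure is an Artist you can hold a reference to and modify after creation:
```
fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 4])   # ax.plot returns a list of Line2D artists
print(type(fig), type(ax), type(line))

line.set_linewidth(3)        # artists can be changed after they are created
line.set_color('red')
ax.set_title('Artists everywhere')
```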
```
fig, (ax1, ax2) = plt.subplots(
nrows=1,
ncols=2,
sharey=True,
figsize=(12, 8))
fig.suptitle("Main Title", fontsize=14, fontweight='bold');
x_data = [0, 1, 2, 3, 4]
values = [20, 35, 30, 35, 27]
ax1.barh(x_data, values);
ax1.set_xlim([0, 55])
#ax1.set(xlabel='Unit of measurement', ylabel='Groups')
ax1.set(title='Foo', xlabel='Unit of measurement')
ax1.grid()
ax2.barh(x_data, [y / np.sum(values) for y in values], color='r');
ax2.set_title('Transformed', fontweight='light')
ax2.axvline(x=.1, color='k', linestyle='--')
ax2.set(xlabel='Unit of measurement') # Worth noticing this
ax2.set_axis_off();
fig.savefig('example_plot.png', dpi=80, bbox_inches="tight")
```
# Procedures and Functions Tutorial
MLDB is the Machine Learning Database, and all machine learning operations are done via Procedures and Functions. Training a model happens via Procedures, and applying a model happens via Functions.
The notebook cells below use `pymldb`'s `Connection` class to make [REST API](../../../../doc/#builtin/WorkingWithRest.md.html) calls. You can check out the [Using `pymldb` Tutorial](../../../../doc/nblink.html#_tutorials/Using pymldb Tutorial) for more details.
```
from pymldb import Connection
mldb = Connection("http://localhost")
```
## Loading a Dataset
The classic [Iris Flower Dataset](http://en.wikipedia.org/wiki/Iris_flower_data_set) isn't very big but it's well-known and easy to reason about so it's a good example dataset to use for machine learning examples.
We can import it directly from a remote URL:
```
mldb.put('/v1/procedures/import_iris', {
"type": "import.text",
"params": {
"dataFileUrl": "file://mldb/mldb_test_data/iris.data",
"headers": [ "sepal length", "sepal width", "petal length", "petal width", "class" ],
"outputDataset": "iris",
"runOnCreation": True
}
})
```
## A quick look at the data
We can use the [Query API](../../../../doc/#builtin/sql/QueryAPI.md.html) to get the data into a Pandas DataFrame to take a quick look at it.
```
df = mldb.query("select * from iris")
df.head()
%matplotlib inline
import seaborn as sns, pandas as pd
sns.pairplot(df, hue="class", size=2.5)
```
## Unsupervised Machine Learning with a `kmeans.train` Procedure
We will create and run a [Procedure](../../../../doc/#builtin/procedures/Procedures.md.html) of type [`kmeans.train`](../../../../doc/#builtin/procedures/KmeansProcedure.md.html). This will train an unsupervised K-Means model and use it to assign each row in the input to a cluster, in the output dataset.
```
mldb.put('/v1/procedures/iris_train_kmeans', {
'type' : 'kmeans.train',
'params' : {
'trainingData' : 'select * EXCLUDING(class) from iris',
'outputDataset' : 'iris_clusters',
'numClusters' : 3,
'metric': 'euclidean',
"runOnCreation": True
}
})
```
Now we can look at the output dataset and compare the clusters the model learned with the three types of flower in the dataset.
```
mldb.query("""
select pivot(class, num) as *
from (
select cluster, class, count(*) as num
from merge(iris_clusters, iris)
group by cluster, class
)
group by cluster
""")
```
As you can see, the K-means algorithm doesn't do a great job of clustering this data (as is mentioned in the Wikipedia article!).
## Supervised Machine Learning with `classifier.train` and `.test` Procedures
We will now create and run a [Procedure](../../../../doc/#builtin/procedures/Procedures.md.html) of type [`classifier.train`](../../../../doc/#builtin/procedures/Classifier.md.html). The configuration below will use 20% of the data to train a decision tree to classify rows into the three classes of Iris. The output of this procedure is a [Function](../../../../doc/#builtin/functions/Functions.md.html), which we will be able to call from REST or SQL.
```
mldb.put('/v1/procedures/iris_train_classifier', {
'type' : 'classifier.train',
'params' : {
'trainingData' : """
select
{* EXCLUDING(class)} as features,
class as label
from iris
where rowHash() % 5 = 0
""",
"algorithm": "dt",
"modelFileUrl": "file://models/iris.cls",
"mode": "categorical",
"functionName": "iris_classify",
"runOnCreation": True
}
})
```
We can now test the classifier we just trained on the subset of the data we didn't use for training. To do so we use a procedure of type [`classifier.test`](../../../../doc/#builtin/procedures/Accuracy.md.html).
```
rez = mldb.put('/v1/procedures/iris_test_classifier', {
'type' : 'classifier.test',
'params' : {
'testingData' : """
select
iris_classify({
features: {* EXCLUDING(class)}
}) as score,
class as label
from iris
where rowHash() % 5 != 0
""",
"mode": "categorical",
"runOnCreation": True
}
})
runResults = rez.json()["status"]["firstRun"]["status"]
print(rez)
```
The procedure returns a confusion matrix, which you can compare with the one that resulted from the K-means procedure.
```
pd.DataFrame(runResults["confusionMatrix"])\
.pivot_table(index="actual", columns="predicted", fill_value=0)
```
As you can see, the decision tree does a much better job of classifying the data than the K-means model, even though it used only 20% of the examples as training data.
The procedure also returns standard classification statistics on how the classifier performed on the test set. Below are performance statistics for each label:
```
pd.DataFrame.from_dict(runResults["labelStatistics"]).transpose()
```
They are also available, averaged over all labels:
```
pd.DataFrame.from_dict({"weightedStatistics": runResults["weightedStatistics"]})
```
### Scoring new examples
We can call the Function REST API endpoint to classify a never-before-seen set of measurements like this:
```
mldb.get('/v1/functions/iris_classify/application', input={
"features":{
"petal length": 1,
"petal width": 2,
"sepal length": 3,
"sepal width": 4
}
})
```
## Where to next?
Check out the other [Tutorials and Demos](../../../../doc/#builtin/Demos.md.html).
You can also take a look at the [`classifier.experiment`](../../../../doc/#builtin/procedures/ExperimentProcedure.md.html) procedure type that can be used to train and test a classifier in a single call.
# Assignment 2 - Elementary Probability and Information Theory
# Boise State University NLP - Dr. Kennington
### Instructions and Hints:
* This notebook loads some data into a `pandas` dataframe, then does a small amount of preprocessing. Make sure your data can load by stepping through all of the cells up until question 1.
* Most of the questions require you to write some code. In many cases, you will write some kind of probability function like we did in class using the data.
* Some of the questions only require you to write answers, so be sure to change the cell type to markdown or raw text
* Don't worry about normalizing the text this time (e.g., lowercase, etc.). Just focus on probabilies.
* Most questions can be answered in a single cell, but you can make as many additional cells as you need.
* Follow the instructions on the corresponding assignment Trello card for submitting your assignment.
```
import pandas as pd
data = pd.read_csv('pnp-train.txt',delimiter='\t',encoding='latin-1', # utf8 encoding didn't work for this
names=['type','name']) # supply the column names for the dataframe
# this next line creates a new column with the lower-cased first word
data['first_word'] = data['name'].map(lambda x: x.lower().split()[0])
data[:10]
data.describe()
```
## 1. Write a probability function/distribution $P(T)$ over the types.
Hints:
* The Counter library might be useful: `from collections import Counter`
* Write a function `def P(T='')` that returns the probability of the specific value for T
* You can access the types from the dataframe by calling `data['type']`
```
from collections import Counter
def P(T=''):
global counts
global data
counts = Counter(data['type'])
return counts[T] / len(data['type'])
counts
```
## 2. What is `P(T='movie')` ?
```
P(T='movie')
```
## 3. Show that your probability distribution sums to one.
```
import numpy as np
round(np.sum([P(T=x) for x in set(data['type'])]), 4)
```
## 4. Write a joint distribution using the type and the first word of the name
Hints:
* The function is $P2(T,W_1)$
* You will need to count up types AND the first words, for example: ('person','bill')
* Using the [itertools.product](https://docs.python.org/2/library/itertools.html#itertools.product) function was useful for me here
```
def P2(T='', W1=''):
global count
count = data[['type', 'first_word']]
return len(count.loc[(count['type'] == T) & (count['first_word'] == W1)]) / len(count)
```
## 5. What is P2(T='person', W1='bill')? What about P2(T='movie',W1='the')?
```
P2(T='person', W1='bill')
P2(T='movie', W1='the')
```
## 6. Show that your probability distribution P(T,W1) sums to one.
```
types = Counter(data['type'])
words = Counter(data['first_word'])
retVal = 0
for x in types:
for y in words:
retVal = retVal + P2(T=x,W1=y)
print(round(retVal,4))
```
## 7. Make a new function Q(T) from marginalizing over P(T,W1) and make sure that Q(T) sums to one.
Hints:
* Your Q function will call P(T,W1)
* Your check for the sum to one should be the same answer as Question 3, only it calls Q instead of P.
```
def Q(T=''):
words = Counter(data['first_word'])
retVal = 0
for x in words:
retVal = retVal + P2(T,W1=x)
return retVal
Q('movie')
round(np.sum([Q(T=x) for x in set(data['type'])]), 4)
```
## 8. What is the KL Divergence of your Q function and your P function for Question 1?
* Even if you know the answer, you still need to write code that computes it.
I wasn't quite sure how to properly do this question, so it's only partially implemented, although I do know that the answer should be 0.0.
```
import math
(P('drug') * math.log(P('drug') / Q('drug')) + P('movie') * math.log(P('movie') / Q('movie')))
```
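Since `Q` was built by marginalizing `P2(T, W1)` over the first words, the two distributions over types are identical, so the KL divergence should indeed come out to 0.0. Here is a fuller sketch (the helper below is not part of the assignment starter code) that sums over every type rather than just two of them:
```
import math

def kl_divergence(p, q, support):
    """Compute D_KL(p || q) over the given support, skipping zero-probability terms."""
    total = 0.0
    for t in support:
        if p(t) > 0 and q(t) > 0:
            total += p(t) * math.log(p(t) / q(t))
    return total

# P and Q agree on every type, so this should print 0.0 (up to floating-point error)
print(kl_divergence(P, Q, set(data['type'])))
```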
## 9. Convert from P(T,W1) to P(W1|T)
Hints:
* Just write a comment cell, no code this time.
* Note that $P(T,W1) = P(W1,T)$
Given that $P(T,W_1) = P(W_1,T)$, we can infer that $P(W_1|T) = P(W_1,T)/P(T)$.
(try to use markdown math formating, answer in this cell)
## 10. Write a function `Pwt` (that calls the functions you already have) to compute $P(W_1|T)$.
* This will be something like the multiplication rule, but you may need to change something
```
def Pwt(W1='',T=''):
return P2(T=T,W1=W1)/P(T=T)
```
## 11. What is P(W1='the'|T='movie')?
```
Pwt(W1='the',T='movie')
```
## 12. Use Bayes' rule to convert from P(W1|T) to P(T|W1). Write a function Ptw to reflect this.
Hints:
* Call your other functions.
* You may need to write a function for P(W1) and you may need a new counter for `data['first_word']`
```
def Pw(W1=''):
words = Counter(data['first_word'])
return words[W1] / len(data['first_word'])
def Ptw(T='',W1=''):
return (Pwt(W1=W1,T=T)*P(T=T))/Pw(W1=W1)
```
## 13
### What is P(T='movie'|W1='the')?
### What about P(T='person'|W1='the')?
### What about P(T='drug'|W1='the')?
### What about P(T='place'|W1='the')
### What about P(T='company'|W1='the')
```
Ptw(T='movie',W1='the')
Ptw(T='person',W1='the')
Ptw(T='drug',W1='the')
Ptw(T='place',W1='the')
Ptw(T='company',W1='the')
```
## 14 Given this, if the word 'the' is found in a name, what is the most likely type?
```
Pwt('the', 'movie')
```
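For completeness, here is a small sketch (not part of the original answer) that compares P(T|W1='the') across every type using the `Ptw` function defined above and reports the largest:
```
# Compare P(T | W1='the') for every type and report the most likely one
posteriors = {t: Ptw(T=t, W1='the') for t in set(data['type'])}
print(posteriors)
print("Most likely type given first word 'the':", max(posteriors, key=posteriors.get))
```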
## 15. Is Ptw(T='movie'|W1='the') the same as Pwt(W1='the'|T='movie') the same? Why or why not?
```
Ptw(T='movie',W1='the')
Pwt(W1='the', T='movie')
```
They are not the same: Pwt(W1='the'|T='movie') is the probability that the first word is 'the' given that the name is a movie, whereas Ptw(T='movie'|W1='the') is the probability that the name is a movie given that its first word is 'the'. The conditioning is reversed, so the two values differ.
## 16. Do you think modeling Ptw(T|W1) would be better with a continuous function like a Gaussian? Why or why not?
- Answer in a markdown cell
No, I don't think modeling Ptw(T|W1) would be better with a continuous function like a Gaussian. The dataset we're given is finite, is likely not completely randomly distributed, and probably has some bias in it, so a continuous function might not yield the results we're looking for.
_Lambda School Data Science - Model Validation_
# Feature Selection
Objectives:
* Feature importance
* Feature selection
## Yesterday we saw that...
## Less isn't always more (but sometimes it is)
## More isn't always better (but sometimes it is)

Saabas, Ando [Feature Selection (4 parts)](https://blog.datadive.net/selecting-good-features-part-i-univariate-selection/)
>There are in general two reasons why feature selection is used:
1. Reducing the number of features, to reduce overfitting and improve the generalization of models.
2. To gain a better understanding of the features and their relationship to the response variables.
>These two goals are often at odds with each other and thus require different approaches: depending on the data at hand a feature selection method that is good for goal (1) isn’t necessarily good for goal (2) and vice versa. What seems to happen often though is that people use their favourite method (or whatever is most conveniently accessible from their tool of choice) indiscriminately, especially methods more suitable for (1) for achieving (2).
While they are not always mutually exclusive, here's a little bit about what's going on with these two goals
### Goal 1: Reducing Features, Reducing Overfitting, Improving Generalization of Models
This is when you're actually trying to engineer a packaged, machine learning pipeline that is streamlined and highly generalizable to novel data as more is collected, and you don't really care "how" it works as long as it does work.
Approaches that are good at this tend to fail at Goal 2 because they handle multicollinearity by (sometimes randomly) choosing/indicating just one of a group of strongly correlated features. This is good to reduce redundancy, but bad if you want to interpret the data.
### Goal 2: Gaining a Better Understanding of the Features and their Relationships
This is when you want a good, interpretable model or you're doing data science more for analysis than engineering. Company asks you "How do we increase X?" and you can tell them all the factors that correlate to it and their predictive power.
Approaches that are good at this tend to fail at Goal 1 because, well, they *don't* handle the multicollinearity problem. If three features are all strongly correlated to each other as well as the output, they will all have high scores. But including all three features in a model is redundant.
### Each part in Saavas's Blog series describes an increasingly complex (and computationally costly) set of methods for feature selection and interpretation.
The ultimate comparison is completed using an adaptation of a dataset called the Friedman #1 regression dataset, from Friedman, Jerome H.'s [Multivariate Adaptive Regression Splines](http://www.stat.ucla.edu/~cocteau/stat204/readings/mars.pdf).
>The data is generated according to the formula $y = 10\sin(\pi X_1 X_2) + 20(X_3 - 0.5)^2 + 10X_4 + 5X_5 + \epsilon$, where $X_1$ to $X_5$ are drawn from a uniform distribution and $\epsilon$ is the standard normal deviate $N(0,1)$. Additionally, the original dataset had five noise variables $X_6,\ldots,X_{10}$, independent of the response variable. We will increase the number of variables further and add four variables $X_{11},\ldots,X_{14}$, each of which is very strongly correlated with $X_1,\ldots,X_4$, respectively, generated by $f(x)=x+N(0,0.01)$. This yields a correlation coefficient of more than 0.999 between the variables. This will illustrate how different feature ranking methods deal with correlations in the data.
**Okay, that's a lot--here's what you need to know:**
1. $X_1$ and $X_2$ have the same non-linear relationship to $Y$ -- though together they do have a not-quite-linear relationship to $Y$ (with sinusoidal noise--but the range of the values doesn't let it get negative)
2. $X_3$ has a quadratic relationship with $Y$
3. $X_4$ and $X_5$ have linear relationships to $Y$, with $X_4$ being weighted twice as heavily as $X_5$
4. $X_6$ through $X_{10}$ are random and have NO relationship to $Y$
5. $X_{11}$ through $X_{14}$ correlate strongly to $X_1$ through $X_4$ respectively (and thus have the same respective relationships with $Y$)
This will help us see the difference between the models in selecting features and interpreting features
* how well they deal with multicollinearity (#5)
* how well they identify noise (#4)
* how well they identify different kinds of relationships
* how well they identify/interpret predictive power of individual variables.
```
# import
import numpy as np
# Create the dataset
# from https://blog.datadive.net/selecting-good-features-part-iv-stability-selection-rfe-and-everything-side-by-side/
np.random.seed(42)
size = 1500 # I increased the size from what's given in the link
Xs = np.random.uniform(0, 1, (size, 14))
# Changed variable name to Xs to use X later
#"Friedamn #1” regression problem
Y = (10 * np.sin(np.pi*Xs[:,0]*Xs[:,1]) + 20*(Xs[:,2] - .5)**2 +
10*Xs[:,3] + 5*Xs[:,4] + np.random.normal(0,1))
#Add 4 additional correlated variables (correlated with X1-X4)
Xs[:,10:] = Xs[:,:4] + np.random.normal(0, .025, (size,4))
names = ["X%s" % i for i in range(1,15)]
# Putting it into pandas--because... I like pandas. And usually you'll be
# working with dataframes not arrays (you'll care what the column titles are)
import pandas as pd
friedmanX = pd.DataFrame(data=Xs, columns=names)
friedmanY = pd.Series(data=Y, name='Y')
friedman = friedmanX.join(friedmanY)
friedman.head()
```
We want to be able to look at classification problems too, so let's bin the Y values to create a categorical feature. It should have *roughly* similar relationships to the X features as Y does.
```
# First, let's take a look at what Y looks like
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(friedmanY);
```
That's pretty normal, let's make two binary categories--one balanced, one unbalanced, to see the difference.
* balanced binary variable will be split evenly in half
* unbalanced binary variable will indicate whether $Y <5$.
```
friedman['Y_bal'] = friedman['Y'].apply(lambda y: 1 if (y < friedman.Y.median()) else 0)
friedman['Y_un'] = friedman['Y'].apply(lambda y: 1 if (y < 5) else 0)
print(friedman.Y_bal.value_counts(), '\n\n', friedman.Y_un.value_counts())
friedman.head()
# Finally, let's put it all into our usual X and y's
# (I already have the X dataframe as friedmanX, but I'm working backward to
# follow a usual flow)
X = friedman.drop(columns=['Y', 'Y_bal', 'Y_un'])
y = friedman.Y
y_bal = friedman.Y_bal
y_un = friedman.Y_un
```
#### Alright! Let's get to it! Remember, with each part, we are increasing complexity of the analysis and thereby increasing the computational costs and runtime.
So even before univariate selection--which compares each feature to the output feature one by one--there is a [VarianceThreshold](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.VarianceThreshold.html#sklearn.feature_selection.VarianceThreshold) object in sklearn.feature_selection. It defaults to getting rid of any features that are the same across all samples. Great for cleaning data in that respect.
The `threshold` parameter defaults to `0` to give the behavior described above. If you change it, make sure you have a good reason. Use with caution.
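As a quick illustration, here is a minimal sketch of the default behaviour on the `X` dataframe built above (none of the Friedman features are constant, so nothing gets removed):
```
from sklearn.feature_selection import VarianceThreshold

# With the default threshold of 0, only features that are identical across all
# samples are dropped; all 14 Friedman features have non-zero variance, so all survive.
vt = VarianceThreshold()
X_vt = vt.fit_transform(X)
print(X.shape, '->', X_vt.shape)
print('Kept features:', list(X.columns[vt.get_support()]))
```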
## Part 1: univariate selection
* Best for goal 2 - getting "a better understanding of the data, its structure and characteristics"
* unable to remove redundancy (for example selecting only the best feature among a subset of strongly correlated features)
* Super fast - can be used for baseline models or just after baseline
[sci-kit's univariariate feature selection objects and techniques](https://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection)
#### Y (continuous output)
options (they do what they sound like they do)
* SelectKBest
* SelectPercentile
both take the same parameter options for `score_func`
* `f_regression`: scores by correlation coefficient, f value, p value--basically automates what you can do by looking at a correlation matrix except without the ability to recognize collinearity
* `mutual_info_regression`: can capture non-linear correlations, but doesn't handle noise well
Let's take a look at mutual information (MI)
```
import sklearn.feature_selection as fe
MIR = fe.SelectKBest(fe.mutual_info_regression, k='all').fit(X, y)
MIR_scores = pd.Series(data=MIR.scores_, name='MI_Reg_Scores', index=names)
MIR_scores
```
#### Y_bal (balanced binary output)
options
* SelectKBest
* SelectPercentile
these options will cut out features with error rates above a certain tolerance level, defined by the parameter `alpha`
* SelectFpr (false positive rate--false positives predicted/total negatives in dataset)
* SelectFdr (false discovery rate--false positives predicted/total positives predicted)
* ~~SelectFwe (family-wise error--for multinomial classification tasks)~~
all have the same optons for parameter `score_func`
* `chi2`
* `f_classif`
* `mutual_info_classif`
```
MIC_b = fe.SelectFpr(fe.mutual_info_classif).fit(X, y_bal)
MIC_b_scores = pd.Series(data=MIC_b.scores_,
name='MIC_Bal_Scores', index=names)
MIC_b_scores
```
#### Y_un (unbalanced binary output)
```
MIC_u = fe.SelectFpr(fe.mutual_info_classif).fit(X, y_un)
MIC_u_scores = pd.Series(data=MIC_u.scores_,
name='MIC_Unbal_Scores', index=names)
MIC_u_scores
```
## Part 2: linear models and regularization
* L1 Regularization (Lasso for regression) is best for goal 1: "produces sparse solutions and as such is very useful selecting a strong subset of features for improving model performance" (forces coefficients to zero, telling you which you could remove--but doesn't handle multicollinearity)
* L2 Regularization (Ridge for regression) is best for goal 2: "can be used for data interpretation due to its stability and the fact that useful features tend to have non-zero coefficients"
* Also fast
[sci-kit's L1 feature selection](https://scikit-learn.org/stable/modules/feature_selection.html#l1-based-feature-selection) (can easily be switched to L2 using the parameter `penalty='l2'` for categorical targets or using `Ridge` instead of Lasso for continuous targets)
We won't do this here (though a brief sketch follows this list for reference), because
1. You know regression
2. The same principles apply as shown in Part 3 below with `SelectFromModel`
3. There's way cooler stuff coming up
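For reference, here is a minimal sketch (not part of the original lesson) of L1-based selection with `SelectFromModel` wrapped around a Lasso model on the same data; the `alpha` value is an arbitrary choice for illustration. Swapping in `Ridge` shows the L2 behaviour, where coefficients shrink but rarely reach exactly zero.
```
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel

# L1 (Lasso) drives the coefficients of unhelpful or redundant features toward zero,
# so SelectFromModel keeps only the features with (effectively) non-zero coefficients.
lasso = Lasso(alpha=0.01).fit(X, y)
l1_selector = SelectFromModel(lasso, prefit=True)
print('Features kept by L1 selection:', list(X.columns[l1_selector.get_support()]))
```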
## Part 3: random forests
* Best for goal 1, not 2 because:
* strong features can end up with low scores
* biased towards variables with many categories
* "require very little feature engineering and parameter tuning"
* Takes a little more time depending on your dataset - but a popular technique
[sci-kit's implementation of tree-based feature selection](https://scikit-learn.org/stable/modules/feature_selection.html#tree-based-feature-selection)
#### Y
```
from sklearn.ensemble import RandomForestRegressor as RFR
# Fitting a random forest regression
rfr = RFR().fit(X, y)
# Creating scores from feature_importances_ ranking (some randomness here)
rfr_scores = pd.Series(data=rfr.feature_importances_, name='RFR', index=names)
rfr_scores
```
#### Y_bal
```
from sklearn.ensemble import RandomForestClassifier as RFC
# Fitting a Random Forest Classifier
rfc_b = RFC().fit(X, y_bal)
# Creating scores from feature_importances_ ranking (some randomness here)
rfc_b_scores = pd.Series(data=rfc_b.feature_importances_, name='RFC_bal',
index=names)
rfc_b_scores
```
#### Y_un
```
# Fitting a Random Forest Classifier
rfc_u = RFC().fit(X, y_un)
# Creating scores from feature_importances_ ranking (some randomness here)
rfc_u_scores = pd.Series(data=rfc_u.feature_importances_,
name='RFC_unbal', index=names)
rfc_u_scores
```
### SelectFromModel
is a meta-transformer that can be used along with any estimator that has a `coef_` or `feature_importances_` attribute after fitting. The features are considered unimportant and removed, if the corresponding `coef_` or `feature_importances_` values are below the provided `threshold` parameter. Apart from specifying the `threshold` numerically, there are built-in heuristics for finding a `threshold` using a string argument. Available heuristics are `'mean'`, `'median'` and float multiples of these like `'0.1*mean'`.
```
# Random forest regression transformation of X (elimination of least important
# features)
rfr_transform = fe.SelectFromModel(rfr, prefit=True)
X_rfr = rfr_transform.transform(X)
# Random forest classifier transformation of X_bal (elimination of least important
# features)
rfc_b_transform = fe.SelectFromModel(rfc_b, prefit=True)
X_rfc_b = rfc_b_transform.transform(X)
# Random forest classifier transformation of X_un (elimination of least important
# features)
rfc_u_transform = fe.SelectFromModel(rfc_u, prefit=True)
X_rfc_u = rfc_u_transform.transform(X)
RF_comparisons = pd.DataFrame(data=np.array([rfr_transform.get_support(),
rfc_b_transform.get_support(),
rfc_u_transform.get_support()]).T,
columns=['RF_Regressor', 'RF_balanced_classifier',
'RF_unbalanced_classifier'],
index=names)
RF_comparisons
```
## Part 4: stability selection, RFE, and everything side by side
* These methods take longer since they are *wrapper methods* and build multiple ML models before giving results. "They both build on top of other (model based) selection methods such as regression or SVM, building models on different subsets of data and extracting the ranking from the aggregates."
* Stability selection is good for both goal 1 and 2: "among the top performing methods for many different datasets and settings"
* For categorical targets
* ~~[RandomizedLogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLogisticRegression.html)~~ (Deprecated) use [RandomizedLogisticRegression](https://thuijskens.github.io/stability-selection/docs/randomized_lasso.html#stability_selection.randomized_lasso.RandomizedLogisticRegression)
* [ExtraTreesClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html#sklearn.ensemble.ExtraTreesClassifier)
* For continuous targets
* ~~[RandomizedLasso](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLasso.html)~~ (Deprecated) use [RandomizedLasso](https://thuijskens.github.io/stability-selection/docs/randomized_lasso.html#stability_selection.randomized_lasso.RandomizedLogisticRegression)
* [ExtraTreesRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html#sklearn.ensemble.ExtraTreesRegressor)
Welcome to open-source, folks! [Here](https://github.com/scikit-learn/scikit-learn/issues/8995) is the original discussion to deprecate `RandomizedLogisticRegression` and `RandomizedLasso`. [Here](https://github.com/scikit-learn/scikit-learn/issues/9657) is a failed attempt to resurrect it. It looks like it'll be gone for good soon. So we shouldn't get dependent on it.
The alternatives from the deprecated scikit objects come from an official scikit-learn-contrib module called [stability_selection](https://github.com/scikit-learn-contrib/stability-selection). They also have a `StabilitySelection` object that acts similarly to scikit's `SelectFromModel`.
* recursive feature elimination (RFE) is best for goal 1
* [sci-kit's RFE and RFECV (RFE with built-in cross-validation)](https://scikit-learn.org/stable/modules/feature_selection.html#recursive-feature-elimination)
```
!pip install git+https://github.com/scikit-learn-contrib/stability-selection.git
```
Okay, I tried this package... it seems to have some problems... hopefully a good implementation of stability selection for Lasso and Logistic Regression will be created soon! In the meantime, scikit's RandomizedLasso and RandomizedLogisticRegression have not been removed yet, so you can fiddle with them! Just alter the commented-out code!
* import from scikit instead of stability-selection
* use scikit's `SelectFromModel` as shown above!
Ta Da!
#### Y
```
'''from stability_selection import (RandomizedLogisticRegression,
RandomizedLasso, StabilitySelection,
plot_stability_path)
# Stability selection using randomized lasso method
rl = RandomizedLasso(max_iter=2000)
rl_selector = StabilitySelection(base_estimator=rl, lambda_name='alpha',
n_jobs=2)
rl_selector.fit(X, y);
'''
from sklearn.ensemble import ExtraTreesRegressor as ETR
# Stability selection using randomized decision trees
etr = ETR(n_estimators=50).fit(X, y)
# Creating scores from feature_importances_ ranking (some randomness here)
etr_scores = pd.Series(data=etr.feature_importances_,
name='ETR', index=names)
etr_scores
from sklearn.linear_model import LinearRegression
# Recursive feature elimination with cross validation using linear regression
# as the model
lr = LinearRegression()
# rank all features, i.e. continue the elimination until the last one
rfe = fe.RFECV(lr)
rfe.fit(X, y)
rfe_score = pd.Series(data=(-1*rfe.ranking_), name='RFE', index=names)
rfe_score
```
#### Y_bal
```
# stability selection using randomized logistic regression
'''rlr_b = RandomizedLogisticRegression()
rlr_b_selector = StabilitySelection(base_estimator=rlr_b, lambda_name='C',
n_jobs=2)
rlr_b_selector.fit(X, y_bal);'''
from sklearn.ensemble import ExtraTreesClassifier as ETC
# Stability selection using randomized decision trees
etc_b = ETC(n_estimators=50).fit(X, y_bal)
# Creating scores from feature_importances_ ranking (some randomness here)
etc_b_scores = pd.Series(data=etc_b.feature_importances_,
name='ETC_bal', index=names)
etc_b_scores
from sklearn.linear_model import LogisticRegression
# Recursive feature elimination with cross validation using logistic regression
# as the model
logr_b = LogisticRegression(solver='lbfgs')
# rank all features, i.e. continue the elimination until the last one
rfe_b = fe.RFECV(logr_b)
rfe_b.fit(X, y_bal)
rfe_b_score = pd.Series(data=(-1*rfe_b.ranking_), name='RFE_bal', index=names)
rfe_b_score
```
#### Y_un
```
# stability selection using randomized logistic regression
'''rlr_u = RandomizedLogisticRegression(max_iter=2000)
rlr_u_selector = StabilitySelection(base_estimator=rlr_u, lambda_name='C')
rlr_u_selector.fit(X, y_un);'''
# Stability selection using randomized decision trees
etc_u = ETC(n_estimators=50).fit(X, y_un)
# Creating scores from feature_importances_ ranking (some randomness here)
etc_u_scores = pd.Series(data=etc_u.feature_importances_,
name='ETC_unbal', index=names)
etc_u_scores
# Recursive feature elimination with cross validation using logistic regression
# as the model
logr_u = LogisticRegression(solver='lbfgs')
# rank all features, i.e. continue the elimination until the last one
rfe_u = fe.RFECV(logr_u)
rfe_u.fit(X, y_un)
rfe_u_score = pd.Series(data=(-1*rfe_u.ranking_), name='RFE_unbal', index=names)
rfe_u_score
'''RL_comparisons = pd.DataFrame(data=np.array([rl_selector.get_support(),
rlr_b_selector.get_support(),
rlr_u_selector.get_support()]).T,
columns=['RandomLasso', 'RandomLog_bal',
'RandomLog_unbal'],
index=names)
RL_comparisons'''
comparisons = pd.concat([MIR_scores, MIC_b_scores, MIC_u_scores, rfr_scores,
rfc_b_scores, rfc_u_scores, etr_scores, etc_b_scores,
etc_u_scores, rfe_score, rfe_b_score, rfe_u_score],
axis=1)
comparisons
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaled_df = scaler.fit_transform(comparisons)
scaled_comparisons = pd.DataFrame(scaled_df, columns=comparisons.columns,
index=names)
scaled_comparisons
```
### What do you notice from the diagram below?
```
sns.heatmap(scaled_comparisons);
```
Head, Tim [Cross Validation Gone Wrong](https://betatim.github.io/posts/cross-validation-gone-wrong/)
>Choosing your input features is just one of the many choices you have to make when building your machine-learning application. Remember to make all decisions during the cross validation, otherwise you are in for a rude awakening when your model is confronted with unseen data for the first time.

\[Image by Angarita, Natalia for [Capgemini](https://www.capgemini.com/2016/05/machine-learning-has-transformed-many-aspects-of-our-everyday-life/)\]
**I don't fully support this diagram for a few reasons.**
* I would replace "Feature Engineering" with "Data Cleaning"
* Feature engineering can be done alongside either data cleaning or training your model--it can be done before *or* after splitting your data. (But it will need to be part of the final pipeline.)
* Any feature standardization happens **after** the split
* And you can use cross validation instead of an independent validation set
However **feature selection (Goal 1) is part of choosing and training a model and should happen *after* splitting**. Feature selection belongs safely **inside the dotted line**.
"But doesn't it make sense to make your decisions based on all the information?"
NO! Mr. Head has a point!
## The number of features you end up using *is* a hyperparameter. Don't cross the dotted line while hyperparameter tuning!!! Work on goal 1 AFTER splitting.
I know you want to see how your model is performing... "just real quick"... but don't do it!
...
Don't!
*(Kaggle does the initial train-test split for you. It doesn't even let you **see** the target values for the test data. How you like dem apples?)*

What you **can** do is create multiple "final" models by hyperparameter tuning different types of models (all inside the dotted line!), then use the final hold-out test to see which does best.
**All this is said with the caveat that you have a large enough dataset to support three way validation or a test set plus cross-validation**
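To make this concrete, here is a minimal sketch (not from the original lesson) of keeping Goal-1 feature selection inside the dotted line by wrapping it in a scikit-learn `Pipeline`, so the selector is re-fit on the training portion of every cross-validation fold:
```
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression

# Because the selector is a pipeline step, each CV fold fits SelectKBest only on
# that fold's training portion, so no information leaks from the held-out portion
# into the feature-selection step.
pipe = Pipeline([
    ('select', SelectKBest(f_regression, k=5)),
    ('model', LinearRegression()),
])
scores = cross_val_score(pipe, X, y, cv=5, scoring='r2')
print('Cross-validated R^2:', scores.mean())
```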
## On the flip side, feature *interpretation* (Goal 2) can be done with all the data, before splitting, since you are looking to get a full understanding underlying the relationships in the dataset.
# Scraping Amazon Reviews using Scrapy in Python Part 2
> Are you looking for a way to scrape Amazon reviews but don't know where to begin? If so, you may find this blog post very useful.
- toc: true
- badges: true
- comments: true
- author: Zeyu Guan
- categories: [spaCy, Python, Machine Learning, Data Mining, NLP, RandomForest]
- annotations: true
- image: https://www.freecodecamp.org/news/content/images/2020/09/wall-5.jpeg
- hide: false
## Required Packages
[wordcloud](https://github.com/amueller/word_cloud),
[geopandas](https://geopandas.org/en/stable/getting_started/install.html),
[nbformat](https://pypi.org/project/nbformat/),
[seaborn](https://seaborn.pydata.org/installing.html),
[scikit-learn](https://scikit-learn.org/stable/install.html)
## Now let's get started!
First thing first, you need to load all the necessary libraries:
```
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
from wordcloud import WordCloud
from wordcloud import STOPWORDS
import re
import plotly.graph_objects as go
import seaborn as sns
```
# Data Cleaning
Following the previous blog post, the raw data we scraped from Amazon looks like the screenshot below.

Even though that looks relatively clean, there are still some imperfections: the star1 and star2 columns need to be combined, the date needs to be split, and so on. The whole process can be found in my [github notebooks](https://github.com/christopherGuan/sample-ds-blog).
Below is the data after cleaning. It contains 6 columns and more than 500 rows.

# EDA
Below are the questions I was curious about, and the results generated by the analysis.
- Which rating (1-5) got the most and least?

- Which country are they targeting?

- In which month do people tend to give higher ratings?

- In which month do people leave the most comments?

- What are the useful words that people mentioned in the reviews?

# Sentiment Analysis (Method 1)
## What is sentiment analysis?
Essentially, sentiment analysis or sentiment classification falls under the broad category of text classification tasks, in which you are given a phrase or a list of phrases and your classifier is expected to determine whether the sentiment behind it is positive, negative, or neutral. To keep the problem a binary classification task, the third category is sometimes ignored. Recent work has also taken into account sentiments such as "somewhat positive" and "somewhat negative."
In this specific case, we categorize 4- and 5-star reviews into the positive group and 1- and 2-star reviews into the negative group.

Below are the most frequent words in reviews from the positive group and the negative group, respectively.
Positive review

Negative review

## Build up the first model
Now we can build a simple model that accepts reviews as input and predicts whether each review is positive or negative.
Because this is a classification task, we will train a simple logistic regression model.
- **Clean Data**
First, we create a new function to remove all punctuation from the text for later use.
```
def remove_punctuation(text):
final = "".join(u for u in text if u not in ("?", ".", ";", ":", "!",'"'))
return final
```
- **Split the Dataframe**
Now we split the dataset: 80% for training and 20% for testing. Each subset should contain only two variables: one indicating whether the review is positive or negative, and one containing the review text.

```
df['random_number'] = np.random.randn(len(df))
train = df[df['random_number'] <= 0.8]
test = df[df['random_number'] > 0.8]
```
- **Create a bag of words**
Here I would like to introduce a new package.
[Scikit-learn](https://scikit-learn.org/stable/install.html) is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities.
In this example, we are going to use [sklearn.feature_extraction.text.CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html?highlight=countvectorizer#sklearn.feature_extraction.text.CountVectorizer) to convert a collection of text documents to a matrix of token counts.
We need to convert the text into a bag-of-words representation because the logistic regression algorithm cannot operate on raw text.
```
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()  # define the bag-of-words vectorizer before using it
train_matrix = vectorizer.fit_transform(train['title'])
test_matrix = vectorizer.transform(test['title'])
```
- **Import Logistic Regression**
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
```
- **Split target and independent variables**
```
X_train = train_matrix
X_test = test_matrix
y_train = train['sentiment']
y_test = test['sentiment']
```
- **Fit model on data**
```
lr.fit(X_train,y_train)
```
- **Make predictions**
```
predictions = lr.predict(X_test)
```
The output will be either 1 or -1: 1 means the model predicts a positive review, and -1 a negative one.
## Testing
Now, we can test the accuracy of our model!
```
from sklearn.metrics import confusion_matrix, classification_report
new = np.asarray(y_test)
confusion_matrix(predictions,y_test)
print(classification_report(predictions,y_test))
```

The accuracy is as high as 89%!
# Sentiment Analysis (Method 2)
In this section you will learn how to build your own sentiment analysis classifier using Python and understand the basics of NLP (natural language processing). First, let's try a quick-and-dirty method that uses the [Naive Bayes classifier](https://www.datacamp.com/community/tutorials/simplifying-sentiment-analysis-python) to predict the sentiment of the Amazon product reviews.
Based on the application's requirements, we first put each review in a txt file and categorize them as negative or positive reviews in different folders.
```
#Find all negative review
neg = df[df["sentiment"] == -1].review
#Reset the index
neg.index = range(len(neg.index))
## Write each DataFrame to separate txt
for i in range(len(neg)):
data = neg[i]
with open(str(i) + ".txt","w") as file:
file.write(data + "\n")
```
Next, we sort the official NLTK movie_reviews file names and clear out their contents. In other words, we keep only the file names.
```
import os
import pandas as pd
#Get file names
file_names = os.listdir('/Users/zeyu/nltk_data/corpora/movie_reviews/neg')
#Convert pandas
neg_df = pd.DataFrame (file_names, columns = ['file_name'])
#split to sort
neg_df[['number','id']] = neg_df.file_name.apply(
lambda x: pd.Series(str(x).split("_")))
#change the number to be the index
neg_df_index = neg_df.set_index('number')
neg_org = neg_df_index.sort_index(ascending=True)
#del neg["id"]
neg_org.reset_index(inplace=True)
neg_org = neg_org.drop([0], axis=0).reset_index(drop=True)
neg_names = neg_org['file_name']
for file_name in neg_names:
t = open(f'/Users/zeyu/nltk_data/corpora/movie_reviews/neg/{file_name}', 'w')
t.write("")
t.close()
```
Next, we insert the content of the Amazon reviews into the official files, keeping their original file names.
```
#Get file names
file_names = os.listdir('/Users/zeyu/Desktop/DS/neg')
#Convert pandas
pos_df = pd.DataFrame (file_names, columns = ['file_name'])
pos_names = pos_df['file_name']
for index, file_name in enumerate(pos_names):
try:
t = open(f'/Users/zeyu/Desktop/DS/neg/{file_name}', 'r')
# t.write("")
t_val = ascii(t.read())
t.close()
writefname = pos_names_org[index]
t = open(f'/Users/zeyu/nltk_data/corpora/movie_reviews/neg/{writefname}', 'w')
t.write(t_val)
t.close()
except:
print(f'{index} Reading/writing Error')
```
Finally, we can run these few lines to predict the sentiment of the Amazon product reviews.
```
import nltk
from nltk.corpus import movie_reviews
import random
documents = [(list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
#All words, not unique.
random.shuffle(documents)
#Change to lower case. Count word appears.
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
#Only show first 2000.
word_features = list(all_words)[:2000]
def document_features(document):
document_words = set(document)
features = {}
for word in word_features:
features['contains({})'.format(word)] = (word in document_words)
return features
#Calculate the accuracy of the given.
featuresets = [(document_features(d), c) for (d,c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(5)
```
### Analysis of motifs using Motif Miner (RINGS tool that employs alpha frequent subtree mining)
```
csv_files = ["ABA_14361_100ug_v5.0_DATA.csv",
"ConA_13799-10ug_V5.0_DATA.csv",
'PNA_14030_10ug_v5.0_DATA.csv',
"RCAI_10ug_14110_v5.0_DATA.csv",
"PHA-E-10ug_13853_V5.0_DATA.csv",
"PHA-L-10ug_13856_V5.0_DATA.csv",
"LCA_10ug_13934_v5.0_DATA.csv",
"SNA_10ug_13631_v5.0_DATA.csv",
"MAL-I_10ug_13883_v5.0_DATA.csv",
"MAL_II_10ug_13886_v5.0_DATA.csv",
"GSL-I-B4_10ug_13920_v5.0_DATA.csv",
"jacalin-1ug_14301_v5.0_DATA.csv",
'WGA_14057_1ug_v5.0_DATA.csv',
"UEAI_100ug_13806_v5.0_DATA.csv",
"SBA_14042_10ug_v5.0_DATA.csv",
"DBA_100ug_13897_v5.0_DATA.csv",
"PSA_14040_10ug_v5.0_DATA.csv",
"HA_PuertoRico_8_34_13829_v5_DATA.csv",
'H3N8-HA_16686_v5.1_DATA.csv',
"Human-DC-Sign-tetramer_15320_v5.0_DATA.csv"]
csv_file_normal_names = [
r"\textit{Agaricus bisporus} agglutinin (ABA)",
r"Concanavalin A (Con A)",
r'Peanut agglutinin (PNA)',
r"\textit{Ricinus communis} agglutinin I (RCA I/RCA\textsubscript{120})",
r"\textit{Phaseolus vulgaris} erythroagglutinin (PHA-E)",
r"\textit{Phaseolus vulgaris} leucoagglutinin (PHA-L)",
r"\textit{Lens culinaris} agglutinin (LCA)",
r"\textit{Sambucus nigra} agglutinin (SNA)",
r"\textit{Maackia amurensis} lectin I (MAL-I)",
r"\textit{Maackia amurensis} lectin II (MAL-II)",
r"\textit{Griffonia simplicifolia} Lectin I isolectin B\textsubscript{4} (GSL I-B\textsubscript{4})",
r"Jacalin",
r'Wheat germ agglutinin (WGA)',
r"\textit{Ulex europaeus} agglutinin I (UEA I)",
r"Soybean agglutinin (SBA)",
r"\textit{Dolichos biflorus} agglutinin (DBA)",
r"\textit{Pisum sativum} agglutinin (PSA)",
r"Influenza hemagglutinin (HA) (A/Puerto Rico/8/34) (H1N1)",
r'Influenza HA (A/harbor seal/Massachusetts/1/2011) (H3N8)',
r"Human DC-SIGN tetramer"]
import sys
import os
import pandas as pd
import numpy as np
from scipy import interp
sys.path.append('..')
from ccarl.glycan_parsers.conversions import kcf_to_digraph, cfg_to_kcf
from ccarl.glycan_plotting import draw_glycan_diagram
from ccarl.glycan_graph_methods import generate_digraph_from_glycan_string
from ccarl.glycan_features import generate_features_from_subtrees
import ccarl.glycan_plotting
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
from sklearn.metrics import matthews_corrcoef, make_scorer, roc_curve, auc
import matplotlib.pyplot as plt
from collections import defaultdict
aucs = defaultdict(list)
ys = defaultdict(list)
probs = defaultdict(list)
motifs = defaultdict(list)
for fold in [1,2,3,4,5]:
print(f"Running fold {fold}...")
for csv_file in csv_files:
alpha = 0.8
minsup = 0.2
input_file = f'./temp_{csv_file}'
training_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/training_set_{csv_file}")
test_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/test_set_{csv_file}")
pos_glycan_set = training_data['glycan'][training_data.binding == 1].values
kcf_string = '\n'.join([cfg_to_kcf(x) for x in pos_glycan_set])
with open(input_file, 'w') as f:
f.write(kcf_string)
min_sup = int(len(pos_glycan_set) * minsup)
subtrees = os.popen(f"ruby Miner_cmd.rb {min_sup} {alpha} {input_file}").read()
subtree_graphs = [kcf_to_digraph(x) for x in subtrees.split("///")[0:-1]]
motifs[csv_file].append(subtree_graphs)
os.remove(input_file)
binding_class = training_data.binding.values
glycan_graphs = [generate_digraph_from_glycan_string(x, parse_linker=True,
format='CFG')
for x in training_data.glycan]
glycan_graphs_test = [generate_digraph_from_glycan_string(x, parse_linker=True,
format='CFG')
for x in test_data.glycan]
features = [generate_features_from_subtrees(subtree_graphs, glycan) for
glycan in glycan_graphs]
features_test = [generate_features_from_subtrees(subtree_graphs, glycan) for
glycan in glycan_graphs_test]
logistic_clf = LogisticRegression(penalty='l2', C=100, solver='lbfgs',
class_weight='balanced', max_iter=1000)
X = features
y = binding_class
logistic_clf.fit(X, y)
y_test = test_data.binding.values
X_test = features_test
fpr, tpr, _ = roc_curve(y_test, logistic_clf.predict_proba(X_test)[:,1], drop_intermediate=False)
aucs[csv_file].append(auc(fpr, tpr))
ys[csv_file].append(y_test)
probs[csv_file].append(logistic_clf.predict_proba(X_test)[:,1])
# Assess the number of subtrees generated for each CV round.
subtree_lengths = defaultdict(list)
for fold in [1,2,3,4,5]:
print(f"Running fold {fold}...")
for csv_file in csv_files:
alpha = 0.8
minsup = 0.2
input_file = f'./temp_{csv_file}'
training_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/training_set_{csv_file}")
test_data = pd.read_csv(f"../Data/CV_Folds/fold_{fold}/test_set_{csv_file}")
pos_glycan_set = training_data['glycan'][training_data.binding == 1].values
kcf_string = '\n'.join([cfg_to_kcf(x) for x in pos_glycan_set])
with open(input_file, 'w') as f:
f.write(kcf_string)
min_sup = int(len(pos_glycan_set) * minsup)
subtrees = os.popen(f"ruby Miner_cmd.rb {min_sup} {alpha} {input_file}").read()
subtree_graphs = [kcf_to_digraph(x) for x in subtrees.split("///")[0:-1]]
subtree_lengths[csv_file].append(len(subtree_graphs))
os.remove(input_file)
subtree_lengths = [y for x in subtree_lengths.values() for y in x]
print(np.mean(subtree_lengths))
print(np.max(subtree_lengths))
print(np.min(subtree_lengths))
def plot_multiple_roc(data):
'''Plot multiple ROC curves.
Prints out key AUC values (mean, median etc).
Args:
data (list): A list containing [y, probs] for each model, where:
y: True class labels
probs: Predicted probabilities
Returns:
Figure, Axes, Figure, Axes
'''
mean_fpr = np.linspace(0, 1, 100)
fig, axes = plt.subplots(figsize=(4, 4))
ax = axes
ax.set_title('')
#ax.legend(loc="lower right")
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.set_aspect('equal', adjustable='box')
auc_values = []
tpr_list = []
for y, probs in data:
#data_point = data[csv_file]
#y = data_point[7] # test binding
#X = data_point[8] # test features
#logistic_clf = data_point[0] # model
fpr, tpr, _ = roc_curve(y, probs, drop_intermediate=False)
tpr_list.append(interp(mean_fpr, fpr, tpr))
auc_values.append(auc(fpr, tpr))
ax.plot(fpr, tpr, color='blue', alpha=0.1, label=f'ROC curve (area = {auc(fpr, tpr): 2.3f})')
ax.plot([0,1], [0,1], linestyle='--', color='grey', linewidth=0.8, dashes=(5, 10))
mean_tpr = np.mean(tpr_list, axis=0)
median_tpr = np.median(tpr_list, axis=0)
upper_tpr = np.percentile(tpr_list, 75, axis=0)
lower_tpr = np.percentile(tpr_list, 25, axis=0)
ax.plot(mean_fpr, median_tpr, color='black')
ax.fill_between(mean_fpr, lower_tpr, upper_tpr, color='grey', alpha=.5,
label=r'$\pm$ 1 std. dev.')
fig.savefig("Motif_Miner_CV_ROC_plot_all_curves.svg")
fig2, ax2 = plt.subplots(figsize=(4, 4))
ax2.hist(auc_values, range=[0.5,1], bins=10, rwidth=0.9, color=(0, 114/255, 178/255))
ax2.set_xlabel("AUC value")
ax2.set_ylabel("Counts")
fig2.savefig("Motif_Miner_CV_AUC_histogram.svg")
print(f"Mean AUC value: {np.mean(auc_values): 1.3f}")
print(f"Median AUC value: {np.median(auc_values): 1.3f}")
print(f"IQR of AUC values: {np.percentile(auc_values, 25): 1.3f} - {np.percentile(auc_values, 75): 1.3f}")
return fig, axes, fig2, ax2, auc_values
# Plot ROC curves for all test sets
roc_data = [[y, prob] for y_fold, prob_fold in zip(ys.values(), probs.values()) for y, prob in zip(y_fold, prob_fold)]
_, _, _, _, auc_values = plot_multiple_roc(roc_data)
auc_values_ccarl = [0.950268817204301,
0.9586693548387097,
0.9559811827956988,
0.8686155913978494,
0.9351222826086956,
0.989010989010989,
0.9912587412587414,
0.9090909090909092,
0.9762626262626264,
0.9883597883597884,
0.9065533980582524,
0.9417475728155339,
0.8268608414239482,
0.964349376114082,
0.9322638146167558,
0.9178037686809616,
0.96361273554256,
0.9362139917695472,
0.9958847736625515,
0.9526748971193415,
0.952300785634119,
0.9315375982042648,
0.9705387205387206,
0.9865319865319865,
0.9849773242630385,
0.9862385321100917,
0.9862385321100918,
0.9606481481481481,
0.662037037037037,
0.7796296296296297,
0.9068627450980392,
0.915032679738562,
0.9820261437908496,
0.9893790849673203,
0.9882988298829882,
0.9814814814814815,
1.0,
0.8439153439153441,
0.9859813084112149,
0.9953271028037383,
0.8393308080808081,
0.8273358585858586,
0.7954545454545453,
0.807070707070707,
0.8966329966329966,
0.8380952380952381,
0.6201058201058202,
0.7179894179894181,
0.6778846153846154,
0.75,
0.9356060606060607,
0.8619528619528619,
0.8787878787878789,
0.9040816326530613,
0.7551020408163266,
0.9428694158075602,
0.9226804123711341,
0.8711340206185567,
0.7840909090909091,
0.8877840909090909,
0.903225806451613,
0.8705594120049,
0.9091465904450796,
0.8816455696202531,
0.8521097046413502,
0.8964521452145213,
0.9294554455445544,
0.8271452145214522,
0.8027272727272727,
0.8395454545454546,
0.8729967948717949,
0.9306891025641025,
0.9550970873786407,
0.7934686672550749,
0.8243601059135041,
0.8142100617828772,
0.9179611650485436,
0.8315533980582525,
0.7266990291262136,
0.9038834951456312,
0.9208916083916084,
0.7875,
0.9341346153846154,
0.9019230769230768,
0.9086538461538461,
0.9929245283018868,
0.9115566037735848,
0.9952830188679246,
0.9658018867924528,
0.7169811320754716,
0.935981308411215,
0.9405660377358491,
0.9905660377358491,
0.9937106918238994,
0.9302935010482181,
0.7564814814814815,
0.9375,
0.8449074074074074,
0.8668981481481483,
0.7978971962616823]
auc_value_means = [np.mean(auc_values[x*5:x*5+5]) for x in range(int(len(auc_values) / 5))]
auc_value_means_ccarl = [np.mean(auc_values_ccarl[x*5:x*5+5]) for x in range(int(len(auc_values_ccarl) / 5))]
auc_value_mean_glymmr = np.array([0.6067939 , 0.76044574, 0.66786624, 0.69578298, 0.81659623,
0.80536403, 0.77231548, 0.96195032, 0.70013384, 0.60017685,
0.77336818, 0.78193305, 0.66269668, 0.70333122, 0.54247748,
0.63003707, 0.79619231, 0.85141509, 0.9245296 , 0.63366329])
auc_value_mean_glymmr_best = np.array([0.77559242, 0.87452658, 0.75091636, 0.7511371 , 0.87450697,
0.82895628, 0.81083123, 0.96317065, 0.75810185, 0.82680149,
0.84747054, 0.8039597 , 0.69651882, 0.73431593, 0.582194 ,
0.67407767, 0.83049825, 0.88891509, 0.9345188 , 0.72702016])
auc_value_motiffinder = [0.9047619047619048, 0.9365601503759399, 0.6165413533834586, 0.9089068825910931,
0.4962962962962963, 0.6358816964285713, 0.8321078431372548, 0.8196576151121606, 0.8725400457665904,
0.830220713073005, 0.875, 0.7256367663344407, 0.8169291338582677, 0.9506818181818182, 0.7751351351351351,
0.9362947658402204, 0.6938461538461539, 0.6428571428571428, 0.7168021680216802, 0.5381136950904392] #Note, only from a single test-train split.
import seaborn as sns
sns.set(style="ticks")
plot_data = np.array([auc_value_mean_glymmr, auc_value_mean_glymmr_best, auc_value_motiffinder, auc_value_means, auc_value_means_ccarl]).T
ax = sns.violinplot(data=plot_data, cut=2, inner='quartile')
sns.swarmplot(data=plot_data, color='black')
ax.set_ylim([0.5, 1.05])
ax.set_xticklabels(["GLYMMR\n(mean)", "GLYMMR\n(best)", "MotifFinder", "Glycan\nMiner Tool", "CCARL"])
#ax.grid('off')
ax.set_ylabel("AUC")
ax.figure.savefig('method_comparison_violin_plot.svg')
auc_value_means_ccarl
print("CCARL Performance")
print(f"Median AUC value: {np.median(auc_value_means_ccarl): 1.3f}")
print(f"IQR of AUC values: {np.percentile(auc_value_means_ccarl, 25): 1.3f} - {np.percentile(auc_value_means_ccarl, 75): 1.3f}")
print("Glycan Miner Tool Performance")
print(f"Median AUC value: {np.median(auc_value_means): 1.3f}")
print(f"IQR of AUC values: {np.percentile(auc_value_means, 25): 1.3f} - {np.percentile(auc_value_means, 75): 1.3f}")
print("Glycan Miner Tool Performance")
print(f"Median AUC value: {np.median(auc_value_mean_glymmr_best): 1.3f}")
print(f"IQR of AUC values: {np.percentile(auc_value_mean_glymmr_best, 25): 1.3f} - {np.percentile(auc_value_mean_glymmr_best, 75): 1.3f}")
print("Glycan Miner Tool Performance")
print(f"Median AUC value: {np.median(auc_value_mean_glymmr): 1.3f}")
print(f"IQR of AUC values: {np.percentile(auc_value_mean_glymmr, 25): 1.3f} - {np.percentile(auc_value_mean_glymmr, 75): 1.3f}")
from matplotlib.backends.backend_pdf import PdfPages
sns.reset_orig()
import networkx as nx
for csv_file in csv_files:
with PdfPages(f"./motif_miner_motifs/glycan_motif_miner_motifs_{csv_file}.pdf") as pdf:
for motif in motifs[csv_file][0]:
fig, ax = plt.subplots()
ccarl.glycan_plotting.draw_glycan_diagram(motif, ax)
pdf.savefig(fig)
plt.close(fig)
glymmr_mean_stdev = np.array([0.15108904, 0.08300011, 0.11558078, 0.05259819, 0.061275 ,
0.09541182, 0.09239553, 0.05114523, 0.05406571, 0.16180131,
0.10345311, 0.06080207, 0.0479003 , 0.09898648, 0.06137992,
0.09813596, 0.07010635, 0.14010784, 0.05924527, 0.13165457])
glymmr_best_stdev = np.array([0.08808868, 0.04784959, 0.13252895, 0.03163248, 0.04401516,
0.08942411, 0.08344247, 0.05714308, 0.05716086, 0.05640053,
0.08649275, 0.05007289, 0.05452531, 0.05697662, 0.0490626 ,
0.1264917 , 0.04994508, 0.1030053 , 0.03359648, 0.12479809])
auc_value_std_ccarl = [np.std(auc_values_ccarl[x*5:x*5+5]) for x in range(int(len(auc_values_ccarl) / 5))]
print(r"Lectin & GLYMMR(mean) & GLYMMR(best) & Glycan Miner Tool & MotifFinder & CCARL \\ \hline")
for i, csv_file, name in zip(list(range(len(csv_files))), csv_files, csv_file_normal_names):
print(f"{name} & {auc_value_mean_glymmr[i]:0.3f} ({glymmr_mean_stdev[i]:0.3f}) & {auc_value_mean_glymmr_best[i]:0.3f} ({glymmr_best_stdev[i]:0.3f}) \
& {np.mean(aucs[csv_file]):0.3f} ({np.std(aucs[csv_file]):0.3f}) & {auc_value_motiffinder[i]:0.3f} & {auc_value_means_ccarl[i]:0.3f} ({auc_value_std_ccarl[i]:0.3f}) \\\\")
```
```
from __future__ import division
from __future__ import print_function
import sys
import math
import pickle
import copy
import numpy as np
import cv2
import matplotlib.pyplot as plt
from DataLoader import Batch
from Model import Model, DecoderType
from SamplePreprocessor import preprocess
# constants like filepaths
class Constants:
"filenames and paths to data"
fnCharList = '../model/charList.txt'
fnAnalyze = '../data/analyze.png'
fnPixelRelevance = '../data/pixelRelevance.npy'
fnTranslationInvariance = '../data/translationInvariance.npy'
fnTranslationInvarianceTexts = '../data/translationInvarianceTexts.pickle'
gtText = 'are'
distribution = 'histogram' # 'histogram' or 'uniform'
def odds(val):
return val / (1 - val)
def weightOfEvidence(origProb, margProb):
return math.log2(odds(origProb)) - math.log2(odds(margProb))
def analyzePixelRelevance():
"simplified implementation of paper: Zintgraf et al - Visualizing Deep Neural Network Decisions: Prediction Difference Analysis"
# setup model
model = Model(open(Constants.fnCharList).read(), DecoderType.BestPath, mustRestore=True)
# read image and specify ground-truth text
img = cv2.imread(Constants.fnAnalyze, cv2.IMREAD_GRAYSCALE)
(w, h) = img.shape
assert Model.imgSize[1] == w
# compute probability of gt text in original image
batch = Batch([Constants.gtText], [preprocess(img, Model.imgSize)])
(_, probs) = model.inferBatch(batch, calcProbability=True, probabilityOfGT=True)
origProb = probs[0]
grayValues = [0, 63, 127, 191, 255]
if Constants.distribution == 'histogram':
bins = [0, 31, 95, 159, 223, 255]
(hist, _) = np.histogram(img, bins=bins)
pixelProb = hist / sum(hist)
elif Constants.distribution == 'uniform':
pixelProb = [1.0 / len(grayValues) for _ in grayValues]
else:
raise Exception('unknown value for Constants.distribution')
# iterate over all pixels in image
pixelRelevance = np.zeros(img.shape, np.float32)
for x in range(w):
for y in range(h):
# try a subset of possible grayvalues of pixel (x,y)
imgsMarginalized = []
for g in grayValues:
imgChanged = copy.deepcopy(img)
imgChanged[x, y] = g
imgsMarginalized.append(preprocess(imgChanged, Model.imgSize))
# put them all into one batch
batch = Batch([Constants.gtText]*len(imgsMarginalized), imgsMarginalized)
# compute probabilities
(_, probs) = model.inferBatch(batch, calcProbability=True, probabilityOfGT=True)
# marginalize over pixel value (assume uniform distribution)
margProb = sum([probs[i] * pixelProb[i] for i in range(len(grayValues))])
pixelRelevance[x, y] = weightOfEvidence(origProb, margProb)
print(x, y, pixelRelevance[x, y], origProb, margProb)
np.save(Constants.fnPixelRelevance, pixelRelevance)
def analyzeTranslationInvariance():
# setup model
model = Model(open(Constants.fnCharList).read(), DecoderType.BestPath, mustRestore=True)
# read image and specify ground-truth text
img = cv2.imread(Constants.fnAnalyze, cv2.IMREAD_GRAYSCALE)
(w, h) = img.shape
assert Model.imgSize[1] == w
imgList = []
for dy in range(Model.imgSize[0]-h+1):
targetImg = np.ones((Model.imgSize[1], Model.imgSize[0])) * 255
targetImg[:,dy:h+dy] = img
imgList.append(preprocess(targetImg, Model.imgSize))
# put images and gt texts into batch
batch = Batch([Constants.gtText]*len(imgList), imgList)
# compute probabilities
(texts, probs) = model.inferBatch(batch, calcProbability=True, probabilityOfGT=True)
# save results to file
f = open(Constants.fnTranslationInvarianceTexts, 'wb')
pickle.dump(texts, f)
f.close()
np.save(Constants.fnTranslationInvariance, probs)
def showResults():
# 1. pixel relevance
pixelRelevance = np.load(Constants.fnPixelRelevance)
plt.figure('Pixel relevance')
plt.imshow(pixelRelevance, cmap=plt.cm.jet, vmin=-0.25, vmax=0.25)
plt.colorbar()
img = cv2.imread(Constants.fnAnalyze, cv2.IMREAD_GRAYSCALE)
plt.imshow(img, cmap=plt.cm.gray, alpha=.4)
# 2. translation invariance
probs = np.load(Constants.fnTranslationInvariance)
f = open(Constants.fnTranslationInvarianceTexts, 'rb')
texts = pickle.load(f)
texts = ['%d:'%i + texts[i] for i in range(len(texts))]
f.close()
plt.figure('Translation invariance')
plt.plot(probs, 'o-')
plt.xticks(np.arange(len(texts)), texts, rotation='vertical')
plt.xlabel('horizontal translation and best path')
plt.ylabel('text probability of "%s"'%Constants.gtText)
# show both plots
plt.show()
if __name__ == '__main__':
if len(sys.argv)>1:
if sys.argv[1]=='--relevance':
print('Analyze pixel relevance')
analyzePixelRelevance()
elif sys.argv[1]=='--invariance':
print('Analyze translation invariance')
analyzeTranslationInvariance()
else:
print('Show results')
showResults()
```
# Assignment (TP) Description
## Introduction
In this assignment you are asked to develop a program that computes parameters of interest for a series RLC circuit.
Taking as input data:
* εmax
* the source frequency in Hz
* the values of R, L and C
The program must determine:
* Imax
* the potential difference across each component
* the average power
* the Q factor
* the power factor (cosφ)
* the complex impedance Z
* the resonance frequency of the system
* which of the three impedances dominates the behaviour of the circuit
It must also represent graphically:
* The voltage phasor diagram of the RLC circuit (`quiver` function); see the sketch after this list.
* The voltages VL, VC, VR and ε superimposed in the time domain.
* The equations of the EMF and the current with the values corresponding to the input data.
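A minimal sketch (with made-up voltage values, not the graded solution) of how the phasor diagram could be drawn with matplotlib's `quiver`:
```
import numpy
from matplotlib import pyplot

# Example phasor magnitudes [V] and phases [rad] (assumed values, for illustration only)
phasors = {'VR': (3.0, 0.0),              # in phase with the current
           'VL': (5.0, numpy.pi / 2),     # leads the current by 90 degrees
           'VC': (2.0, -numpy.pi / 2)}    # lags the current by 90 degrees

fig, ax = pyplot.subplots()
for name, (mag, phase) in phasors.items():
    x, y = mag * numpy.cos(phase), mag * numpy.sin(phase)
    ax.quiver(0, 0, x, y, angles='xy', scale_units='xy', scale=1)
    ax.text(x, y, name)                   # label the tip of each phasor
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.set_aspect('equal')
pyplot.show()
```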
## Implementation requirements
1. Development must be carried out in the Google Colaboratory environment (BTW, where you are reading this right now), which is based on the Python programming language.
2. Create a new notebook for this assignment. When the assignment is handed in, the notebook must have public visibility.
* During development it must be shared with [email protected], [email protected] and [email protected], with permission to view and add comments *(not to edit)*
* To create a new notebook:
>File → New notebook
3. Every significant change must be saved by pinning a revision of the file: `File → Save and pin revision`.
4. Each code block must be documented with comments that explain how it works. It is not necessary to comment every single statement, but it is necessary to understand what they do.
5. Each piece of program functionality must be implemented in a separate code block.
6. Each code block must be preceded by a title introducing its functionality (by adding a text block before it).
7. Data entry for running the program must be done through an [interactive form](https://colab.research.google.com/notebooks/forms.ipynb)
8. Include a schematic of the circuit; it can be drawn at https://www.circuit-diagram.org/editor/#
#Tutorial
##Google Colaboratory (Colab) y Jupyter Notebook
El espacio en el que vamos a trabajar se llama Google Colaboratory, esta es una plataforma de cómputo en la nube. La forma que dispone para interactuar con ella es [Jupiter Notebook](https://jupyter.org/), que permite de manera interactiva escribir texto y ejecutar código en Python.
## Usando Markdown
Markdown son un conjunto de instrucciones mediante las cuales le indicamos a Jupyter Notebook qué formato darle al texto. ***Doble click en cualquier parte del texto de este notebook ver cómo está escrito usando markdown***.
## Trabajando en equipo
Para trabajar colaborativamente, hacer click en el botón `Compartir` del panel superior.
# Programación en Python
##Importar librerías
```
#hello, I am a comment
import numpy #import the library of mathematical functions
from matplotlib import pyplot #import the Cartesian plotting library
```
## Working with Variables
To create a variable or assign it a value we use the "=" sign.
Variables persist throughout the whole notebook; they can be used in any code block once they have been created.
```
a=7 #create the variable "a" and assign it the value 7
#a whole code block can be dedicated just to this
```
In the block below we use the variable "a" from above.
```
a+5 #perform an operation with the variable "a"; the result of the operation appears below
```
##Entering Data Using Forms
Forms make it possible to capture user data to be used later when the code runs.
* To insert a form, press the "more cell actions" button (**⋮**) to the right of the code cell. *Do not use `Insertar→Agregar campo de formulario` (Insert→Add a form field), because this option currently does not work*.
* Examples of how to use forms can be found [here](https://colab.research.google.com/notebooks/forms.ipynb#scrollTo=_7gRpQLXQSID).
* From (**⋮**) you can hide the code associated with a form.
```
#@title Form examples
valor_formulario_numerico = 5 #@param {type:"number"}
valor_formulario_slider = 16 #@param {type:"slider", min:0, max:100, step:1}
```
##Mathematical Functions
* The arithmetic functions are built into Python through [operators](https://www.aprendeprogramando.es/cursos-online/python/operadores-aritmeticos/operadores-aritmeticos).
* Additional functions, such as the trigonometric ones, are invoked from the numpy library. Once it is imported, we can use it with the syntax:
>`numpy.<function name>`
* [Tutorial on the numpy library's functions](https://www.interactivechaos.com/manual/tutorial-de-numpy/funciones-universales-trigonometricas)
```
#Examples of using mathematical functions
#The print function displays a message supplied by the programmer,
#the contents of a variable, or the result of a more complex operation on variables
print(f'Numeric form value: {valor_formulario_numerico}')
print(f'Numeric form value squared: {valor_formulario_numerico**2}')
print(f'Square root of the numeric form value: {numpy.sqrt(valor_formulario_numerico)}')
print(f'Sine of the slider form value: {numpy.sin(valor_formulario_slider)}')
```
##Generating Plots
```
pi=numpy.pi #get the value of π from numpy and store it in the variable pi
t = [i for i in numpy.arange(0,2*pi,pi/1000)] #generate a vector of 2000 elements,
#with values between 0 and 2π in steps of π/1000
corriente=numpy.sin(t) #compute the sine of the values of "t" (the current)
tension=numpy.sin(t+numpy.ones(len(t))*pi/2) #sine of the values of "t" plus a phase angle of π/2 (the voltage)
pyplot.figure(dpi=100) #set the resolution of the figure (before plotting, so the plot lands on this figure)
pyplot.plot(t,corriente,t,tension, '-') #plot current and voltage as a function of t
pyplot.xlabel("X axis") #label the X axis
pyplot.ylabel("Y axis") #label the Y axis
pyplot.title("Sine wave") #add the title to the plot
pyplot.legend(["Current","Voltage"])
pyplot.show() #display the plot once it is ready
```
### Displaying Vectors
To display vectors we use the quiver function:
`(origin, X coordinates of the arrow tips, Y coordinates of the arrows, color=colors of the arrows)`
```
V = numpy.array([[1,1],[-2,2],[4,-7]])
origin = [0,0,0], [0,0,0] # origin point
pyplot.grid()
pyplot.quiver([0,0,0], [0,0,0], V[:,0], V[:,1], color=['r','b','g'], scale=23)
pyplot.show()
```
##Displaying Formulas
To display formulas we can turn to pyplot `(another library or another function could be used if desired)`.
Formulas entered this way are written in LaTeX.
> A good LaTeX formula editor: https://www.latex4technics.com/
To generate the formula we use pyplot.text:
* `pyplot.text(x position, y position, r'latex formula', fontsize=font size)`
> * In the example, %i indicates that this symbol should be replaced by an integer value or by the contents of a variable to the right of the text
> * Changing %i to %f would indicate that the number outside the text should be interpreted as a value with decimals
* `pyplot.axis('off')` indicates that nothing other than the text should be displayed
* Finally, `pyplot.show()` displays the equation
```
#add text
pyplot.text(1, 1,r'$\alpha > \beta > %i$'% valor_formulario_slider,fontsize=40)
pyplot.axis('off')
pyplot.show() #or savefig
```
```
import json
import random
import numpy as np
import tensorflow as tf
from collections import deque
from keras.models import Sequential
from keras.optimizers import RMSprop
from keras.layers import Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras import backend as K
import datetime
import itertools
import matplotlib.pyplot as plt
import pandas as pd
import scipy as sp
import time
import math
from matplotlib.colors import LinearSegmentedColormap
import colorsys
import numpy as np
from data_retrieval_relocation_3ksol_reloc import INSTANCEProvider
from kbh_yard_b2b_relocation import KBH_Env #This is the environment of the shunting yard
from dqn_kbh_colfax_relocation_test_agent import DQNAgent
# this function returns random colors for visualisation of learning.
def rand_cmap(nlabels, type='soft', first_color_black=True, last_color_black=False):
# Generate soft pastel colors, by limiting the RGB spectrum
if type == 'soft':
low = 0.6
high = 0.95
randRGBcolors = [(np.random.uniform(low=low, high=high),
np.random.uniform(low=low, high=high),
np.random.uniform(low=low, high=high)) for i in range(nlabels)]
if first_color_black:
randRGBcolors[0] = [0, 0, 0]
if last_color_black:
randRGBcolors[-1] = [0, 0, 0]
random_colormap = LinearSegmentedColormap.from_list('new_map', randRGBcolors, N=nlabels)
return random_colormap
#1525445230 is the 185k expensive relocation model.
for model_nr in ['1525445230']:
#which model to load.
test_case = model_nr
#LOAD THE INSTANCE PROVIDER
ig = INSTANCEProvider()
instances = ig.instances
# Create environment KBH
yrd = KBH_Env()
# Create the DQNAgent with the CNN approximation of the Q-function and its experience replay and training functions.
# load the trained model.
agent = DQNAgent(yrd, True, test_case)
# set epsilon to 0 to act just greedy
agent.epsilon = 0
#new_cmap = rand_cmap(200, type='soft', first_color_black=True, last_color_black=False, verbose=True)
visualization = False
n = len(instances)
# result vectors
original_lengths = []
terminated_at_step = []
success = []
relocations = []
print_count = 0
# train types different tracks?
type_step_track = []
for instance in instances:
nr_relocations = 0
if print_count % 100 == 0:
print(print_count)
print_count = print_count + 1
#Initialize problem
event_list = ig.get_instance(instance)
steps, t, total_t, score= len(event_list), 0, 0, 0
state = yrd.reset(event_list) # Get first observation based on the first train arrival.
history = np.reshape(state, (
1, yrd.shape[0], yrd.shape[1], yrd.shape[2])) # reshape state into tensor, which we call history.
done, busy_relocating = False, False
if visualization:
#visualize learning
new_cmap = rand_cmap(200, type='soft', first_color_black=True, last_color_black=False)
if visualization == True:
plt.imshow(np.float32(history[0][0]), cmap=new_cmap, interpolation='nearest')
plt.show()
while not done:
action = agent.get_action(history) # RL choose action based on observation
if visualization == True:
print(agent.model.predict(history))
print(action+1)
# # RL take action and get next observation and reward
# # note the +1 at action
# save for arrival activities the parking location
event_list_temp = event_list.reset_index(drop=True).copy()
if event_list_temp.event_type[0]=='arrival':
train_type = event_list_temp.composition[0]
type_step_track.append({'type': train_type, 'action': action+1, 'step':t, 'instance_id': instance})
# based on that action now let environment go to new state
event = event_list.iloc[0]
# check if after this we are done...
done_ = True if len(event_list) == 1 else False # then there is no next event
# if done_:
# print("Reached the end of a problem!")
if busy_relocating:
# here we do not drop an event from the event list.
coming_arrivals = event_list.loc[event_list['event_type'] == 'arrival'].reset_index(drop=True)
coming_departures = event_list.loc[event_list['event_type'] == 'departure'].reset_index(drop=True)
next_state, reward, done = yrd.reloc_destination_step(event, event_list, action+1, coming_arrivals, coming_departures, done_)
nr_relocations += 1
busy_relocating = False
else:
# These operations below are expensive: maybe just use indexing.
event_list.drop(event_list.index[:1], inplace=True)
coming_arrivals = event_list.loc[event_list['event_type'] == 'arrival'].reset_index(drop=True)
coming_departures = event_list.loc[event_list['event_type'] == 'departure'].reset_index(drop=True)
# do step
next_state, reward, done = yrd.step(action+1, coming_arrivals, coming_departures, event, event_list, done_)
busy_relocating = True if reward == -0.5 else False
history_ = np.float32(np.reshape(next_state, (1, yrd.shape[0], yrd.shape[1], yrd.shape[2])))
score += reward # log direct reward of action
if visualization == True:
#show action
plt.imshow(np.float32(history_[0][0]), cmap=new_cmap, interpolation='nearest')
plt.show()
time.sleep(0.05)
if reward == -1:
time.sleep(1)
print(reward)
if done: # based on what the environment returns.
#print('ended at step' , t+1)
#print('original length', steps)
original_lengths.append(steps)
terminated_at_step.append(t+1)
relocations.append(nr_relocations)
if int(np.unique(history_)[0]) == 1: #then we are in win state
success.append(1)
else:
success.append(0)
break;
history = history_ # next state now becomes the current state.
t += 1 # next step in this episode
#save data needed for Entropy calculations.
df_type_step_track = pd.DataFrame.from_records(type_step_track)
df_type_step_track['strtype'] = df_type_step_track.apply(lambda row: str(row.type), axis = 1)
df_type_step_track.strtype = df_type_step_track.strtype.astype('category')
filename = 'data_'+model_nr+'_relocation_arrival_actions.csv'
df_type_step_track.to_csv(filename)
# analysis_runs = pd.DataFrame(
# {'instance_id': instances,
# 'original_length': original_lengths,
# 'terminated_at_step': terminated_at_step
# })
# analysis_runs['solved'] = analysis_runs.apply(lambda row: 1 if row.original_length == row.terminated_at_step else 0, axis =1 )
# analysis_runs['tried'] = analysis_runs.apply(lambda row: 1 if row.terminated_at_step != -1 else 0, axis =1)
# analysis_runs['percentage'] = analysis_runs.apply(lambda row: row.solved/755, axis=1)
# analysis_runs.to_csv('best_model_solved_instances.csv')
# print('Model: ', model_nr)
# summary = analysis_runs.groupby('original_length', as_index=False)[['solved', 'tried', 'percentage']].sum()
# print(summary)
# #print hist
# %matplotlib inline
# #%%
# # analyse the parking actions per step and train type
# df_type_step_track = pd.DataFrame.from_records(type_step_track)
# bins = [1,2,3,4,5,6,7,8,9,10]
# plt.hist(df_type_step_track.action, bins, align='left')
# #prepare for save
# df_type_step_track['strtype'] = df_type_step_track.apply(lambda row: str(row.type), axis = 1)
# df_type_step_track.strtype = df_type_step_track.strtype.astype('category')
# filename = 'data_'+model_nr+'_paper.csv'
# df_type_step_track.to_csv(filename)
analysis_runs = pd.DataFrame(
{'instance_id': instances,
'original_length': original_lengths,
'terminated_at_step': terminated_at_step,
'success': success,
'nr_relocations': relocations
})
analysis_runs.sort_values('terminated_at_step')
print(analysis_runs.loc[analysis_runs.success == 0].instance_id.to_string(index=False))
analysis_runs.loc[analysis_runs.success == 1].copy().groupby('nr_relocations')[['instance_id']].count()
summary = analysis_runs.groupby('original_length', as_index=False)[['success']].sum()
print(summary)
summary = analysis_runs.groupby('original_length', as_index=False)[['success']].mean()
print(summary)
max_reloc = max(analysis_runs.nr_relocations)
print(max_reloc)
plt.hist(analysis_runs.nr_relocations, bins=range(0,max_reloc+2), align='left')
import seaborn as sns
sns.set(style="darkgrid")
g = sns.FacetGrid(analysis_runs, col="original_length", margin_titles=True)
bins = range(0,max_reloc+2)
g.map(plt.hist, "nr_relocations", color="steelblue", bins=bins, lw=0, align='left')
print(analysis_runs.loc[analysis_runs.success == 1].groupby('original_length', as_index=False)[['nr_relocations']].mean())
```
# CODE HAS BEEN RUN UNTIL HERE.
# analysis of mistakes
```
analysis_runs.loc[analysis_runs.success == 0].sort_values('terminated_at_step')
#plt.hist(analysis_runs.loc[analysis_runs.success == 0].terminated_at_step, bins=8)
len(analysis_runs.loc[analysis_runs.success == 0])
analysis_runs['instance_size'] = analysis_runs.apply(lambda row: str(row.original_length).replace('37', '14').replace('41', '15').replace('43', '16').replace('46','17'), axis=1)
import seaborn as sns
sns.set(style="darkgrid")
bins = [0,5,10,15,20,25,30,35,40,45,50]
g = sns.FacetGrid(analysis_runs.loc[analysis_runs.success == 0], col="instance_size", margin_titles=True)
g.set(ylim=(0, 100), xlim=(0,50))
g.map(plt.hist, "terminated_at_step", color="steelblue", bins=bins, lw=0)
plt.savefig('185k_failures.eps')  # seaborn no longer exposes plt; use matplotlib's pyplot directly
```
<a href="https://colab.research.google.com/github/dauparas/tensorflow_examples/blob/master/VAE_cell_cycle.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
https://github.com/PMBio/scLVM/blob/master/tutorials/tcell_demo.ipynb
Variational Autoencoder Model (VAE) with latent subspaces based on:
https://arxiv.org/pdf/1812.06190.pdf
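In the simplified version implemented below only a single latent code $z$ is kept, and (up to constants) the quantity that is minimized is a $\beta$-weighted reconstruction-plus-KL objective of roughly this form:

$$\mathcal{L}(x) = \underbrace{\big\lVert x - \hat{x}(z) \big\rVert^2}_{\text{reconstruction (MSE)}} \;+\; \beta\, D_{\mathrm{KL}}\!\big(q(z \mid x)\,\big\Vert\, p(z)\big), \qquad z \sim q(z \mid x),$$

with $p(z)$ a fixed narrow Gaussian prior and $\beta$ the small weight set in the code (`beta = 0.000001`).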
```
#Step 1: import dependencies
from __future__ import division  # must come before the other imports in this cell
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from keras import regularizers
import time
import tensorflow_probability as tfp
tfd = tfp.distributions
%matplotlib inline
plt.style.use('dark_background')
import pandas as pd
import os
from matplotlib import cm
import h5py
import scipy as SP
import pylab as PL
data = os.path.join('data_Tcells_normCounts.h5f')
f = h5py.File(data,'r')
Y = f['LogNcountsMmus'][:] # gene expression matrix
tech_noise = f['LogVar_techMmus'][:] # technical noise
genes_het_bool=f['genes_heterogen'][:] # index of heterogeneous genes
geneID = f['gene_names'][:] # gene names
cellcyclegenes_filter = SP.unique(f['cellcyclegenes_filter'][:].ravel() -1) # idx of cell cycle genes from GO
cellcyclegenes_filterCB = f['ccCBall_gene_indices'][:].ravel() -1 # idx of cell cycle genes from cycle base ...
# filter cell cycle genes
idx_cell_cycle = SP.union1d(cellcyclegenes_filter,cellcyclegenes_filterCB)
# determine non-zero counts
idx_nonzero = SP.nonzero((Y.mean(0)**2)>0)[0]
idx_cell_cycle_noise_filtered = SP.intersect1d(idx_cell_cycle,idx_nonzero)
# subset gene expression matrix
Ycc = Y[:,idx_cell_cycle_noise_filtered]
plt = PL.subplot(1,1,1);
PL.imshow(Ycc,cmap=cm.RdBu,vmin=-3,vmax=+3,interpolation='None');
#PL.colorbar();
plt.set_xticks([]);
plt.set_yticks([]);
PL.xlabel('genes');
PL.ylabel('cells');
X = np.delete(Y, idx_cell_cycle_noise_filtered, axis=1)
X = Y #base case
U = Y[:,idx_cell_cycle_noise_filtered]
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
indx_small_mean = np.argwhere(mean < 0.00001)
X = np.delete(X, indx_small_mean, axis=1)
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
fano = variance/mean
print(fano.shape)
indx_small_fano = np.argwhere(fano < 1.0)
X = np.delete(X, indx_small_fano, axis=1)
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
fano = variance/mean
print(fano.shape)
#Reconstruction loss
def x_given_z(z, output_size):
    with tf.variable_scope('M/x_given_w_z'):
        act = tf.nn.leaky_relu
        h = z
        h = tf.layers.dense(h, 8, act)
        h = tf.layers.dense(h, 16, act)
        h = tf.layers.dense(h, 32, act)
        h = tf.layers.dense(h, 64, act)
        h = tf.layers.dense(h, 128, act)
        h = tf.layers.dense(h, 256, act)
        loc = tf.layers.dense(h, output_size)
        #log_variance = tf.layers.dense(x, latent_size)
        #scale = tf.nn.softplus(log_variance)
        scale = 0.01*tf.ones(tf.shape(loc))
        return tfd.MultivariateNormalDiag(loc, scale)
#KL term for z
def z_given_x(x, latent_size): #+
    with tf.variable_scope('M/z_given_x'):
        act = tf.nn.leaky_relu
        h = x
        h = tf.layers.dense(h, 256, act)
        h = tf.layers.dense(h, 128, act)
        h = tf.layers.dense(h, 64, act)
        h = tf.layers.dense(h, 32, act)
        h = tf.layers.dense(h, 16, act)
        h = tf.layers.dense(h, 8, act)
        loc = tf.layers.dense(h,latent_size)
        log_variance = tf.layers.dense(h, latent_size)
        scale = tf.nn.softplus(log_variance)
        # scale = 0.01*tf.ones(tf.shape(loc))
        return tfd.MultivariateNormalDiag(loc, scale)
def z_given(latent_size):
    with tf.variable_scope('M/z_given'):
        loc = tf.zeros(latent_size)
        scale = 0.01*tf.ones(tf.shape(loc))
        return tfd.MultivariateNormalDiag(loc, scale)
#Connect encoder and decoder and define the loss function
tf.reset_default_graph()
x_in = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_in')
x_out = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_out')
z_latent_size = 2
beta = 0.000001
#KL_z
zI = z_given(z_latent_size)
zIx = z_given_x(x_in, z_latent_size)
zIx_sample = zIx.sample()
zIx_mean = zIx.mean()
#kl_z = tf.reduce_mean(zIx.log_prob(zIx_sample)- zI.log_prob(zIx_sample))
kl_z = tf.reduce_mean(tfd.kl_divergence(zIx, zI)) #analytical
#Reconstruction
xIz = x_given_z(zIx_sample, X.shape[1])
rec_out = xIz.mean()
rec_loss = tf.losses.mean_squared_error(x_out, rec_out)
loss = rec_loss + beta*kl_z
optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
#Helper function
def batch_generator(features, x, u, batch_size):
    """Function to create python generator to shuffle and split features into batches along the first dimension."""
    idx = np.arange(features.shape[0])
    np.random.shuffle(idx)
    for start_idx in range(0, features.shape[0], batch_size):
        end_idx = min(start_idx + batch_size, features.shape[0])
        part = idx[start_idx:end_idx]
        yield features[part,:], x[part,:] , u[part, :]
n_epochs = 5000
batch_size = X.shape[0]
start = time.time()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(n_epochs):
        gen = batch_generator(X, X, U, batch_size) #create batch generator
        rec_loss_ = 0
        kl_z_ = 0
        for j in range(np.int(X.shape[0]/batch_size)):
            x_in_batch, x_out_batch, u_batch = gen.__next__()
            _, rec_loss__, kl_z__= sess.run([optimizer, rec_loss, kl_z], feed_dict={x_in: x_in_batch, x_out: x_out_batch})
            rec_loss_ += rec_loss__
            kl_z_ += kl_z__
        if (i+1)% 50 == 0 or i == 0:
            zIx_mean_, rec_out_= sess.run([zIx_mean, rec_out], feed_dict ={x_in:X, x_out:X})
            end = time.time()
            print('epoch: {0}, rec_loss: {1:.3f}, kl_z: {2:.2f}'.format((i+1), rec_loss_/(1+np.int(X.shape[0]/batch_size)), kl_z_/(1+np.int(X.shape[0]/batch_size))))
            start = time.time()
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=2, n_iter=7, random_state=42)
svd.fit(U.T)
print(svd.explained_variance_ratio_)
print(svd.explained_variance_ratio_.sum())
print(svd.singular_values_)
U_ = svd.components_
U_ = U_.T
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 2, figsize=(14,5))
axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,0], cmap='viridis', s=5.0);
axs[0].set_xlabel('z1')
axs[0].set_ylabel('z2')
fig.suptitle('X1')
plt.show()
fig, axs = plt.subplots(1, 2, figsize=(14,5))
# note: this simplified notebook has no separate w latent, so show z colored by the first SVD component here
axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,0], cmap='viridis', s=5.0);
axs[0].set_xlabel('z1')
axs[0].set_ylabel('z2')
axs[1].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0);
axs[1].set_xlabel('z1')
axs[1].set_ylabel('z2')
fig.suptitle('X1')
plt.show()
error = np.abs(X-rec_out_)
plt.plot(np.reshape(error, -1), '*', markersize=0.1);
plt.hist(np.reshape(error, -1), bins=50);
```
# Predicting Boston Housing Prices
## Using XGBoost in SageMaker (Hyperparameter Tuning)
_Deep Learning Nanodegree Program | Deployment_
---
As an introduction to using SageMaker's Low Level API for hyperparameter tuning, we will look again at the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass.
The documentation reference for the API used in this notebook is the [SageMaker Developer's Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/)
## General Outline
Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
In this notebook we will only be covering steps 1 through 5 as we are only interested in creating a tuned model and testing its performance.
```
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
```
## Step 0: Setting up the notebook
We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.
```
%matplotlib inline
import os
import time
from time import gmtime, strftime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
import sklearn.model_selection
```
In addition to the modules above, we need to import the various bits of SageMaker that we will be using.
```
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
# This is an object that represents the SageMaker session that we are currently operating in. This
# object contains some useful information that we will need to access later such as our region.
session = sagemaker.Session()
# This is an object that represents the IAM role that we are currently assigned. When we construct
# and launch the training job later we will need to tell it what IAM role it should have. Since our
# use case is relatively simple we will simply assign the training job the role we currently have.
role = get_execution_role()
```
## Step 1: Downloading the data
Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.
```
boston = load_boston()
```
## Step 2: Preparing and splitting the data
Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.
```
# First we package up the input data and the target variable (the median value) as pandas dataframes. This
# will make saving the data to a file a little easier later on.
X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)
Y_bos_pd = pd.DataFrame(boston.target)
# We split the dataset into 2/3 training and 1/3 testing sets.
X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)
# Then we split the training set further into 2/3 training and 1/3 validation sets.
X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)
```
## Step 3: Uploading the data files to S3
When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details.
### Save the data locally
First we need to create the test, train and validation csv files which we will then upload to S3.
**My Comment**:
To solve the "no space left" issue: remove `-sagemaker-deployment/cache/sentiment_analysis`.
```
# This is our local data directory. We need to make sure that it exists.
data_dir = '../data/boston'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
# We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header
# information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and
# validation data, it is assumed that the first entry in each row is the target variable.
X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Upload to S3
Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.
```
prefix = 'boston-xgboost-tuning-LL'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
```
## Step 4: Train and construct the XGBoost model
Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. Unlike in the previous notebooks, instead of training a single model, we will use SageMaker's hyperparameter tuning functionality to train multiple models and use the one that performs the best on the validation set.
### Set up the training job
First, we will set up a training job for our model. This is very similar to the way in which we constructed the training job in previous notebooks. Essentially this describes the *base* training job from which SageMaker will create refinements by changing some hyperparameters during the hyperparameter tuning job.
```
# We will need to know the name of the container that we want to use for training. SageMaker provides
# a nice utility method to construct this for us.
container = get_image_uri(session.boto_region_name, 'xgboost')
# We now specify the parameters we wish to use for our training job
training_params = {}
# We need to specify the permissions that this training job will have. For our purposes we can use
# the same permissions that our current SageMaker session has.
training_params['RoleArn'] = role
# Here we describe the algorithm we wish to use. The most important part is the container which
# contains the training code.
training_params['AlgorithmSpecification'] = {
"TrainingImage": container,
"TrainingInputMode": "File"
}
# We also need to say where we would like the resulting model artifacts stored.
training_params['OutputDataConfig'] = {
"S3OutputPath": "s3://" + session.default_bucket() + "/" + prefix + "/output"
}
# We also need to set some parameters for the training job itself. Namely we need to describe what sort of
# compute instance we wish to use along with a stopping condition to handle the case that there is
# some sort of error and the training script doesn't terminate.
training_params['ResourceConfig'] = {
"InstanceCount": 1,
"InstanceType": "ml.m4.xlarge",
"VolumeSizeInGB": 5
}
training_params['StoppingCondition'] = {
"MaxRuntimeInSeconds": 86400
}
# Next we set the algorithm specific hyperparameters. In this case, since we are setting up
# a training job which will serve as the base training job for the eventual hyperparameter
# tuning job, we only specify the _static_ hyperparameters. That is, the hyperparameters that
# we do _not_ want SageMaker to change.
training_params['StaticHyperParameters'] = {
"gamma": "4",
"subsample": "0.8",
"objective": "reg:linear",
"early_stopping_rounds": "10",
"num_round": "200"
}
# Now we need to tell SageMaker where the data should be retrieved from.
training_params['InputDataConfig'] = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": train_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": val_location,
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "csv",
"CompressionType": "None"
}
]
```
### Set up the tuning job
Now that the *base* training job has been set up, we can describe the tuning job that we would like SageMaker to perform. In particular, like in the high level notebook, we will specify which hyperparameters we wish SageMaker to change and what range of values they may take on.
In addition, we specify the *number* of models to construct (`max_jobs`) and the number of those that can be trained in parallel (`max_parallel_jobs`). In the cell below we have chosen to train `20` models, of which we ask that SageMaker train `3` at a time in parallel. Note that this results in a total of `20` training jobs being executed which can take some time, in this case almost a half hour. With more complicated models this can take even longer so be aware!
```
# We need to construct a dictionary which specifies the tuning job we want SageMaker to perform
tuning_job_config = {
# First we specify which hyperparameters we want SageMaker to be able to vary,
# and we specify the type and range of the hyperparameters.
"ParameterRanges": {
"CategoricalParameterRanges": [],
"ContinuousParameterRanges": [
{
"MaxValue": "0.5",
"MinValue": "0.05",
"Name": "eta"
},
],
"IntegerParameterRanges": [
{
"MaxValue": "12",
"MinValue": "3",
"Name": "max_depth"
},
{
"MaxValue": "8",
"MinValue": "2",
"Name": "min_child_weight"
}
]},
# We also need to specify how many models should be fit and how many can be fit in parallel
"ResourceLimits": {
"MaxNumberOfTrainingJobs": 20,
"MaxParallelTrainingJobs": 3
},
# Here we specify how SageMaker should update the hyperparameters as new models are fit
"Strategy": "Bayesian",
# And lastly we need to specify how we'd like to determine which models are better or worse
"HyperParameterTuningJobObjective": {
"MetricName": "validation:rmse",
"Type": "Minimize"
}
}
```
### Execute the tuning job
Now that we've built the data structures that describe the tuning job we want SageMaker to execute, it is time to actually start the job.
```
# First we need to choose a name for the job. This is useful for if we want to recall information about our
# tuning job at a later date. Note that SageMaker requires a tuning job name and that the name needs to
# be unique, which we accomplish by appending the current timestamp.
tuning_job_name = "tuning-job" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# And now we ask SageMaker to create (and execute) the training job
session.sagemaker_client.create_hyper_parameter_tuning_job(HyperParameterTuningJobName = tuning_job_name,
HyperParameterTuningJobConfig = tuning_job_config,
TrainingJobDefinition = training_params)
```
The tuning job has now been created by SageMaker and is currently running. Since we need the output of the tuning job, we may wish to wait until it has finished. We can do so by asking SageMaker to output the logs generated by the tuning job and continue doing so until the job terminates.
```
session.wait_for_tuning_job(tuning_job_name)
```
### Build the model
Now that the tuning job has finished, SageMaker has fit a number of models, the results of which are stored in a data structure which we can access using the name of the tuning job.
```
tuning_job_info = session.sagemaker_client.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)
```
Among the pieces of information included in the `tuning_job_info` object is the name of the training job which performed best out of all of the models that SageMaker fit to our data. Using this training job name we can get access to the resulting model artifacts, from which we can construct a model.
```
# We begin by asking SageMaker to describe for us the results of the best training job. The data
# structure returned contains a lot more information than we currently need, try checking it out
# yourself in more detail.
best_training_job_name = tuning_job_info['BestTrainingJob']['TrainingJobName']
training_job_info = session.sagemaker_client.describe_training_job(TrainingJobName=best_training_job_name)
model_artifacts = training_job_info['ModelArtifacts']['S3ModelArtifacts']
# Just like when we created a training job, the model name must be unique
model_name = best_training_job_name + "-model"
# We also need to tell SageMaker which container should be used for inference and where it should
# retrieve the model artifacts from. In our case, the xgboost container that we used for training
# can also be used for inference.
primary_container = {
"Image": container,
"ModelDataUrl": model_artifacts
}
# And lastly we construct the SageMaker model
model_info = session.sagemaker_client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
```
## Step 5: Testing the model
Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. In other words, we need to set up and execute a batch transform job, similar to the way that we constructed the training job earlier.
### Set up the batch transform job
Just like when we were training our model, we first need to provide some information in the form of a data structure that describes the batch transform job which we wish to execute.
We will only be using some of the options available here but to see some of the additional options please see the SageMaker documentation for [creating a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).
```
# Just like in each of the previous steps, we need to make sure to name our job and the name should be unique.
transform_job_name = 'boston-xgboost-batch-transform-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# Now we construct the data structure which will describe the batch transform job.
transform_request = \
{
"TransformJobName": transform_job_name,
# This is the name of the model that we created earlier.
"ModelName": model_name,
# This describes how many compute instances should be used at once. If you happen to be doing a very large
# batch transform job it may be worth running multiple compute instances at once.
"MaxConcurrentTransforms": 1,
# This says how big each individual request sent to the model should be, at most. One of the things that
# SageMaker does in the background is to split our data up into chunks so that each chunks stays under
# this size limit.
"MaxPayloadInMB": 6,
# Sometimes we may want to send only a single sample to our endpoint at a time, however in this case each of
# the chunks that we send should contain multiple samples of our input data.
"BatchStrategy": "MultiRecord",
# This next object describes where the output data should be stored. Some of the more advanced options which
# we don't cover here also describe how SageMaker should collect output from various batches.
"TransformOutput": {
"S3OutputPath": "s3://{}/{}/batch-bransform/".format(session.default_bucket(),prefix)
},
# Here we describe our input data. Of course, we need to tell SageMaker where on S3 our input data is stored, in
# addition we need to detail the characteristics of our input data. In particular, since SageMaker may need to
# split our data up into chunks, it needs to know how the individual samples in our data file appear. In our
# case each line is its own sample and so we set the split type to 'line'. We also need to tell SageMaker what
# type of data is being sent, in this case csv, so that it can properly serialize the data.
"TransformInput": {
"ContentType": "text/csv",
"SplitType": "Line",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": test_location,
}
}
},
# And lastly we tell SageMaker what sort of compute instance we would like it to use.
"TransformResources": {
"InstanceType": "ml.m4.xlarge",
"InstanceCount": 1
}
}
```
### Execute the batch transform job
Now that we have created the request data structure, it is time to ask SageMaker to set up and run our batch transform job. Just like in the previous steps, SageMaker performs these tasks in the background, so if we want to wait for the transform job to terminate (and ensure the job is progressing) we can ask SageMaker to wait for the transform job to complete.
```
transform_response = session.sagemaker_client.create_transform_job(**transform_request)
transform_desc = session.wait_for_transform_job(transform_job_name)
```
### Analyze the results
Now that the transform job has completed, the results are stored on S3 as we requested. Since we'd like to do a bit of analysis in the notebook we can use some notebook magic to copy the resulting output from S3 and save it locally.
```
transform_output = "s3://{}/{}/batch-transform/".format(session.default_bucket(),prefix)
!aws s3 cp --recursive $transform_output $data_dir
```
To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement.
```
Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
plt.scatter(Y_test, Y_pred)
plt.xlabel("Median Price")
plt.ylabel("Predicted Price")
plt.title("Median Price vs Predicted Price")
```
## Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
```
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
```
```
from DSGRN import *  # Database, Network, ParameterGraph are used below without a module prefix
import cProfile
import sys
sys.setrecursionlimit(10**9)
sys.path.insert(0,'/home/elizabeth/Desktop/GIT/dsgrn_acdc/src')
import PhenotypeGraphviz
from PhenotypeGraphFun import *  # assumed source of the get_paramslist* and get_phenotype_graph_* functions used below
import CondensationGraph_iter
database = Database("/home/elizabeth/Desktop/ACDC/ACDC_Fullconn.db")
network = Network("/home/elizabeth/Desktop/ACDC/ACDC_Fullconn")
parameter_graph = ParameterGraph(network)
print(parameter_graph.size())
AP35 = {"Hb":[0,2], "Gt":2, "Kr":0, "Kni":0}
AP37 = {"Hb":2, "Gt":[0,2], "Kr":0, "Kni":0}
AP40 = {"Hb":2, "Gt":0, "Kr":[0,2], "Kni":0}
AP45 = {"Hb":[0,2], "Gt":0, "Kr":2, "Kni":0}
AP47 = {"Hb":[0,2], "Gt":0, "Kr":2, "Kni":0}
AP51 = {"Hb":0, "Gt":0, "Kr":2, "Kni":[0,2]}
AP57 = {"Hb":0, "Gt":0, "Kr":[0,2], "Kni":2}
AP61 = {"Hb":0, "Gt":0, "Kr":[0,2], "Kni":2}
AP63 = {"Hb":0, "Gt":[0,2], "Kr":0, "Kni":2}
AP67 = {"Hb":0, "Gt":2, "Kr":0, "Kni":[0,2]}
E = [[AP37], [AP40]]
paramslist = get_paramslist_optimized(database, E, '=')
```
We can see in the next two cells that the parallelized code cut about 16 minutes off the computation time here. I haven't run it on anything larger, so I don't know how it scales yet. There is another example past this as well.
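For orientation only, the kind of multiprocessing fan-out that such a parallel construction typically uses looks like the sketch below; the worker function and the flat parameter list are placeholders, not the actual `get_phenotype_graph_parallel` implementation from `dsgrn_acdc`.
```
# Illustrative sketch only (not the dsgrn_acdc code): spread an independent
# per-parameter computation over a pool of worker processes.
from multiprocessing import Pool

def _work(param):
    # placeholder for the per-parameter computation that builds graph edges
    return param, []

def build_in_parallel(params, num_processes=8):
    with Pool(num_processes) as pool:
        results = pool.map(_work, params)  # fan out and gather results in order
    return dict(results)
```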
```
cProfile.runctx('get_phenotype_graph_optimized(database, paramslist)', globals(), {'database':database,'paramslist':paramslist})
cProfile.runctx('get_phenotype_graph_parallel(database, paramslist, num_processes)', globals(), {'database':database,'paramslist':paramslist, 'num_processes':8})
database = Database("/home/elizabeth/Desktop/ACDC/ACDC_FullconnE.db")
network = Network("/home/elizabeth/Desktop/ACDC/ACDC_FullconnE")
parameter_graph = ParameterGraph(network)
print(parameter_graph.size())
AP35 = {"Hb":[0,2], "Gt":2, "Kr":0, "Kni":0}
AP37 = {"Hb":2, "Gt":[0,1], "Kr":0, "Kni":0}
AP40 = {"Hb":2, "Gt":1, "Kr":[0,1], "Kni":0} #edit
AP45 = {"Hb":[0,1], "Gt":1, "Kr":2, "Kni":0} #edit
AP47 = {"Hb":[0,1], "Gt":0, "Kr":2, "Kni":0}
AP51 = {"Hb":1, "Gt":0, "Kr":2, "Kni":[0,1]} #edit
AP57 = {"Hb":1, "Gt":0, "Kr":[0,1], "Kni":2} #edit
AP61 = {"Hb":0, "Gt":0, "Kr":[0,1], "Kni":2}
AP63 = {"Hb":0, "Gt":[0,1], "Kr":1, "Kni":2} #edit
AP67 = {"Hb":0, "Gt":2, "Kr":1, "Kni":[0,1]} #edit
E = [[AP37], [AP40], [AP45], [AP47], [AP51], [AP57], [AP61], [AP63], [AP67]]
paramslist = get_paramslist(database, E, '<')
```
This one is smaller than the first example, but I feel shows promising results in speeding up computation time.
```
cProfile.runctx('get_phenotype_graph_optimized(database, paramslist)', globals(), {'database':database,'paramslist':paramslist})
cProfile.runctx('get_phenotype_graph_parallel(database, paramslist, num_processes)', globals(), {'database':database,'paramslist':paramslist, 'num_processes':8})
```
```
from FC_RNN_Evaluater.FC_RNN_Evaluater import *
from FC_RNN_Evaluater.Stateful_FC_RNN_Configuration import *
from FC_RNN_Evaluater.runFC_RNN_Experiment import *
from keras import Model
from keras.layers import TimeDistributed, LSTM, Dense, Dropout, Flatten, Input
def getFinalModel(timesteps = timesteps, lstm_nodes = lstm_nodes, lstm_dropout = lstm_dropout,
lstm_recurrent_dropout = lstm_recurrent_dropout, num_outputs = num_outputs,
lr = learning_rate, include_vgg_top = include_vgg_top, use_vgg16 = use_vgg16):
if use_vgg16:
modelID = 'VGG16'
inp = (224, 224, 3)
modelPackage = vgg16
margins = (8, 8, 48, 48)
Target_Frame_Shape = (240, 320, 3)
cnn_model = vgg16.VGG16(weights='imagenet', input_shape = inp, include_top=include_vgg_top)
def preprocess_input(imagePath): return preprocess_input_for_model(imagePath, Target_Frame_Shape, margins, modelPackage)
if include_vgg_top:
modelID = modelID + '_inc_top'
cnn_model.layers.pop()
cnn_model.outputs = [cnn_model.layers[-1].output]
cnn_model.output_layers = [cnn_model.layers[-1]]
cnn_model.layers[-1].outbound_nodes = []
x = cnn_model.layers[-1].output
x = Dense(1024, activation='relu', name='predictions')(x)
model1 = Model(input=cnn_model.input,output=x)
cnn_model.summary()
rnn = Sequential()
rnn.add(TimeDistributed(model1, batch_input_shape=(train_batch_size, timesteps, inp[0], inp[1], inp[2]), name = 'tdCNN'))
if not include_vgg_top:
rnn.add(TimeDistributed(Flatten()))
"""
cnn_model.pop()
rnn.add(TimeDistributed(Dropout(0.25), name = 'dropout025_conv'))
rnn.add(TimeDistributed(Dense(1024), name = 'fc1024')) # , activation='relu', activation='relu', kernel_regularizer=regularizers.l2(0.001)
rnn.add(TimeDistributed(Dropout(0.25), name = 'dropout025'))
"""
rnn.add(TimeDistributed(Dense(num_outputs), name = 'fc3'))
rnn.add(LSTM(lstm_nodes, dropout=lstm_dropout, recurrent_dropout=lstm_recurrent_dropout, stateful=True))#, activation='relu'
#model = Model(inputs=cnn_model.input, outputs=rnn(TimeDistributed(cnn_model.output)))
modelID = modelID + '_seqLen%d' % timesteps
modelID = modelID + '_stateful'
modelID = modelID + '_lstm%d' % lstm_nodes
rnn.add(Dense(num_outputs))
modelID = modelID + '_output%d' % num_outputs
modelID = modelID + '_BatchSize%d' % train_batch_size
modelID = modelID + '_inEpochs%d' % in_epochs
modelID = modelID + '_outEpochs%d' % out_epochs
for layer in rnn.layers[:1]:
layer.trainable = False
adam = Adam(lr=lr)
modelID = modelID + '_AdamOpt_lr-%f' % lr
rnn.compile(optimizer=adam, loss='mean_absolute_error') #'mean_squared_error', metrics=['mae'])#
modelID = modelID + '_%s' % now()[:-7].replace(' ', '_').replace(':', '-')
return cnn_model, rnn, modelID, preprocess_input
vgg_model, full_model, modelID, preprocess_input = getFinalModel(timesteps = timesteps, lstm_nodes = lstm_nodes, lstm_dropout = lstm_dropout, lstm_recurrent_dropout = lstm_recurrent_dropout,
num_outputs = num_outputs, lr = learning_rate, include_vgg_top = include_vgg_top)
full_model = trainCNN_LSTM(full_model, modelID, out_epochs, trainingSubjects, timesteps, output_begin, num_outputs,
batch_size = train_batch_size, in_epochs = in_epochs, stateful = STATEFUL, preprocess_input = preprocess_input)
def unscaleEstimations(test_labels, predictions, scalers, output_begin, num_outputs):
"""* label_rescaling_factor * label_rescaling_factor
"""
sclrs = [scalers[0][output_begin:output_begin+num_outputs], scalers[1][output_begin:output_begin+num_outputs]]
test_labels = unscaleAnnoByScalers(test_labels, sclrs)
predictions = unscaleAnnoByScalers(predictions, sclrs)
return test_labels, predictions
def evaluateSubject(full_model, subject, test_gen, test_labels, timesteps, output_begin, num_outputs, angles, batch_size, stateful = False, record = False):
if num_outputs == 1: angles = ['Yaw']
printLog('For the Subject %d (%s):' % (subject, BIWI_Subject_IDs[subject]), record = record)
predictions = full_model.predict_generator(test_gen, steps = int(len(test_labels)/batch_size), verbose = 1)
#kerasEval = full_model.evaluate_generator(test_gen)
test_labels, predictions = unscaleEstimations(test_labels, predictions, BIWI_Lebel_Scalers, output_begin, num_outputs)
full_model.reset_states()
outputs = []
for i in range(num_outputs):
if stateful:
start_index = (test_labels.shape[0] % batch_size) if batch_size > 1 else 0
matrix = numpy.concatenate((test_labels[start_index:, i:i+1], predictions[:, i:i+1]), axis=1)
differences = (test_labels[start_index:, i:i+1] - predictions[:, i:i+1])
else:
print(test_labels[:, i:i+1].shape, predictions[:, i:i+1].shape)
matrix = numpy.concatenate((test_labels[:, i:i+1], predictions[:, i:i+1]), axis=1)
differences = (test_labels[:, i:i+1] - predictions[:, i:i+1])
absolute_mean_error = np.abs(differences).mean()
printLog("\tThe absolute mean error on %s angle estimation: %.2f Degree" % (angles[i], absolute_mean_error), record = record)
outputs.append((matrix, absolute_mean_error))
return full_model, outputs
def evaluateCNN_LSTM(full_model, label_rescaling_factor, testSubjects, timesteps, output_begin,
num_outputs, batch_size, angles, stateful = False, record = False, preprocess_input = None):
if num_outputs == 1: angles = ['Yaw']
test_generators, test_labelSets = getTestBiwiForImageModel(testSubjects, timesteps, False, output_begin, num_outputs,
batch_size = batch_size, stateful = stateful, record = record, preprocess_input = preprocess_input)
results = []
for subject, test_gen, test_labels in zip(testSubjects, test_generators, test_labelSets):
full_model, outputs = evaluateSubject(full_model, subject, test_gen, test_labels, timesteps, output_begin, num_outputs, angles, batch_size = batch_size, stateful = stateful, record = record)
results.append((subject, outputs))
means = evaluateAverage(results, angles, num_outputs, record = record)
return full_model, means, results
full_model, means, results = evaluateCNN_LSTM(full_model, label_rescaling_factor = label_rescaling_factor,
testSubjects = testSubjects, timesteps = timesteps, output_begin = output_begin,
num_outputs = num_outputs, batch_size = test_batch_size, angles = angles, stateful = STATEFUL, preprocess_input = preprocess_input)
test_generators, test_labelSets = getTestBiwiForImageModel(testSubjects, timesteps, False, output_begin, num_outputs,
batch_size = 1, stateful = True, preprocess_input = preprocess_input)
test_gen, test_labels = test_generators[0], test_labelSets[0]
test_labels
```
```
import csv
from io import StringIO
with open('/Users/yetongxue/Downloads/网站管理.csv') as f:
    reader = csv.reader(f, delimiter=',')
    for index, row in enumerate(reader):
        if index == 0:
            continue
        print(row)
import tldextract
result = tldextract.extract('http://*.sometime')
result
from queue import PriorityQueue
import numpy as np
q = PriorityQueue(5)
arr = np.random.randint(1, 20, size=5)
arr
a = [[1,2], [3,4]]
a.reverse()
a
bin(int('2000000000000000004008020', 16))
len('10000000000000000000000000000000000000000000000000000000000000000000000100000000001000000000100000')
bin(int('00000000000004008020', 16))
len('100000000001000000000100000')
root = {}
node = root
for char in 'qwerasdf':
    node = node.setdefault(char, {})
node['#'] = '#'
root
import datetime
t = 1531732030883 * 0.001
datetime.datetime.fromtimestamp(1531732030883 * 0.001)
today
import collections
dic = collections.OrderedDict()
dic[1] = 1
dic[2] = 2
dic
dic.popitem(last=False)
import datetime
today = datetime.datetime.now().date()
update_time = datetime.datetime.strptime('2020-08-07 23:20:10', '%Y-%m-%d %H:%M:%S').date()
if today - update_time > datetime.timedelta(days=7):
    print(1)
(today - update_time).days
'1'.strip().startswith('=')
def check_row(row):
    data = []
    for i in row:
        i = i.strip()
        while i.startswith('='):
            i = i[1:]
        data.append(i)
    return data
check_row(['12', ' ==12', '1=1'])
import re
res = re.match('^=+\s.', '=ab')
res
cc_ai = {
'normal': 'JS',
'low': 'JS',
'middle': 'JS',
'high': 'JSPAGE'
}
cc_ai
import datetime
start = datetime.datetime.now()
end = datetime.datetime.strptime('2020-09-24T07:14:04.252394', '%Y-%m-%dT%H:%M:%S.%f')
datetime.datetime.strptime(str(start.date()), '%Y-%m-%d') - datetime.timedelta(days=7)
isinstance(datetime.datetime.now(), datetime.datetime)
''.join('\t\ra-_sdf\n a sdf '.split())
SITE_MODULE = [
'app_ipv6', 'site_shield', 'subdomain_limit', 'custom_page',
'app_extra', 'service_charge', 'app_ipv6_defend', 'privacy_shield',
'app_ipv6_custom', 'app_subdomain_limit', 'app_ip_limit', 'app_bandwidth',
'app_subuser', 'batch', 'service_charge', 'cn2_line', 'site_shield_monitor',
'app_httpdns_1k', 'app_domain_limit'
]
SITE_MODULE
''.join('a, \rd'.replace(',', ',').split())
from docxtpl import DocxTemplate
path = '/Users/yetongxue/Downloads/test1.docx'
save_path = '/Users/yetongxue/Downloads/test1_save.docx'
template = DocxTemplate(path)
data = {'sites': [{'domain':'asdf'},{'domain':'asdf2323'}]}
template.render(context=data)
template.save(save_path)
import json
json.dumps([{'chargeType': 'plan', 'number': 1, 'expireTime': '2021-04-23', 'chargeId': '604d75cc261b3b0011a1a496'}, {'chargeType': 'additionPackage', 'number': 1, 'expireTime': '2021-04-23', 'chargeId': '604d75cd261b3b0011a1a498'}])
import commands  # Python 2 only; use subprocess in Python 3
status, output = commands.getstatusoutput('date')
# e.g. Thu Jun 30 19:26:21 CST 2016
status, output
import os
res = os.popen('dig -x 2001:4d0:9700:903:198:9:3:30').read()
if 'intra.jiasule.com' in res:
    print('ok')
res
```
### Cell Painting morphological (CP) and L1000 gene expression (GE) profiles for the following datasets:
- **CDRP**-BBBC047-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 30,430 unique compounds for CP dataset, median number of replicates --> 4
* $\bf{GE}$ There are 21,782 unique compounds for GE dataset, median number of replicates --> 3
* 20,131 compounds are present in both datasets.
- **CDRP-bio**-BBBC036-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 2,242 unique compounds for CP dataset, median number of replicates --> 8
* $\bf{GE}$ There are 1,917 unique compounds for GE dataset, median number of replicates --> 2
* 1916 compounds are present in both datasets.
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
* $\bf{CP}$ There are 593 unique alleles for CP dataset, median number of replicates --> 8
* $\bf{GE}$ There are 529 unique alleles for GE dataset, median number of replicates --> 8
* 525 alleles are present in both datasets.
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 323 unique alleles for CP dataset, median number of replicates --> 5
* $\bf{GE}$ There are 327 unique alleles for GE dataset, median number of replicates --> 2
* 150 alleles are present in both datasets.
- **LINCS**-Pilot1-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 1570 unique compounds across 7 doses for CP dataset, median number of replicates --> 5
* $\bf{GE}$ There are 1402 unique compounds for GE dataset, median number of replicates --> 3
* $N_{p/d}$: 6984 compounds are present in both datasets.
--------------------------------------------
#### Link to the processed profiles:
https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP
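As a quick sanity check on the processed profiles (assuming the bucket layout used later in this notebook, i.e. `Rosetta-GE-CP/preprocessed_data/<dataset>/<modality>/replicate_level_*.csv.gz`), a single replicate-level table can be loaded directly with pandas:
```
import pandas as pd

# assumed key layout; see the aws s3 sync command further down in this notebook
base = 'https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP/preprocessed_data'
l1k = pd.read_csv(base + '/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz')
print(l1k.shape)
print(l1k['pert_id'].nunique(), 'unique perturbation IDs')
```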
```
%matplotlib notebook
%load_ext autoreload
%autoreload 2
import numpy as np
import scipy.spatial
import pandas as pd
import sklearn.decomposition
import matplotlib.pyplot as plt
import seaborn as sns
import os
from cmapPy.pandasGEXpress.parse import parse
from utils.replicateCorrs import replicateCorrs
from utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile,saveDF_to_CSV_GZ_no_timestamp
from importlib import reload
from utils.normalize_funcs import standardize_per_catX
# sns.set_style("whitegrid")
# np.__version__
pd.__version__
```
### Input / ouput files:
- **CDRPBIO**-BBBC047-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input: .mat files that are generated using https://github.com/broadinstitute/2014_wawer_pnas
* Output:
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input:
* Output:
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input: https://data.broadinstitute.org/icmap/custom/TA/brew/pc/TA.OE005_U2OS_72H/
* Output:
### Reformat Cell-Painting Data Sets
- CDRP and TA-ORF are in /storage/data/marziehhaghighi/Rosetta/raw-profiles/
- LUAD is already processed by Juan; the source files are at /storage/luad/profiles_cp, in case you want to reformat.
```
fileName='RepCorrDF'
### dirs on gpu cluster
# rawProf_dir='/storage/data/marziehhaghighi/Rosetta/raw-profiles/'
# procProf_dir='/home/marziehhaghighi/workspace_rosetta/workspace/'
### dirs on ec2
rawProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/'
# procProf_dir='./'
procProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/'
# s3://imaging-platform/projects/2018_04_20_Rosetta/workspace/preprocessed_data
# aws s3 sync preprocessed_data s3://cellpainting-datasets/Rosetta-GE-CP/preprocessed_data --profile jumpcpuser
filename='../../results/RepCor/'+fileName+'.xlsx'
# ls ../../
# https://cellpainting-datasets.s3.us-east-1.amazonaws.com/
```
# CDRP-BBBC047-Bray
### GE - L1000 - CDRP
```
os.listdir(rawProf_dir+'/l1000_CDRP/')
cdrp_dataDir=rawProf_dir+'/l1000_CDRP/'
cpd_info = pd.read_csv(cdrp_dataDir+"/compounds.txt", sep="\t", dtype=str)
cpd_info.columns
from scipy.io import loadmat
x = loadmat(cdrp_dataDir+'cdrp.all.prof.mat')
k1=x['metaWell']['pert_id'][0][0]
k2=x['metaGen']['AFFX_PROBE_ID'][0][0]
k3=x['metaWell']['pert_dose'][0][0]
k4=x['metaWell']['det_plate'][0][0]
# pert_dose
# x['metaWell']['pert_id'][0][0][0][0][0]
pertID = []
probID=[]
for r in range(len(k1)):
v = k1[r][0][0]
pertID.append(v)
# probID.append(k2[r][0][0])
for r in range(len(k2)):
probID.append(k2[r][0][0])
pert_dose=[]
det_plate=[]
for r in range(len(k3)):
pert_dose.append(k3[r][0])
det_plate.append(k4[r][0][0])
dataArray=x['pclfc'];
cdrp_l1k_rep = pd.DataFrame(data=dataArray,columns=probID)
cdrp_l1k_rep['pert_id']=pertID
cdrp_l1k_rep['pert_dose']=pert_dose
cdrp_l1k_rep['det_plate']=det_plate
cdrp_l1k_rep['BROAD_CPD_ID']=cdrp_l1k_rep['pert_id'].str[:13]
cdrp_l1k_rep2=pd.merge(cdrp_l1k_rep, cpd_info, how='left',on=['BROAD_CPD_ID'])
l1k_features_cdrp=cdrp_l1k_rep2.columns[cdrp_l1k_rep2.columns.str.contains("_at")]
cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['BROAD_CPD_ID']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)
cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_id']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)
# cdrp_l1k_df.head()
print(cpd_info.shape,cdrp_l1k_rep.shape,cdrp_l1k_rep2.shape)
cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['pert_id_dose'].replace('DMSO_-666.0', 'DMSO')
cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_sample_dose'].replace('DMSO_-666.0', 'DMSO')
saveDF_to_CSV_GZ_no_timestamp(cdrp_l1k_rep2,procProf_dir+'preprocessed_data/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz');
# cdrp_l1k_rep2.head()
# cpd_info
```
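As an aside, the replicate statistics quoted at the top of this notebook (unique compounds and median replicate count) can be recomputed from a replicate-level table such as `cdrp_l1k_rep2`; a minimal sketch, assuming `BROAD_CPD_ID` is the column that identifies a compound:
```
# Sketch: unique compounds and median number of replicates in the CDRP L1000 table above.
rep_sizes = cdrp_l1k_rep2.groupby('BROAD_CPD_ID').size()
print(rep_sizes.shape[0], rep_sizes.median())
```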
### CP - CDRP
```
profileType=['_augmented','_normalized']
bioactiveFlag="";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
for pt in profileType[1:2]:
repLevelCDRP0=[]
for p in plates:
# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive
repLevelCDRP = pd.concat(repLevelCDRP0)
metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')
# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
# ,
if bioactiveFlag:
dataFolderName='CDRPBIO-BBBC036-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
else:
dataFolderName='CDRP-BBBC047-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)
dataFolderName='CDRP-BBBC047-Bray'
cp_feats=repLevelCDRP.columns[repLevelCDRP.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
features_to_remove =find_correlation(repLevelCDRP2[cp_feats], threshold=0.9, remove_negative=False)
repLevelCDRP2_var_sel=repLevelCDRP2.drop(columns=features_to_remove)
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2_var_sel,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+'_normalized_variable_selected'+'.csv.gz')
# features_to_remove
# features_to_remove
# features_to_remove
repLevelCDRP2['Nuclei_Texture_Variance_RNA_3_0']
# repLevelCDRP2.shape
# cp_scaled.columns[cp_scaled.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
```
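The `find_correlation` helper called above is not defined in this notebook and is assumed to come from a utils module; for readers who want to reproduce the step, here is a minimal sketch of a correlation-threshold filter with the same intent (the exact signature and the `remove_negative` behavior of the original are assumptions):
```
# Sketch only: return one feature from each pair whose absolute pairwise
# correlation exceeds the threshold, so those columns can be dropped.
import numpy as np

def find_correlation_sketch(df, threshold=0.9):
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return [c for c in upper.columns if (upper[c] > threshold).any()]
```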
# CDRP-bio-BBBC036-Bray
### GE - L1000 - CDRPBIO
```
bioactiveFlag="-bioactive";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
# plates
cdrp_l1k_rep2_bioactive=cdrp_l1k_rep2[cdrp_l1k_rep2["pert_sample_dose"].isin(repLevelCDRP2.Metadata_Sample_Dose.unique().tolist())]
cdrp_l1k_rep.det_plate
```
### CP - CDRPBIO
```
profileType=['_augmented','_normalized','_normalized_variable_selected']
bioactiveFlag="-bioactive";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
for pt in profileType:
repLevelCDRP0=[]
for p in plates:
# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive
repLevelCDRP = pd.concat(repLevelCDRP0)
metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')
# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
# ,
if bioactiveFlag:
dataFolderName='CDRPBIO-BBBC036-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
else:
dataFolderName='CDRP-BBBC047-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)
```
# LUAD-BBBC041-Caicedo
### GE - L1000 - LUAD
```
os.listdir(rawProf_dir+'/l1000_LUAD/input/')
os.listdir(rawProf_dir+'/l1000_LUAD/output/')
luad_dataDir=rawProf_dir+'/l1000_LUAD/'
luad_info1 = pd.read_csv(luad_dataDir+"/input/TA.OE014_A549_96H.map", sep="\t", dtype=str)
luad_info2 = pd.read_csv(luad_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
luad_info=pd.concat([luad_info1, luad_info2], ignore_index=True)
luad_info.head()
luad_l1k_df = parse(luad_dataDir+"/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx").data_df.T.reset_index()
luad_l1k_df=luad_l1k_df.rename(columns={"cid":"id"})
# cdrp_l1k_df['XX']=cdrp_l1k_df['cid'].str[0]
# cdrp_l1k_df['BROAD_CPD_ID']=cdrp_l1k_df['cid'].str[2:15]
luad_l1k_df2=pd.merge(luad_l1k_df, luad_info, how='inner',on=['id'])
luad_l1k_df2=luad_l1k_df2.rename(columns={"x_mutation_status":"allele"})
l1k_features=luad_l1k_df2.columns[luad_l1k_df2.columns.str.contains("_at")]
luad_l1k_df2['allele']=luad_l1k_df2['allele'].replace('UnTrt', 'DMSO')
print(luad_info.shape,luad_l1k_df.shape,luad_l1k_df2.shape)
saveDF_to_CSV_GZ_no_timestamp(luad_l1k_df2,procProf_dir+'/preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz')
luad_l1k_df_scaled = standardize_per_catX(luad_l1k_df2,'det_plate',l1k_features.tolist());
x_l1k_luad=replicateCorrs(luad_l1k_df_scaled.reset_index(drop=True),'allele',l1k_features,1)
# x_l1k_luad=replicateCorrs(luad_l1k_df2[luad_l1k_df2['allele']!='DMSO'].reset_index(drop=True),'allele',l1k_features,1)
# saveAsNewSheetToExistingFile(filename,x_l1k_luad[2],'l1k-luad')
```
### CP - LUAD
```
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/')
for pt in profileType[1:2]:
repLevelLuad0=[]
for p in plates:
repLevelLuad0.append(pd.read_csv('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/'+p+'/'+p+pt+'.csv'))
repLevelLuad = pd.concat(repLevelLuad0)
metaLuad1=pd.read_csv(rawProf_dir+'/CP_LUAD/metadata/combined_platemaps_AHB_20150506_ssedits.csv')
metaLuad1=metaLuad1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
metaLuad1['Metadata_Well']=metaLuad1['Metadata_Well'].str.lower()
# metaLuad2=pd.read_csv('~/workspace_rosetta/workspace/raw_profiles/CP_LUAD/metadata/barcode_platemap.csv')
# Y[Y['Metadata_Well']=='g05']['Nuclei_Texture_Variance_Mito_5_0']
repLevelLuad2=pd.merge(repLevelLuad, metaLuad1, how='inner',on=['Metadata_Plate_Map_Name','Metadata_Well'])
repLevelLuad2['x_mutation_status']=repLevelLuad2['x_mutation_status'].replace(np.nan, 'DMSO')
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# repLevelLuad2.to_csv(procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
saveDF_to_CSV_GZ_no_timestamp(repLevelLuad2,procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaLuad1.shape,repLevelLuad.shape,repLevelLuad2.shape)
pt=['_normalized']
# Read save data
repLevelLuad2=pd.read_csv('./preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
# repLevelTA.head()
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLuad2[i].isnull()).sum(axis=0)/repLevelLuad2.shape[0])>0.05]
print(cols2remove0)
repLevelLuad2=repLevelLuad2.drop(cols2remove0, axis=1);
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLuad2 = repLevelLuad2.interpolate()
repLevelLuad2 = standardize_per_catX(repLevelLuad2,'Metadata_Plate',cp_features.tolist());
df1=repLevelLuad2[~repLevelLuad2['x_mutation_status'].isnull()].reset_index(drop=True)
x_cp_luad=replicateCorrs(df1,'x_mutation_status',cp_features,1)
saveAsNewSheetToExistingFile(filename,x_cp_luad[2],'cp-luad')
```
# TA-ORF-BBBC037-Rohban
### GE - L1000
```
taorf_datadir=rawProf_dir+'/l1000_TA_ORF/'
gene_info = pd.read_csv(taorf_datadir+"TA.OE005_U2OS_72H.map.txt", sep="\t", dtype=str)
# gene_info.columns
# TA.OE005_U2OS_72H_INF_n729x22268.gctx
# TA.OE005_U2OS_72H_QNORM_n729x978.gctx
# TA.OE005_U2OS_72H_ZSPCINF_n729x22268.gctx
# TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx
taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx")
# taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_QNORM_n729x978.gctx")
taorf_l1k_df0=taorf_l1k0.data_df
taorf_l1k_df=taorf_l1k_df0.T.reset_index()
l1k_features=taorf_l1k_df.columns[taorf_l1k_df.columns.str.contains("_at")]
taorf_l1k_df=taorf_l1k_df.rename(columns={"cid":"id"})
taorf_l1k_df2=pd.merge(taorf_l1k_df, gene_info, how='inner',on=['id'])
# print(taorf_l1k_df.shape,gene_info.shape,taorf_l1k_df2.shape)
taorf_l1k_df2.head()
# x_genesymbol_mutation
taorf_l1k_df2['pert_id']=taorf_l1k_df2['pert_id'].replace('CMAP-000', 'DMSO')
# compression_opts = dict(method='zip',archive_name='out.csv')
# taorf_l1k_df2.to_csv(procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz',index=False,compression=compression_opts)
saveDF_to_CSV_GZ_no_timestamp(taorf_l1k_df2,procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz')
print(gene_info.shape,taorf_l1k_df.shape,taorf_l1k_df2.shape)
# gene_info.head()
taorf_l1k_df2.groupby(['x_genesymbol_mutation']).size().describe()
taorf_l1k_df2.groupby(['pert_id']).size().describe()
```
#### Check Replicate Correlation
```
# df1=taorf_l1k_df2[taorf_l1k_df2['pert_id']!='CMAP-000']
df1_scaled = standardize_per_catX(taorf_l1k_df2,'det_plate',l1k_features.tolist());
df1_scaled2=df1_scaled[df1_scaled['pert_id']!='DMSO']
x=replicateCorrs(df1_scaled2,'pert_id',l1k_features,1)
```
### CP - TAORF
```
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir(rawProf_dir+'TA-ORF-BBBC037-Rohban/')
for pt in profileType[0:1]:
repLevelTA0=[]
for p in plates:
repLevelTA0.append(pd.read_csv(rawProf_dir+'TA-ORF-BBBC037-Rohban/'+p+'/'+p+pt+'.csv'))
repLevelTA = pd.concat(repLevelTA0)
metaTA1=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA.csv')
metaTA2=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA_2.csv')
# metaTA2=metaTA2.rename(columns={"Metadata_broad_sample":"Metadata_broad_sample_2",'Metadata_Treatment':'Gene Allele Name'})
metaTA=pd.merge(metaTA2, metaTA1, how='left',on=['Metadata_broad_sample'])
# metaTA2=metaTA2.rename(columns={"Metadata_Treatment":"Metadata_pert_name"})
# repLevelTA2=pd.merge(repLevelTA, metaTA2, how='left',on=['Metadata_pert_name'])
repLevelTA2=pd.merge(repLevelTA, metaTA, how='left',on=['Metadata_broad_sample'])
# repLevelTA2=repLevelTA2.rename(columns={"Gene Allele Name":"Allele"})
repLevelTA2['Metadata_broad_sample']=repLevelTA2['Metadata_broad_sample'].replace(np.nan, 'DMSO')
saveDF_to_CSV_GZ_no_timestamp(repLevelTA2,procProf_dir+'/preprocessed_data/TA-ORF-BBBC037-Rohban/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaTA.shape,repLevelTA.shape,repLevelTA2.shape)
# repLevelTA.head()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelTA2[i].isnull()).sum(axis=0)/repLevelTA2.shape[0])>0.05]
print(cols2remove0)
repLevelTA2=repLevelTA2.drop(cols2remove0, axis=1);
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
repLevelTA2 = repLevelTA2.interpolate()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelTA2 = standardize_per_catX(repLevelTA2,'Metadata_Plate',cp_features.tolist());
df1=repLevelTA2[~repLevelTA2['Metadata_broad_sample'].isnull()].reset_index(drop=True)
x_taorf_cp=replicateCorrs(df1,'Metadata_broad_sample',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_taorf_cp[2],'cp-taorf')
# plates
```
# LINCS-Pilot1
### GE - L1000 - LINCS
```
os.listdir(rawProf_dir+'/l1000_LINCS/2016_04_01_a549_48hr_batch1_L1000/')
os.listdir(rawProf_dir+'/l1000_LINCS/metadata/')
data_meta_match_ls=[['level_3','level_3_q2norm_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_4W','level_4W_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_4','level_4_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_5_modz','level_5_modz_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt'],
['level_5_rank','level_5_rank_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt']]
lincs_dataDir=rawProf_dir+'/l1000_LINCS/'
lincs_pert_info = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
lincs_meta_level3 = pd.read_csv(lincs_dataDir+"/metadata/col_meta_level_3_REP.A_A549_only_n27837.txt", sep="\t", dtype=str)
# lincs_info1 = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
print(lincs_meta_level3.shape)
lincs_meta_level3.head()
# lincs_info2 = pd.read_csv(lincs_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
# lincs_info=pd.concat([lincs_info1, lincs_info2], ignore_index=True)
# lincs_info.head()
# lincs_meta_level3.groupby('distil_id').size()
lincs_meta_level3['distil_id'].unique().shape
# lincs_meta_level3.columns.tolist()
# lincs_meta_level3.pert_id
ls /home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/preprocessed_data/LINCS-Pilot1/CellPainting
# procProf_dir+'preprocessed_data/LINCS-Pilot1/'
procProf_dir
for el in data_meta_match_ls:
lincs_l1k_df=parse(lincs_dataDir+"/2016_04_01_a549_48hr_batch1_L1000/"+el[1]).data_df.T.reset_index()
lincs_meta0 = pd.read_csv(lincs_dataDir+"/metadata/"+el[2], sep="\t", dtype=str)
lincs_meta=pd.merge(lincs_meta0, lincs_pert_info, how='left',on=['pert_id'])
lincs_meta=lincs_meta.rename(columns={"distil_id":"cid"})
lincs_l1k_df2=pd.merge(lincs_l1k_df, lincs_meta, how='inner',on=['cid'])
lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id']+'_'+lincs_l1k_df2['nearest_dose'].astype(str)
lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id_dose'].replace('DMSO_-666', 'DMSO')
# lincs_l1k_df2.to_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz',index=False,compression='gzip')
saveDF_to_CSV_GZ_no_timestamp(lincs_l1k_df2,procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz')
# lincs_l1k_df2
lincs_l1k_rep['pert_id_dose'].unique()
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[1][0]+'.csv.gz')
# l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
# x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)
# # saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
# # lincs_l1k_rep.head()
lincs_l1k_rep.pert_id.unique().shape
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains('dose')]
lincs_l1k_rep[['pert_dose', 'pert_dose_unit', 'pert_idose', 'nearest_dose']]
lincs_l1k_rep['nearest_dose'].unique()
# lincs_l1k_rep.rna_plate.unique()
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)
saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)
saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')
saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
```
raw data
```
# set(repLevelLuad2)-set(Y1.columns)
# Y1[['Allele', 'Category', 'Clone ID', 'Gene Symbol']].head()
# repLevelLuad2[repLevelLuad2['PublicID']=='BRDN0000553807'][['Col','InsertLength','NCBIGeneID','Name','OtherDescriptions','PublicID','Row','Symbol','Transcript','Vector','pert_type','x_mutation_status']].head()
```
#### Check Replicate Correlation
### CP - LINCS
```
# Ran the following on:
# https://ec2-54-242-99-61.compute-1.amazonaws.com:5006/notebooks/workspace_nucleolar/2020_07_20_Nucleolar_Calico/1-NucleolarSizeMetrics.ipynb
# Metadata
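# recode_dose maps each measured concentration to the nearest canonical dose level
# in `doses` (or to that level's 1-based index if return_level=True); NaN maps to 0.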
def recode_dose(x, doses, return_level=False):
closest_index = np.argmin([np.abs(dose - x) for dose in doses])
if np.isnan(x):
return 0
if return_level:
return closest_index + 1
else:
return doses[closest_index]
primary_dose_mapping = [0.04, 0.12, 0.37, 1.11, 3.33, 10, 20]
metadata=pd.read_csv("/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/CP_LINCS/metadata/matadata_lincs_2.csv")
metadata['Metadata_mmoles_per_liter']=metadata.mmoles_per_liter.values.round(2)
metadata=metadata.rename(columns={"Assay_Plate_Barcode": "Metadata_Plate",'broad_sample':'Metadata_broad_sample','well_position':'Metadata_Well'})
lincs_submod_root_dir="/home/ubuntu/datasetsbucket/lincs-cell-painting/"
profileType=['_augmented','_normalized','_normalized_dmso',\
'_normalized_feature_select','_normalized_feature_select_dmso']
# profileType=['_normalized']
# plates=metadata.Assay_Plate_Barcode.unique().tolist()
plates=metadata.Metadata_Plate.unique().tolist()
for pt in profileType[4:5]:
repLevelLINCS0=[]
for p in plates:
profile_add=lincs_submod_root_dir+"/profiles/2016_04_01_a549_48hr_batch1/"+p+"/"+p+pt+".csv.gz"
if os.path.exists(profile_add):
repLevelLINCS0.append(pd.read_csv(profile_add))
repLevelLINCS = pd.concat(repLevelLINCS0)
meta_lincs1=metadata.rename(columns={"broad_sample": "Metadata_broad_sample"})
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelLINCS2=pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample","Metadata_Well","Metadata_Plate",'Metadata_mmoles_per_liter'])
repLevelLINCS2 = repLevelLINCS2.assign(Metadata_dose_recode=(repLevelLINCS2.Metadata_mmoles_per_liter.apply(
lambda x: recode_dose(x, primary_dose_mapping, return_level=False))))
repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
# repLevelLINCS2['Metadata_Sample_Dose']=repLevelLINCS2['Metadata_broad_sample']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id_dose'].replace(np.nan, 'DMSO')
# saveDF_to_CSV_GZ_no_timestamp(repLevelLINCS2,procProf_dir+'/preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(meta_lincs1.shape,repLevelLINCS.shape,repLevelLINCS2.shape)
# (8120, 15) (52223, 1810) (688699, 1825)
# repLevelLINCS
# pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample"]).shape
repLevelLINCS.shape,meta_lincs1.shape
# (8120, 15) (52223, 1238) (52223, 1253)
csv_l1k_lincs=pd.read_csv('./preprocessed_data/LINCS-Pilot1/L1000/replicate_level_l1k'+'.csv.gz')
csv_pddf=pd.read_csv('./preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
csv_l1k_lincs.head()
csv_l1k_lincs.pert_id_dose.unique()
csv_pddf.Metadata_pert_id_dose.unique()
```
#### Read saved data
```
repLevelLINCS2.groupby(['Metadata_pert_id']).size()
repLevelLINCS2.groupby(['Metadata_pert_id_dose']).size().describe()
repLevelLINCS2.Metadata_Plate.unique().shape
repLevelLINCS2['Metadata_pert_id_dose'].unique().shape
# csv_pddf['Metadata_mmoles_per_liter'].round(0).unique()
# np.sort(csv_pddf['Metadata_mmoles_per_liter'].unique())
csv_pddf.groupby(['Metadata_dose_recode']).size()#.median()
repLevelLincs2=csv_pddf.copy() # uncommented so that the cells below have repLevelLincs2 defined
import gc
cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
print(cols2remove0)
repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
print('here0')
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
del repLevelLincs2
gc.collect()
print('here0')
cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLincs3[cp_features] = repLevelLincs3[cp_features].interpolate()
print('here1')
repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
print('here1')
# df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id_dose']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id_dose.tolist()
highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
# (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id_dose'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id_dose',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')
repSizeDF
# repLevelLincs2=csv_pddf.copy()
# cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
# print(cols2remove0)
# repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
# # cp_features=list(set(cp_features)-set(cols2remove0))
# # repLevelTA2=repLevelTA2.replace('nan', np.nan)
# repLevelLincs3 = repLevelLincs3.interpolate()
# repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
# cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id.tolist()
# highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
# (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')
# x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)
# highRepComp[-1]
saveAsNewSheetToExistingFile(filename,x[2],'cp-lincs')
# repLevelLincs3.Metadata_Plate
repLevelLincs3.head()
# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595")][['Metadata_Plate','Metadata_Well']].drop_duplicates()
# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595") &
# (csv_pddf['Metadata_Plate']=='SQ00015196') & (csv_pddf['Metadata_Well']=="B12")][csv_pddf.columns[1820:]].drop_duplicates()
# def standardize_per_catX(df,column_name):
column_name='Metadata_Plate'
repLevelLincs_scaled_perPlate=repLevelLincs3.copy()
repLevelLincs_scaled_perPlate[cp_features.tolist()]=repLevelLincs3[cp_features.tolist()+[column_name]].groupby(column_name).transform(lambda x: (x - x.mean()) / x.std()).values
# def standardize_per_catX(df,column_name):
# # column_name='Metadata_Plate'
# cp_features=df.columns[df.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# df_scaled_perPlate=df.copy()
# df_scaled_perPlate[cp_features.tolist()]=\
# df[cp_features.tolist()+[column_name]].groupby(column_name)\
# .transform(lambda x: (x - x.mean()) / x.std()).values
# return df_scaled_perPlate
df0=repLevelLincs_scaled_perPlate[(repLevelLincs_scaled_perPlate['Metadata_Sample_Dose'].isin(highRepComp))]
x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)
```
# Introduction
Visualization of statistics that support the claims of the Black Lives Matter movement, using data from 2015 and 2016.
Data source: https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/about-the-counted
Idea from BuzzFeed article: https://www.buzzfeednews.com/article/peteraldhous/race-and-police-shootings
### Imports
Libraries and data
```
import pandas as pd
import numpy as np
from bokeh.io import output_notebook, show, export_png
from bokeh.plotting import figure, output_file
from bokeh.models import HoverTool, ColumnDataSource,NumeralTickFormatter
from bokeh.palettes import Spectral4, PuBu4
from bokeh.transform import dodge
from bokeh.layouts import gridplot
selectcolumns=['raceethnicity','armed']
df1 = pd.read_csv('the-counted-2015.csv',usecols=selectcolumns)
df1.head()
df2 = pd.read_csv('the-counted-2016.csv',usecols=selectcolumns)
df2.head()
df=pd.concat([df1,df2])
df.shape # df contains "The Counted" data from both 2015 and 2016
```
Source for ethnicities percentage in 2015: https://www.statista.com/statistics/270272/percentage-of-us-population-by-ethnicities/
Source for population total: https://en.wikipedia.org/wiki/Demography_of_the_United_States#Vital_statistics_from_1935
```
ethndic={"White": 61.72,
"Latino": 17.66,
"Black": 12.38,
"Others": (5.28+2.05+0.73+0.17)
}
#print(type(ethndic))
print(ethndic)
population=(321442000 + 323100000)/2 # average between 2015 and 2016 data
# estimates by ethnicity
ethnestim={"White": round((population*ethndic["White"]/100)),
"Latino": round((population*ethndic["Latino"]/100)),
"Black": round((population*ethndic["Black"]/100)),
"Others": round((population*ethndic["Others"]/100))
}
print(ethnestim)
```
# Analysis
```
df.groupby(by='raceethnicity').describe()
```
Check if there are any missing values:
```
df.isna().sum()
df = df[(df.raceethnicity != 'Arab-American') & (df.raceethnicity != 'Unknown')]
# no data available about the percentage of this ethnicity over population, so it is discarded
df.replace(to_replace=['Asian/Pacific Islander','Native American','Other'],value='Others',inplace=True)
# those categories all fall under Others in the population percentages found online
df.replace(to_replace=['Hispanic/Latino'],value='Latino',inplace=True)
# this value is renamed for consistency with population ethnicity data
df.groupby(by='raceethnicity').describe()
def givepercent (dtf,ethnicity):
# Function to compute percentages by ethnicity
return round(((dtf.raceethnicity == ethnicity).sum()/(dtf.shape[0])*100),2)
killed={"White":(df.raceethnicity == 'White').sum(),
"Latino": (df.raceethnicity == 'Latino').sum(),
"Black": (df.raceethnicity == 'Black').sum(),
"Others": (df.raceethnicity == 'Others').sum()
}
print(killed)
killedperc={"White": givepercent(df,'White'),
"Latino": givepercent(df,'Latino'),
"Black": givepercent(df,'Black'),
"Others": givepercent(df,'Others')
}
print(killedperc)
df.groupby(by='armed').describe()
```
The analysis is limited to the value *No*, but it could also consider *Disputed* and *Non-lethal firearm*, which constitute another 108 data points.
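A quick count of how many records those two categories contribute (a sketch, assuming the labels are spelled as in the raw data):
```
# Records whose 'armed' value is 'Disputed' or 'Non-lethal firearm'.
print(df.armed.isin(['Disputed', 'Non-lethal firearm']).sum())
```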
```
dfunarmed = df[(df.armed == 'No')]
dfunarmed.groupby(by='raceethnicity').describe()
unarmed={"White":(dfunarmed.raceethnicity == 'White').sum(),
"Latino": (dfunarmed.raceethnicity == 'Latino').sum(),
"Black": (dfunarmed.raceethnicity == 'Black').sum(),
"Others": (dfunarmed.raceethnicity == 'Others').sum()
}
print(unarmed)
unarmedperc={"White":givepercent(dfunarmed,'White'),
"Latino": givepercent(dfunarmed,'Latino'),
"Black": givepercent(dfunarmed,'Black'),
"Others": givepercent(dfunarmed,'Others')
}
print(unarmedperc)
def percent1ethn (portion,population,decimals):
# Function to compute the percentage of the portion killed over a given population
return round((portion/population*100),decimals)
killed1ethn={"White": percent1ethn(killed['White'],ethnestim['White'],6),
"Latino": percent1ethn(killed['Latino'],ethnestim['Latino'],6),
"Black": percent1ethn(killed['Black'],ethnestim['Black'],6),
"Others": percent1ethn(killed['Others'],ethnestim['Others'],6)
}
print(killed1ethn)
unarmedoverkilled={"White": percent1ethn(unarmed['White'],killed['White'],2),
"Latino": percent1ethn(unarmed['Latino'],killed['Latino'],2),
"Black": percent1ethn(unarmed['Black'],killed['Black'],2),
"Others": percent1ethn(unarmed['Others'],killed['Others'],2)
}
print(unarmedoverkilled)
```
**Hypothesis testing**
```
whitesample=ethnestim['White']
blacksample=ethnestim['Black']
wkilled=killed['White']
bkilled=killed['Black']
pw=wkilled/whitesample
pb=bkilled/blacksample
#happened by chance?
#Hnull pw-pb = 0 (no difference between white and black)
#Halt pw-pb != 0 (the two proportions are different)
#Significance level = 5%
# Test statistic: Z-statistic
difference=pb-pw
print(difference)
standarderror=np.sqrt(((pw*(1-pw))/whitesample)+((pb*(1-pb))/blacksample))
zstat=(difference)/standarderror
print(zstat)
# Z-score for significance level
zscore=1.96
if abs(zstat) > zscore: # two-sided alternative, so compare |z| with the critical value
print("The null hypothesis is rejected.")
else:
print("The null hypothesis is not rejected.")
whitesample=killed['White']
blacksample=killed['Black']
wunarmed=unarmed['White']
bunarmed=unarmed['Black']
pw=wunarmed/whitesample
pb=bunarmed/blacksample
#happened by chance?
#Hnull pw-pb = 0 (no difference between white and black)
#Halt pw-pb != 0 (the two proportions are different)
#Significance level = 5%
# Test statistic: Z-statistic
difference=pb-pw
print(difference)
standarderror=np.sqrt(((pw*(1-pw))/whitesample)+((pb*(1-pb))/blacksample))
zstat=(difference)/standarderror
print(zstat)
# Z-score for significance level
zscore=1.96
if abs(zstat) > zscore: # two-sided alternative, so compare |z| with the critical value
print("The null hypothesis is rejected.")
else:
print("The null hypothesis is not rejected.")
ethnicities = list(ethndic.keys())
populethn = list(ethndic.values())
killed = list(killedperc.values())
unarmed = list(unarmedperc.values())
data1 = {'ethnicities' : ethnicities,
'populethn' : populethn,
'killed' : killed,
'unarmed' : unarmed}
source = ColumnDataSource(data=data1)
```
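As a cross-check (not part of the original analysis), the same two-proportion comparison can be run with `statsmodels`, assuming that extra dependency is installed:
```
# Hypothetical cross-check; df and ethnestim are defined in the cells above.
from statsmodels.stats.proportion import proportions_ztest

k = np.array([(df.raceethnicity == 'Black').sum(), (df.raceethnicity == 'White').sum()])
n = np.array([ethnestim['Black'], ethnestim['White']])
z, p = proportions_ztest(count=k, nobs=n, alternative='two-sided')
print(z, p)
```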
# Results
```
TOOLS = "pan,wheel_zoom,box_zoom,reset,save,box_select"
palette=Spectral4
titlefontsize='16pt'
cplot = figure(title="The Counted (data from 2015 and 2016)", tools=TOOLS,
x_range=ethnicities, y_range=(0, 75))#, sizing_mode='scale_both')
cplot.vbar(x=dodge('ethnicities', 0.25, range=cplot.x_range),top='populethn', source=source,
width=0.4,line_width=0 ,line_color=None, legend='Ethnicity % over population',
color=str(Spectral4[0]), name='populethn')
cplot.vbar(x=dodge('ethnicities', -0.25, range=cplot.x_range), top='killed', source=source,
width=0.4, line_width=0 ,line_color=None, legend="Killed % over total killed",
color=str(Spectral4[2]), name="killed")
cplot.vbar(x=dodge('ethnicities', 0.0, range=cplot.x_range), top='unarmed', source=source,
width=0.4, line_width=0 ,line_color=None, legend="Unarmed % over total unarmed",
color=str(Spectral4[1]), name="unarmed")
cplot.add_tools(HoverTool(names=["unarmed"],
tooltips=[
( 'Population', '@populethn{(00.00)}%' ),
( 'Killed', '@killed{(00.00)}%' ),
( 'Unarmed', '@unarmed{(00.00)}%' )], # Fields beginning with @ display values from ColumnDataSource.
mode='vline'))
#cplot.x_range.range_padding = 0.1
cplot.xgrid.grid_line_color = None
cplot.legend.location = "top_right"
cplot.xaxis.axis_label = "Ethnicity"
cplot.xaxis.axis_label_text_font_size='18pt'
cplot.xaxis.minor_tick_line_color = None
cplot.title.text_font_size=titlefontsize
cplot.legend.label_text_font_size='16pt'
cplot.xaxis.major_label_text_font_size='16pt'
cplot.yaxis.major_label_text_font_size='16pt'
perckillethn = list(killed1ethn.values())
data2 = {'ethnicities' : ethnicities,
'perckillethn' : perckillethn}
source = ColumnDataSource(data=dict(data2, color=PuBu4))
plot2 = figure(title="Killed % over population with same ethnicity",
tools=TOOLS, x_range=ethnicities, y_range=(0, max(perckillethn)*1.2))#, sizing_mode='scale_both')
plot2.vbar(x=dodge('ethnicities', 0.0, range=cplot.x_range), top='perckillethn', source=source,
width=0.4, line_width=0 ,line_color=None, legend="",
color='color', name="perckillethn")
plot2.add_tools(HoverTool(names=["perckillethn"],
tooltips=[
( 'Killed', '@perckillethn{(0.00000)}%' )],
#( 'Unarmed', '@unarmed{(00.00)}%' )], # Fields beginning with @ display values from ColumnDataSource.
mode='vline'))
#plot2.x_range.range_padding = 0.1
plot2.xgrid.grid_line_color = None
plot2.xaxis.axis_label = "Ethnicity"
plot2.xaxis.axis_label_text_font_size='18pt'
plot2.xaxis.minor_tick_line_color = None
plot2.title.text_font_size=titlefontsize
plot2.xaxis.major_label_text_font_size='16pt'
plot2.yaxis.major_label_text_font_size='16pt'
plot2.yaxis[0].formatter = NumeralTickFormatter(format="0.0000")
percunarmethn = list(unarmedoverkilled.values())
data3 = {'ethnicities' : ethnicities,
'percunarmethn' : percunarmethn}
source = ColumnDataSource(data=dict(data3, color=PuBu4))
plot3 = figure(title="Unarmed % over killed with same ethnicity",
tools=TOOLS, x_range=ethnicities, y_range=(0, max(percunarmethn)*1.2))#, sizing_mode='scale_both')
plot3.vbar(x=dodge('ethnicities', 0.0, range=cplot.x_range), top='percunarmethn', source=source,
width=0.4, line_width=0 ,line_color=None, legend="",
color='color', name="percunarmethn")
plot3.add_tools(HoverTool(names=["percunarmethn"],
tooltips=[
( 'Unarmed', '@percunarmethn{(00.00)}%' )],
#( 'Unarmed', '@unarmed{(00.00)}%' )], # Fields beginning with @ display values from ColumnDataSource.
mode='vline'))
#plot3.x_range.range_padding = 0.1
plot3.xgrid.grid_line_color = None
plot3.xaxis.axis_label = "Ethnicity"
plot3.xaxis.axis_label_text_font_size='18pt'
plot3.xaxis.minor_tick_line_color = None
plot3.title.text_font_size=titlefontsize
plot3.xaxis.major_label_text_font_size='16pt'
plot3.yaxis.major_label_text_font_size='16pt'
output_file("thecounted.html", title="The Counted Visualization")
output_notebook()
gplot=gridplot([cplot, plot2, plot3], sizing_mode='stretch_both', ncols=3)#, plot_width=800, plot_height=600)
show(gplot) # open a browser
export_png(gplot, filename="bokeh_thecounted.png")
```
Hover on the bar charts to read the percentage values.
# Conclusions
The plots show that if the people shot by police were proportional to the population distribution, the orange and green bars would be almost the same height as the blue ones. Although this roughly holds for the Latino ethnicity, it does not for the Black one, which is the second most represented group both among those killed and among those killed while unarmed.
```
import json
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from scipy.stats import norm
import ipywidgets as widgets
from IPython.display import display, clear_output
df = pd.read_csv('user_info.csv')
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--incognito')
options.add_argument('--headless')
driver = webdriver.Chrome("../assets/chromedriver", options=options)
class Player:
def __init__(self, name, level, rating, prestige, games_won, qps, medals, hero):
self.name = name
self.level = level
self.rating = rating
self.prestige = prestige
self.qps = qps
self.medals = medals
self.hero = hero
self.kd_ratio = [i/(1+sum([qps.elims,qps.deaths])) for i in [qps.elims,qps.deaths]]
self.games_won = games_won
class Stats:
def __init__(self, elims=0, dmg_done=0, deaths=0, solo_kills=0):
self.elims = elims
self.dmg_done = dmg_done
self.deaths = deaths
self.solo_kills = solo_kills
class Medals:
def __init__(self, bronze=0, silver=0, gold=0):
self.bronze = bronze
self.silver = silver
self.gold = gold
hero_list = ['ana','ashe','baptiste','bastion','brigitte','dVa','doomfist',
'genji','hanzo','junkrat','lucio','mccree','mei','mercy','moira',
'orisa','pharah','reaper','reinhardt','roadhog','soldier76','sombra',
'symmetra','torbjorn','tracer','widowmaker','winston','wreckingBall',
'zarya','zenyatta','sigma']
def create_player(js):
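    # Defensive parsing: when the ow-api response omits quickPlayStats or careerStats
    # (e.g. private or empty profiles), fall back to zeroed Stats for every hero.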
heroes = {}
if 'quickPlayStats' not in js:
for hero in hero_list:
heroes.update({hero: Stats(0,0,0,0)})
return Player(js['name'], js['level'],js['rating'],js['prestige'], 0, Stats(), Medals(), heroes)
if 'careerStats' not in js['quickPlayStats']:
for hero in hero_list:
heroes.update({hero: Stats(0,0,0,0)})
return Player(js['name'], js['level'],js['rating'],js['prestige'], 0, Stats(), Medals(), heroes)
if js.get('quickPlayStats',{}).get('careerStats',{}) == None or 'allHeroes' not in js.get('quickPlayStats',{}).get('careerStats',{}):
for hero in hero_list:
heroes.update({hero: Stats(0,0,0,0)})
return Player(js['name'], js['level'],js['rating'],js['prestige'], 0, Stats(), Medals(), heroes)
elims = 0
damageDone = 0
deaths = 0
soloKills = 0
if js['quickPlayStats']['careerStats']['allHeroes']['combat'] != None:
if 'eliminations' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:
elims = js['quickPlayStats']['careerStats']['allHeroes']['combat']['eliminations']
if 'damageDone' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:
damageDone = js['quickPlayStats']['careerStats']['allHeroes']['combat']['damageDone']
if 'deaths' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:
deaths = js['quickPlayStats']['careerStats']['allHeroes']['combat']['deaths']
if 'soloKills' in js['quickPlayStats']['careerStats']['allHeroes']['combat']:
soloKills = js['quickPlayStats']['careerStats']['allHeroes']['combat']['soloKills']
qps = Stats(elims,damageDone,deaths,soloKills)
medals = Medals(js['quickPlayStats']['awards'].get('medalsBronze'),
js['quickPlayStats']['awards'].get('medalsSilver'),
js['quickPlayStats']['awards'].get('medalsGold'))
for hero in hero_list:
print(hero)
if hero in js['quickPlayStats']['careerStats']:
elims = 0
damageDone = 0
deaths = 0
soloKills = 0
if js['quickPlayStats']['careerStats'][hero]['combat'] != None:
if 'eliminations' in js['quickPlayStats']['careerStats'][hero]['combat']:
elims = js['quickPlayStats']['careerStats'][hero]['combat']['eliminations']
if 'damageDone' in js['quickPlayStats']['careerStats'][hero]['combat']:
damageDone = js['quickPlayStats']['careerStats'][hero]['combat']['damageDone']
if 'deaths' in js['quickPlayStats']['careerStats'][hero]['combat']:
deaths = js['quickPlayStats']['careerStats'][hero]['combat']['deaths']
if 'soloKills' in js['quickPlayStats']['careerStats'][hero]['combat']:
soloKills = js['quickPlayStats']['careerStats'][hero]['combat']['soloKills']
heroes.update({hero: Stats(elims,damageDone,deaths,soloKills)})
else:
heroes.update({hero: Stats(0,0,0,0)})
return Player(js['name'], js['level'],js['rating'],js['prestige'], js['quickPlayStats']['games']['won'], qps, medals, heroes)
def df_object(p):
item = [p.name,p.level,p.rating,p.prestige,p.games_won,p.qps.elims,p.qps.dmg_done,
p.qps.deaths,p.qps.solo_kills,p.medals.bronze,p.medals.silver,p.medals.gold]
for hero in hero_list:
item.extend([p.hero[hero].elims,p.hero[hero].dmg_done,p.hero[hero].deaths,p.hero[hero].solo_kills])
return item
usernames = pd.read_csv('../assets/data/usernames_scraped_fixed.csv')
usernames.head()
len(usernames['users'])
##dataframe setup
columns = ['username','level','rating','prestige','games_won','qps_elims','qps_dmg_done',
'qps_deaths','qps_solo_kills','medals_bronze','medals_silver','medals_gold']
for hero in hero_list:
hero_data = [f'{hero}_elims',f'{hero}_dmg_done',f'{hero}_deaths',f'{hero}_solo_kills']
columns.extend(hero_data)
data = pd.DataFrame(columns=columns)
amount = 0
for user in usernames['users'].values:
url = f"https://ow-api.com/v1/stats/pc/us/{user}/complete"
print(url)
response = requests.get(url)
j = json.loads(response.text)
u = create_player(j)
data.loc[len(data), :] = df_object(u)
amount += 1
percent = np.round((amount/len(usernames['users'])),decimals=2)
clear_output()
progress = widgets.IntProgress(
value=amount,
min=0,
max=len(usernames['users'].values),
step=1,
description=f'{percent}%',
bar_style='info', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
display(progress)
data.head()
data.tail()
df = pd.read_csv('user_info.csv')
print(df.shape)
df = df.append(data)
df.shape, data.shape
data.to_csv('user_info.csv',index=False)
# def s(username):
# global search
# search = username
# interactive(s, username='')
# usernames = pd.read_csv('usernames_scraped_fixed.csv')
# usernames.head()
# df = pd.read_csv('usernames_scraped.csv')
# username_scraped = []
# def str2bool(v):
# return v.lower() in ("True", "true")
# for name in df['users']:
# driver.get(f"https://playoverwatch.com/en-us/search?q={name}")
# time.sleep(2)
# page_source = driver.page_source
# soup = BeautifulSoup(page_source)
# players = soup.find_all('a', class_="player-badge")
# for element in players:
# locked = str2bool(element.find("div", {"data-visibility-private": True})['data-visibility-private'])
# if(locked == False):
# username_scraped.append(element.find(class_='player-badge-name').text.replace('#', '-'))
# print(len(username_scraped))
# print(len(username_scraped))
# df1 = pd.read_csv('usernames_scraped_fixed.csv')
# df2 = pd.DataFrame(username_scraped,columns=['users'])
# df1 = df1.append(df2)
# df1.to_csv('usernames_scraped_fixed.csv',index=False)
# df1.shape
# usernames['users'].values
# def on_change(b):
# global player
# player = name=dropbox.value
# print('player')
# dropbox = widgets.Select(
# options=usernames['users'].values,
# value=usernames['users'].values[0],
# description='User:',
# disabled=False
# )
# dropbox.observe(on_change, names='value')
# display(dropbox)
# player
# soup = BeautifulSoup(page_source)
# players = soup.find_all('a', class_="player-badge")
# def f(name):
# return name
# def on_button_clicked(b):
# global player
# player = name=b.description
# displays = []
# for element in players:
# locked = str2bool(element.find("div", {"data-visibility-private": True})['data-visibility-private'])
# if(locked == True):
# tooltip = 'Sorry, player has their profile set to private...'
# icon = 'lock'
# else:
# tooltip = "Click to view this player"
# icon = 'unlock'
# button = widgets.Button(
# description=element.find(class_='player-badge-name').text.capitalize().replace('#', '-'),
# disabled=locked,
# button_style='', # 'success', 'info', 'warning', 'danger' or ''
# icon=icon,
# tooltip=tooltip
# )
# out = widgets.Output()
# button.on_click(on_button_clicked)
# display(button,out)
# url = f"https://ow-api.com/v1/stats/pc/us/{player}/complete"
# print(url)
# response = requests.get(url)
# print(response)
# j = json.loads(response.text)
# if(j['private'] == True):
# print("Sorry can't load this profile. it's private")
# else:
# print(j['name'])
```
# 1. Write a Python Program to Find LCM?
```
# Given some numbers, the LCM of these numbers is the smallest positive integer that is divisible by all of them
def LCM(ls):
lar = max(ls)
flag=1
while(flag==1):
for i in ls:
if(lar%i!=0):
flag=1
break
else:
flag=0
continue
lcm=lar
lar += 1
return lcm
n=int(input("Enter the range of your input "))
ls=[]
for i in range(n):
a=int(input("Enter the number "))
ls.append(a)
print("The LCM of entered numbers is", LCM(ls))
```
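A shorter alternative (not part of the original exercise) uses the identity lcm(a, b) = a*b // gcd(a, b) folded across the list:
```
from functools import reduce
from math import gcd

def lcm_list(ls):
    # lcm(a, b) = a*b // gcd(a, b), applied pairwise over the whole list
    return reduce(lambda a, b: a * b // gcd(a, b), ls)

print(lcm_list([4, 6, 10]))  # 60
```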
# 2. Write a Python Program to Find HCF?
```
def HCF(ls):
    # largest i in [2, min(ls)] that divides every number; 1 if none does
    hcf=1
    for i in range(2,min(ls)+1):
        if all(j%i==0 for j in ls):
            hcf=i
    return hcf
n=int(input("Enter the range of your input "))
ls=[]
for i in range(n):
    a=int(input("Enter the number "))
    ls.append(a)
print("The HCF of entered numbers is", HCF(ls))
```
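For comparison (not part of the original exercise), the HCF can also be computed by folding `math.gcd` across the list:
```
from functools import reduce
from math import gcd

def hcf_list(ls):
    return reduce(gcd, ls)

print(hcf_list([12, 18, 24]))  # 6
```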
# 3. Write a Python Program to Convert Decimal to Binary, Octal and Hexadecimal?
```
a=int(input("Enter the Decimal value: "))
def ToBinary(x):
    binary=""
    while(x>0):
        binary=binary+str(x%2)
        x=int(x/2)
    return binary[::-1] if binary else "0"
def ToOctal(x):
    octal=""
    while(x>0):
        octal=octal+str(x%8)
        x=int(x/8)
    return octal[::-1] if octal else "0"
def ToHexa(x):
    # digits 10-15 map to letters A-F
    ls={10:"A", 11:"B", 12:"C", 13:"D", 14:"E", 15:"F"}
    Hexa=""
    while(x>0):
        if(x%16) in ls.keys():
            Hexa=Hexa+ls[x%16]
        else:
            Hexa=Hexa+str(x%16)
        x=int(x/16)
    return Hexa[::-1] if Hexa else "0"
print("Binary of",a,"is", ToBinary(a))
print("Octal of",a,"is", ToOctal(a))
print("Hexadecimal of",a,"is", ToHexa(a))
```
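Python's built-in `bin()`, `oct()` and `hex()` give the same conversions directly; a small sketch for comparison:
```
a = 254
print(bin(a)[2:])          # 11111110
print(oct(a)[2:])          # 376
print(hex(a)[2:].upper())  # FE
```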
# 4. Write a Python Program To Find ASCII value of a character?
```
a=input("Enter the character to find ASCII: ")
print("ASCII of ",a ,"is", ord(a))
```
# 5.Write a Python Program to Make a Simple Calculator with 4 basic mathematical operations?
```
class Calculator:
def Add(self,x,y):
return x+y
def Subtract(self,x,y):
return x-y
def Multiply(self,x,y):
return x*y
def Divide(self,x,y):
try:
return x/y
except Exception as es:
print("An error has occured",es)
Calc = Calculator()
print("Choose your Calculator Operation: ")
print("1 : Addition")
print("2 : Subtraction")
print("3 : Multipilication")
print("4 : Division")
a=int(input("Enter Your Selection: "))
x=int(input("Enter Your 1st number: "))
y=int(input("Enter Your 2nd number: "))
if(a==1):
print("You chose Addition: ")
print("The result of your operation is", Calc.Add(x,y))
elif(a==2):
print("You chose Subtraction: ")
print("The result of your operation is", Calc.Subtract(x,y))
elif(a==3):
print("You chose Multipilication: ")
print("The result of your operation is", Calc.Multiply(x,y))
elif(a==4):
print("You chose Division: ")
print("The result of your operation is", Calc.Divide(x,y))
else:
print("You chose a wrong operation")
```
# Data Wrangling, Analysis and Visualization of @WeLoveDogs twitter data.
```
import pandas as pd
import numpy as np
import tweepy as ty
import requests
import json
import io
import time
```
## Gathering
```
df = pd.read_csv('twitter-archive-enhanced.csv')
df.head()
image_response = requests.get(r'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv')
image_df = pd.read_csv(io.StringIO(image_response.content.decode('utf-8')), sep='\t')
image_df.head()
image_df.info()
CONSUMER_KEY = '<My Consumer Key>'
CONSUMER_SECRET = '<My Consumer Secret>'
ACCESS_TOKEN = '<My Access Token>'
ACCESS_TOKEN_SECRET = '<My Access Token Secret>'
auth = ty.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = ty.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
tweet_count = 0
for id in df['tweet_id']:
with open('tweet_json.txt', 'a') as file:
try:
start = time.time()
tweet = api.get_status(id, tweet_mode='extended')
st = json.dumps(tweet._json)
file.writelines(st + '\n')
tweet_count += 1
if tweet_count % 20 == 0 or tweet_count == len(df):
end = time.time()
print('Tweet id: {0}\tDownload Time: {1} sec\tTweets Downloaded: {2}.'.format(id, (end-start), tweet_count))
except Exception as e:
print('Exception occured for tweet {0} : {1}'.format(id, str(e)))
print('There are {} records for which tweets does not exist in Twitter database.'.format(len(df) - tweet_count))
tweet_ids = []
favorite_count = []
retweet_count = []
with open('tweet_json.txt', 'r') as file:
for line in file.readlines():
data = json.loads(line)
tweet_ids.append(data['id'])
favorite_count.append(data['favorite_count'])
retweet_count.append(data['retweet_count'])
favorite_retweet_df = pd.DataFrame(data={'tweet_id': tweet_ids, 'favorite_count': favorite_count,
'retweet_count': retweet_count})
favorite_retweet_df.head()
favorite_retweet_df.info()
```
## Assessing
### Quality Issues
- The WeRateDogs Twitter archive also contains some retweets; we only need to consider original tweets for this project.
- The `tweet_id` column has an integer datatype in all the dataframes; conversion to string is required, since we are not going to do any mathematical operations with it.
- The `rating_denominator` value should be 10, since ratings are given out of 10.
- For some tweets, the values of the `rating_numerator` column are very high, possibly outliers.
- The `name` column has the string **None** and *a*, *an*, *the* as values.
- Extract dog **stages** from the tweet text (if present) for null values.
- The `timestamp` column is given as a **string**; convert it to a date.
- Since we are not using retweets, the `in_reply_to_status_id`, `in_reply_to_user_id`, `source`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp` columns are not required.
- `image_df` contains tweets that do not belong to a dog.
- The `source` column in the main dataframe is of no use in the analysis, as it only tells us about the source of the tweet.
### Tidiness Issues
- A dog can have only one stage at a time, yet we have 4 columns storing this single piece of information.
- The image prediction file gives 3 predicted dog breeds per image, but we only need the one with the highest probability, given that it is a breed of dog.
- All 3 dataframes contain the same `tweet_id` column, which we can use to merge them and work with one dataframe for our analysis.
```
sum(df.duplicated())
# name and stages having wrong values for no. of records
df.tail()
# tweet id is int64, retweet related columns are not required
# name and dog stages showing full data but most of them are None.
# timestamp having string datatype
df.info()
# rating_numerator is having min value of 0 and max of 1776 (outliers)
df.describe()
# denominator value should be 10
sum(df['rating_denominator'] != 10)
df.rating_numerator.value_counts()
```
## Cleaning
```
# Making a copy of the data so that original data will be unchanged
df_clean = df.copy()
image_df_clean = image_df.copy()
favorite_retweet_df_clean = favorite_retweet_df.copy()
```
## Quality Issues
### Define
- Replacing all the **None** strings and *a*, *an* and *the* to **np.nan** in `name` column using pandas **_replace()_** function.
### Code
```
df_clean['name'] = df_clean['name'].replace({'None': np.nan, 'a': np.nan, 'an': np.nan, 'the': np.nan})
```
### Test
```
df_clean.info()
```
### Define
- `tweet_id` column having integer datatype in all the dataframes. Converting it to string using pandas **_astype()_** function.
- `timestamp` column is given as string. Converting it to date using pandas **to_datetime()** function.
### Code
```
df_clean['tweet_id'] = df_clean['tweet_id'].astype(str)
image_df_clean['tweet_id'] = image_df_clean['tweet_id'].astype(str)
favorite_retweet_df_clean['tweet_id'] = favorite_retweet_df_clean['tweet_id'].astype(str)
df_clean.timestamp = pd.to_datetime(df_clean.timestamp)
```
### Test
```
df_clean.dtypes
image_df_clean.dtypes
favorite_retweet_df_clean.dtypes
```
### Define
- Converting the `rating_denominator` column value to 10 when it is not, using pandas boolean indexing, since ratings are given out of 10.
### Code
```
df_clean.loc[df_clean.rating_denominator != 10, 'rating_denominator'] = 10
```
### Test
```
sum(df_clean.rating_denominator != 10)
```
### Define
- Removing retweets and replies, since we are only concerned with original tweets. For retweets and replies the `retweeted_status_id` / `in_reply_to_status_id` columns are non-null, so we drop those records using the pandas **_drop()_** function.
### Code
```
df_clean.drop(index=df_clean[df_clean.retweeted_status_id.notnull()].index, inplace=True)
df_clean.drop(index=df_clean[df_clean.in_reply_to_status_id.notnull()].index, inplace=True)
```
### Test
```
sum(df_clean.retweeted_status_id.notnull())
sum(df_clean.in_reply_to_status_id.notnull())
```
### Define
- Removing `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, `retweeted_status_user_id`, `retweeted_status_timestamp` columns since they are related to retweet info. Hence, not required.
### Code
```
df_clean.drop(columns=['in_reply_to_status_id', 'in_reply_to_user_id', 'retweeted_status_id',
'retweeted_status_user_id', 'retweeted_status_timestamp'], inplace=True)
```
### Test
```
df_clean.info()
```
### Define
- Removing `source` column from df_clean using pandas **_drop_** function.
### Code
```
df_clean.drop(columns=['source'], inplace=True)
```
### Test
```
df_clean.info()
```
## Tidiness Issues
### Define
- Replacing the 3 predictions with the single dog breed that has the highest probability, given that it is a breed of dog, with the use of the pandas **_apply()_** function.
### Code
```
non_dog_ind = image_df_clean.query('not p1_dog and not p2_dog and not p3_dog').index
image_df_clean.drop(index=non_dog_ind, inplace=True)
def get_priority_dog(dog):
return dog['p1'] if dog['p1_dog'] else dog['p2'] if dog['p2_dog'] else dog['p3']
image_df_clean['dog_breed'] = image_df_clean.apply(get_priority_dog, axis=1)
image_df_clean.drop(columns=['p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog', 'img_num'],
inplace=True)
```
### Test
```
image_df_clean.head()
```
### Define
- There are 4 columns present for specifying the dog stage, which can be done using one column only. Creating a column named `dog_stage` and storing the dog stage present in it.
### Code
```
def get_dog_stage(dog):
if dog['doggo'] != 'None':
return dog['doggo']
elif dog['floofer'] != 'None':
return dog['floofer']
elif dog['pupper'] != 'None':
return dog['pupper']
else:
return dog['puppo'] # if last entry is also nan, we have to return nan anyway
df_clean['dog_stage'] = df_clean.apply(get_dog_stage, axis=1)
df_clean.drop(columns=['doggo', 'floofer', 'pupper', 'puppo'], inplace=True)
```
### Test
```
df_clean.info()
```
### Define
- All 3 dataframes contain the same `tweet_id` column, which we can use to merge them and use as one dataframe for our analysis.
### Code
```
df_clean = pd.merge(df_clean, image_df_clean, on='tweet_id')
df_clean = pd.merge(df_clean, favorite_retweet_df_clean, on='tweet_id')
```
### Test
```
df_clean.head()
```
## Quality Issues
### Define
- Try to extract the dog stage from the tweet text using regular expressions and the Series **_str.extract()_** function.
### Code
```
stages = df_clean[df_clean.dog_stage == 'None'].text.str.extract(r'(doggo|pupper|floof|puppo|pup)', expand=True)
len(df_clean[df_clean.dog_stage == 'None'])
df_clean.loc[stages.index, 'dog_stage'] = stages[0]
```
### Test
```
len(df_clean[df_clean.dog_stage.isnull()])
```
### Define
- Removing outliers from `rating_numerator`
### Code
```
df_clean.boxplot(column=['rating_numerator'], figsize=(20,8), vert=False)
```
- As is clear from the boxplot, `rating_numerator` has a number of outliers which can affect our analysis, so we remove all ratings above 15 and below 7 to reduce their effect.
```
df_clean.drop(index=df_clean.query('rating_numerator > 15 or rating_numerator < 7').index, inplace=True)
```
### Test
```
df_clean.boxplot(column=['rating_numerator'], figsize=(20,8), vert=False)
```
- As we can see, there are no longer any extreme outliers present in the data.
## Storing Data in a CSV file
```
df_clean.to_csv('twitter_archive_master.csv', index=False)
print('Save Done !')
```
## Data Analysis and Visualization
```
ax = df_clean.plot.scatter('rating_numerator', 'favorite_count', figsize=(10, 10), title='Rating VS. Favorites')
ax.set_xlabel('Ratings')
ax.set_ylabel('Favorites')
```
## Insight 1:
- The favorite count increases with the rating, i.e. dogs that get higher ratings in their tweets are likely to receive more likes (favorites).
```
df_clean.dog_breed.value_counts()
```
## Insight 2
- Most of the pictures in the @WeLoveDogs Twitter account are of Golden Retrievers, followed by Labrador Retrievers, Pembrokes, and Chihuahuas.
```
df_clean.loc[df_clean.favorite_count.idxmax()][['dog_breed', 'favorite_count']]
df_clean.loc[df_clean.favorite_count.idxmin()][['dog_breed', 'favorite_count']]
df_clean.loc[df_clean.retweet_count.idxmax()][['dog_breed', 'retweet_count']]
df_clean.loc[df_clean.retweet_count.idxmin()][['dog_breed', 'retweet_count']]
```
## Insight 3
- The dog with the highest number of favorites (likes) is a Labrador Retriever, while the one with the lowest number of favorites is an English Setter.
- The same dogs that got the highest and lowest favorite counts also received the highest and lowest retweet counts, respectively.
**So, a tweet with more likes (favorites) tends to have more retweets than one with fewer likes, and vice versa.**
```
import keras
import keras.backend as K
from keras.datasets import mnist
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional
from keras.layers import Concatenate, Reshape, Conv2DTranspose, Embedding, Multiply, Activation
from functools import partial
from collections import defaultdict
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import isolearn.io as isoio
import isolearn.keras as isol
import matplotlib.pyplot as plt
from sequence_logo_helper import dna_letter_at, plot_dna_logo
from sklearn import preprocessing
from scipy import stats  # needed by the r2() helper defined below
import pandas as pd
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage() :
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
contain_tf_gpu_mem_usage()
#optimus 5-prime functions
def test_data(df, model, test_seq, obs_col, output_col='pred'):
'''Predict mean ribosome load using model and test set UTRs'''
# Scale the test set mean ribosome load
scaler = preprocessing.StandardScaler()
scaler.fit(df[obs_col].reshape(-1,1))
# Make predictions
predictions = model.predict(test_seq).reshape(-1)
# Inverse scaled predicted mean ribosome load and return in a column labeled 'pred'
df.loc[:,output_col] = scaler.inverse_transform(predictions)
return df
def one_hot_encode(df, col='utr', seq_len=50):
# Dictionary returning one-hot encoding of nucleotides.
nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}
    # Create an empty matrix.
vectors=np.empty([len(df),seq_len,4])
# Iterate through UTRs and one-hot encode
for i,seq in enumerate(df[col].str[:seq_len]):
seq = seq.lower()
a = np.array([nuc_d[x] for x in seq])
vectors[i] = a
return vectors
def r2(x,y):
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
return r_value**2
#Train data
df = pd.read_csv("../../../seqprop/examples/optimus5/GSM3130435_egfp_unmod_1.csv")
df.sort_values('total_reads', inplace=True, ascending=False)
df.reset_index(inplace=True, drop=True)
df = df.iloc[:280000]
# The training set has 260k UTRs and the test set has 20k UTRs.
#e_test = df.iloc[:20000].copy().reset_index(drop = True)
e_train = df.iloc[20000:].copy().reset_index(drop = True)
e_train.loc[:,'scaled_rl'] = preprocessing.StandardScaler().fit_transform(e_train.loc[:,'rl'].values.reshape(-1,1))
seq_e_train = one_hot_encode(e_train,seq_len=50)
x_train = seq_e_train
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1], x_train.shape[2]))
y_train = np.array(e_train['scaled_rl'].values)
y_train = np.reshape(y_train, (y_train.shape[0],1))
print("x_train.shape = " + str(x_train.shape))
print("y_train.shape = " + str(y_train.shape))
#Load Predictor
predictor_path = 'optimusRetrainedMain.hdf5'
predictor = load_model(predictor_path)
predictor.trainable = False
predictor.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mean_squared_error')
#Generate (original) predictions
pred_train = predictor.predict(x_train[:, 0, ...], batch_size=32)
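# The next four lines binarize the scaled ribosome-load targets and the predictor's
# outputs at zero and expand them into two-class one-hot vectors, so that INVASE
# can be trained as a binary classifier on the predictor's decisions.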
y_train = (y_train >= 0.)
y_train = np.concatenate([1. - y_train, y_train], axis=1)
pred_train = (pred_train >= 0.)
pred_train = np.concatenate([1. - pred_train, pred_train], axis=1)
from keras.layers import Input, Dense, Multiply, Flatten, Reshape, Conv2D, MaxPooling2D, GlobalMaxPooling2D, Activation
from keras.layers import BatchNormalization
from keras.models import Sequential, Model
from keras.optimizers import Adam
from keras import regularizers
from keras import backend as K
import tensorflow as tf
import numpy as np
from keras.layers import Layer, InputSpec
from keras import initializers, regularizers, constraints
class InstanceNormalization(Layer):
def __init__(self, axes=(1, 2), trainable=True, **kwargs):
super(InstanceNormalization, self).__init__(**kwargs)
self.axes = axes
self.trainable = trainable
def build(self, input_shape):
self.beta = self.add_weight(name='beta',shape=(input_shape[-1],),
initializer='zeros',trainable=self.trainable)
self.gamma = self.add_weight(name='gamma',shape=(input_shape[-1],),
initializer='ones',trainable=self.trainable)
def call(self, inputs):
mean, variance = tf.nn.moments(inputs, self.axes, keep_dims=True)
return tf.nn.batch_normalization(inputs, mean, variance, self.beta, self.gamma, 1e-6)
def bernoulli_sampling (prob):
""" Sampling Bernoulli distribution by given probability.
Args:
- prob: P(Y = 1) in Bernoulli distribution.
Returns:
- samples: samples from Bernoulli distribution
"""
n, x_len, y_len, d = prob.shape
samples = np.random.binomial(1, prob, (n, x_len, y_len, d))
return samples
class INVASE():
"""INVASE class.
Attributes:
- x_train: training features
- y_train: training labels
- model_type: invase or invase_minus
- model_parameters:
- actor_h_dim: hidden state dimensions for actor
- critic_h_dim: hidden state dimensions for critic
- n_layer: the number of layers
- batch_size: the number of samples in mini batch
- iteration: the number of iterations
- activation: activation function of models
- learning_rate: learning rate of model training
- lamda: hyper-parameter of INVASE
"""
def __init__(self, x_train, y_train, model_type, model_parameters):
self.lamda = model_parameters['lamda']
self.actor_h_dim = model_parameters['actor_h_dim']
self.critic_h_dim = model_parameters['critic_h_dim']
self.n_layer = model_parameters['n_layer']
self.batch_size = model_parameters['batch_size']
self.iteration = model_parameters['iteration']
self.activation = model_parameters['activation']
self.learning_rate = model_parameters['learning_rate']
#Modified Code
self.x_len = x_train.shape[1]
self.y_len = x_train.shape[2]
self.dim = x_train.shape[3]
self.label_dim = y_train.shape[1]
self.model_type = model_type
optimizer = Adam(self.learning_rate)
# Build and compile critic
self.critic = self.build_critic()
self.critic.compile(loss='categorical_crossentropy',
optimizer=optimizer, metrics=['acc'])
# Build and compile the actor
self.actor = self.build_actor()
self.actor.compile(loss=self.actor_loss, optimizer=optimizer)
if self.model_type == 'invase':
# Build and compile the baseline
self.baseline = self.build_baseline()
self.baseline.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['acc'])
def actor_loss(self, y_true, y_pred):
"""Custom loss for the actor.
Args:
- y_true:
- actor_out: actor output after sampling
- critic_out: critic output
- baseline_out: baseline output (only for invase)
- y_pred: output of the actor network
Returns:
- loss: actor loss
"""
y_pred = K.reshape(y_pred, (K.shape(y_pred)[0], self.x_len*self.y_len*1))
y_true = y_true[:, 0, 0, :]
# Actor output
actor_out = y_true[:, :self.x_len*self.y_len*1]
# Critic output
critic_out = y_true[:, self.x_len*self.y_len*1:(self.x_len*self.y_len*1+self.label_dim)]
if self.model_type == 'invase':
# Baseline output
baseline_out = \
y_true[:, (self.x_len*self.y_len*1+self.label_dim):(self.x_len*self.y_len*1+2*self.label_dim)]
# Ground truth label
y_out = y_true[:, (self.x_len*self.y_len*1+2*self.label_dim):]
elif self.model_type == 'invase_minus':
# Ground truth label
y_out = y_true[:, (self.x_len*self.y_len*1+self.label_dim):]
# Critic loss
critic_loss = -tf.reduce_sum(y_out * tf.log(critic_out + 1e-8), axis = 1)
if self.model_type == 'invase':
# Baseline loss
baseline_loss = -tf.reduce_sum(y_out * tf.log(baseline_out + 1e-8),
axis = 1)
# Reward
Reward = -(critic_loss - baseline_loss)
elif self.model_type == 'invase_minus':
Reward = -critic_loss
# Policy gradient loss computation.
custom_actor_loss = \
Reward * tf.reduce_sum(actor_out * K.log(y_pred + 1e-8) + \
(1-actor_out) * K.log(1-y_pred + 1e-8), axis = 1) - \
self.lamda * tf.reduce_mean(y_pred, axis = 1)
# custom actor loss
custom_actor_loss = tf.reduce_mean(-custom_actor_loss)
return custom_actor_loss
def build_actor(self):
"""Build actor.
Use feature as the input and output selection probability
"""
actor_model = Sequential()
actor_model.add(Conv2D(self.actor_h_dim, (1, 7), padding='same', activation='linear'))
actor_model.add(InstanceNormalization())
actor_model.add(Activation(self.activation))
for _ in range(self.n_layer - 2):
actor_model.add(Conv2D(self.actor_h_dim, (1, 7), padding='same', activation='linear'))
actor_model.add(InstanceNormalization())
actor_model.add(Activation(self.activation))
actor_model.add(Conv2D(1, (1, 1), padding='same', activation='sigmoid'))
feature = Input(shape=(self.x_len, self.y_len, self.dim), dtype='float32')
selection_probability = actor_model(feature)
return Model(feature, selection_probability)
def build_critic(self):
"""Build critic.
Use selected feature as the input and predict labels
"""
critic_model = Sequential()
critic_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))
critic_model.add(InstanceNormalization())
critic_model.add(Activation(self.activation))
for _ in range(self.n_layer - 2):
critic_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))
critic_model.add(InstanceNormalization())
critic_model.add(Activation(self.activation))
critic_model.add(Flatten())
critic_model.add(Dense(self.critic_h_dim, activation=self.activation))
critic_model.add(Dropout(0.2))
critic_model.add(Dense(self.label_dim, activation ='softmax'))
## Inputs
# Features
feature = Input(shape=(self.x_len, self.y_len, self.dim), dtype='float32')
# Binary selection
selection = Input(shape=(self.x_len, self.y_len, 1), dtype='float32')
# Element-wise multiplication
critic_model_input = Multiply()([feature, selection])
y_hat = critic_model(critic_model_input)
return Model([feature, selection], y_hat)
def build_baseline(self):
"""Build baseline.
Use the feature as the input and predict labels
"""
baseline_model = Sequential()
baseline_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))
baseline_model.add(InstanceNormalization())
baseline_model.add(Activation(self.activation))
for _ in range(self.n_layer - 2):
baseline_model.add(Conv2D(self.critic_h_dim, (1, 7), padding='same', activation='linear'))
baseline_model.add(InstanceNormalization())
baseline_model.add(Activation(self.activation))
baseline_model.add(Flatten())
baseline_model.add(Dense(self.critic_h_dim, activation=self.activation))
baseline_model.add(Dropout(0.2))
baseline_model.add(Dense(self.label_dim, activation ='softmax'))
# Input
feature = Input(shape=(self.x_len, self.y_len, self.dim), dtype='float32')
# Output
y_hat = baseline_model(feature)
return Model(feature, y_hat)
def train(self, x_train, y_train):
"""Train INVASE.
Args:
- x_train: training features
- y_train: training labels
"""
for iter_idx in range(self.iteration):
## Train critic
# Select a random batch of samples
idx = np.random.randint(0, x_train.shape[0], self.batch_size)
x_batch = x_train[idx,:]
y_batch = y_train[idx,:]
# Generate a batch of selection probability
selection_probability = self.actor.predict(x_batch)
# Sampling the features based on the selection_probability
selection = bernoulli_sampling(selection_probability)
# Critic loss
critic_loss = self.critic.train_on_batch([x_batch, selection], y_batch)
# Critic output
critic_out = self.critic.predict([x_batch, selection])
# Baseline output
if self.model_type == 'invase':
# Baseline loss
baseline_loss = self.baseline.train_on_batch(x_batch, y_batch)
# Baseline output
baseline_out = self.baseline.predict(x_batch)
## Train actor
# Use multiple things as the y_true:
# - selection, critic_out, baseline_out, and ground truth (y_batch)
if self.model_type == 'invase':
y_batch_final = np.concatenate((np.reshape(selection, (y_batch.shape[0], -1)),
np.asarray(critic_out),
np.asarray(baseline_out),
y_batch), axis = 1)
elif self.model_type == 'invase_minus':
y_batch_final = np.concatenate((np.reshape(selection, (y_batch.shape[0], -1)),
np.asarray(critic_out),
y_batch), axis = 1)
y_batch_final = y_batch_final[:, None, None, :]
# Train the actor
actor_loss = self.actor.train_on_batch(x_batch, y_batch_final)
if self.model_type == 'invase':
# Print the progress
dialog = 'Iterations: ' + str(iter_idx) + \
', critic accuracy: ' + str(critic_loss[1]) + \
', baseline accuracy: ' + str(baseline_loss[1]) + \
', actor loss: ' + str(np.round(actor_loss,4))
elif self.model_type == 'invase_minus':
# Print the progress
dialog = 'Iterations: ' + str(iter_idx) + \
', critic accuracy: ' + str(critic_loss[1]) + \
', actor loss: ' + str(np.round(actor_loss,4))
if iter_idx % 100 == 0:
print(dialog)
def importance_score(self, x):
"""Return featuer importance score.
Args:
- x: feature
Returns:
- feature_importance: instance-wise feature importance for x
"""
feature_importance = self.actor.predict(x)
return np.asarray(feature_importance)
def predict(self, x):
"""Predict outcomes.
Args:
- x: feature
Returns:
- y_hat: predictions
"""
# Generate a batch of selection probability
selection_probability = self.actor.predict(x)
# Sampling the features based on the selection_probability
selection = bernoulli_sampling(selection_probability)
# Prediction
y_hat = self.critic.predict([x, selection])
return np.asarray(y_hat)
#Gradient saliency/backprop visualization
import matplotlib.collections as collections
import operator
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96) :
end_pos = ref_seq.find("#")
fig = plt.figure(figsize=figsize)
ax = plt.gca()
if score_clip is not None :
importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)
max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01
for i in range(0, len(ref_seq)) :
mutability_score = np.sum(importance_scores[:, i])
dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax)
plt.sca(ax)
plt.xlim((0, len(ref_seq)))
plt.ylim((0, max_score))
plt.axis('off')
plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
#Execute INVASE benchmark on synthetic datasets
mask_penalty = 0.5#0.1
hidden_dims = 32
n_layers = 4
epochs = 25
batch_size = 128
model_parameters = {
'lamda': mask_penalty,
'actor_h_dim': hidden_dims,
'critic_h_dim': hidden_dims,
'n_layer': n_layers,
'batch_size': batch_size,
'iteration': int(x_train.shape[0] * epochs / batch_size),
'activation': 'relu',
'learning_rate': 0.0001
}
encoder = isol.OneHotEncoder(50)
score_clip = None
allFiles = ["optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_examples_3.csv"]
#Train INVASE
invase_model = INVASE(x_train, pred_train, 'invase', model_parameters)
invase_model.train(x_train, pred_train)
for csv_to_open in allFiles :
#Load dataset for benchmarking
dataset_name = csv_to_open.replace(".csv", "")
benchmarkSet = pd.read_csv(csv_to_open)
seq_e_test = one_hot_encode(benchmarkSet, seq_len=50)
x_test = seq_e_test[:, None, ...]
print(x_test.shape)
pred_test = predictor.predict(x_test[:, 0, ...], batch_size=32)
y_test = pred_test
y_test = (y_test >= 0.)
y_test = np.concatenate([1. - y_test, y_test], axis=1)
pred_test = (pred_test >= 0.)
pred_test = np.concatenate([1. - pred_test, pred_test], axis=1)
importance_scores_test = invase_model.importance_score(x_test)
#Evaluate INVASE model on train and test data
invase_pred_train = invase_model.predict(x_train)
invase_pred_test = invase_model.predict(x_test)
print("Training Accuracy = " + str(np.sum(np.argmax(invase_pred_train, axis=1) == np.argmax(pred_train, axis=1)) / float(pred_train.shape[0])))
print("Test Accuracy = " + str(np.sum(np.argmax(invase_pred_test, axis=1) == np.argmax(pred_test, axis=1)) / float(pred_test.shape[0])))
for plot_i in range(0, 3) :
print("Test sequence " + str(plot_i) + ":")
plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, plot_sequence_template=True, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(np.maximum(importance_scores_test[plot_i, 0, :, :].T, 0.), encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)
#Save predicted importance scores
model_name = "invase_" + dataset_name + "_conv_full_data"
np.save(model_name + "_importance_scores_test", importance_scores_test)
```
<a href="https://colab.research.google.com/github/iVibudh/TensorFlow-for-DeepLearning/blob/main/08-Time-Series-Forecasting/moving_average.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Moving average
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Setup
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
```
## Trend and Seasonality
```
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
```
## Naive Forecast
```
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150, label="Series")
plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast")
```
Now let's compute the mean absolute error between the forecasts and the actual values over the validation period:
```
keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
```
That's our baseline, now let's try a moving average.
## Moving Average
```
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast
This implementation is *much* faster than the previous one"""
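    # With a cumulative sum, the sum over each window is just the difference of two
    # cumsum entries, so every window mean is computed in O(1) instead of O(window_size).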
mov = np.cumsum(series)
mov[window_size:] = mov[window_size:] - mov[:-window_size]
return mov[window_size - 1:-1] / window_size
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, moving_avg, label="Moving average (30 days)")
keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()
```
That's worse than the naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them using differencing. Since the seasonality period is 365 days, we will subtract the value at time *t* – 365 from the value at time *t*.
```
time_a = np.array(range(1, 21))
a = np.array([1.1, 1.5, 1.6, 1.4, 1.5, 1.6,1.7, 1.8, 1.9, 2,
2.1, 2.5, 2.6, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3])
print("a: ", a, len(a))
print("time_a: ", time_a, len (time_a))
ma_a = moving_average_forecast(a, 10)
print("ma_a: ", ma_a)
diff_a = (a[10:] - a[:-10]) ### This removes TREND + SEASONALITY, so, Only Noise is left
diff_time = time_a[10:]
print("diff_time: ", diff_time)
print("diff_a: ", diff_a)
a[10:]
a[:-10]
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series, label="Series(t) – Series(t–365)")
plt.show()
```
Focusing on the validation period:
```
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plt.show()
```
Great, the trend and seasonality seem to be gone, so now we can use the moving average:
```
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plot_series(time_valid, diff_moving_avg, label="Moving Average of Diff")
plt.show()
```
Now let's bring back the trend and seasonality by adding the past values from t – 365:
```
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()
```
Better than the naive forecast, good. However, the forecasts look a bit too random, because we're just adding back past values, which were noisy. Let's apply a moving average to the past values to remove some of that noise:
```
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-359], 11) + diff_moving_avg
# moving_average_forecast(series[split_time - 370:-359], 11) = Past series have been smoothed to reduce the effects of the past noise
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_smooth_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()
```
That's starting to look pretty good! Let's see if we can do better with a Machine Learning model.
# Approximate q-learning
In this notebook you will teach a __tensorflow__ neural network to do Q-learning.
__Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
```
#XVFB will be launched if you run on a server
import os
if os.environ.get("DISPLAY") is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
state_dim
```
# Approximate (deep) Q-learning: building the network
To train a neural network policy one must have a neural network policy. Let's build it.
Since we're working with a pre-extracted features (cart positions, angles and velocities), we don't need a complicated network yet. In fact, let's build something like this for starters:

For your first run, please only use linear layers (L.Dense) and activations. Stuff like batch normalization or dropout may ruin everything if used haphazardly.
Also please avoid using nonlinearities like sigmoid & tanh: agent's observations are not normalized so sigmoids may become saturated from init.
Ideally you should start small with maybe 1-2 hidden layers with < 200 neurons and then increase network size if agent doesn't beat the target score.
```
import tensorflow as tf
import keras
import keras.layers as L
tf.reset_default_graph()
sess = tf.InteractiveSession()
keras.backend.set_session(sess)
network = keras.models.Sequential()
network.add(L.InputLayer(state_dim))
# let's create a network for approximate q-learning following guidelines above
# <YOUR CODE: stack more layers!!!1 >
network.add(L.Dense(128, activation='relu'))
network.add(L.Dense(196, activation='relu'))
network.add(L.Dense(n_actions, activation='linear'))
import numpy as np
import random
def get_action(state, epsilon=0):
"""
sample actions with epsilon-greedy policy
recap: with p = epsilon pick random action, else pick action with highest Q(s,a)
"""
q_values = network.predict(state[None])[0]
n = len(q_values)
if random.random() < epsilon:
return random.randint(0, n - 1)
else:
return np.argmax(q_values)
# return <epsilon-greedily selected action>
assert network.output_shape == (None, n_actions), "please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]"
assert network.layers[-1].activation == keras.activations.linear, "please make sure you predict q-values without nonlinearity"
# test epsilon-greedy exploration
s = env.reset()
assert np.shape(get_action(s)) == (), "please return just one action (integer)"
for eps in [0., 0.1, 0.5, 1.0]:
state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=n_actions)
best_action = state_frequencies.argmax()
assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / n_actions)) < 200
for other_action in range(n_actions):
if other_action != best_action:
assert abs(state_frequencies[other_action] - 10000 * (eps / n_actions)) < 200
print('e=%.1f tests passed'%eps)
```
### Q-learning via gradient descent
We shall now train our agent's Q-function by minimizing the TD loss:
$$ L = \frac{1}{N} \sum_i \left( Q_{\theta}(s,a) - \left[ r(s,a) + \gamma \cdot \max_{a'} Q_{-}(s', a') \right] \right)^2 $$
Where
* $s, a, r, s'$ are current state, action, reward and next state respectively
* $\gamma$ is a discount factor defined two cells above.
The tricky part is with $Q_{-}(s',a')$. From an engineering standpoint, it's the same as $Q_{\theta}$ - the output of your neural network policy. However, when doing gradient descent, __we won't propagate gradients through it__ to make training more stable (see lectures).
To do so, we shall use the `tf.stop_gradient` function, which basically says "consider this thing constant when doing backprop".
```
# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)
states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
actions_ph = keras.backend.placeholder(dtype='int32', shape=[None])
rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])
next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])
#get q-values for all actions in current states
predicted_qvalues = network(states_ph)
#select q-values for chosen actions
predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1)
gamma = 0.99
# compute q-values for all actions in next states
# predicted_next_qvalues = <YOUR CODE - apply network to get q-values for next_states_ph>
predicted_next_qvalues = network(next_states_ph)
# compute V*(next_states) using predicted next q-values
next_state_values = tf.reduce_max(predicted_next_qvalues, axis=1)
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
target_qvalues_for_actions = rewards_ph + gamma * next_state_values
# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)
#mean squared error loss to minimize
loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2
loss = tf.reduce_mean(loss)
# training function that resembles agent.update(state, action, reward, next_state) from tabular agent
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, "make sure you update q-values for chosen actions and not just all actions"
assert tf.gradients(loss, [predicted_next_qvalues])[0] is None, "make sure you don't propagate gradient w.r.t. Q_(s',a')"
assert predicted_next_qvalues.shape.ndims == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.shape.ndims == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.shape.ndims == 1, "there's something wrong with target q-values, they must be a vector"
```
### Playing the game
```
def generate_session(t_max=1000, epsilon=0, train=False):
"""play env with approximate q-learning agent and train it at the same time"""
total_reward = 0
s = env.reset()
for t in range(t_max):
a = get_action(s, epsilon=epsilon)
next_s, r, done, _ = env.step(a)
if train:
sess.run(train_step,{
states_ph: [s], actions_ph: [a], rewards_ph: [r],
next_states_ph: [next_s], is_done_ph: [done]
})
total_reward += r
s = next_s
if done: break
return total_reward
epsilon = 0.5
for i in range(1000):
session_rewards = [generate_session(epsilon=epsilon, train=True) for _ in range(100)]
print("epoch #{}\tmean reward = {:.3f}\tepsilon = {:.3f}".format(i, np.mean(session_rewards), epsilon))
epsilon *= 0.99
assert epsilon >= 1e-4, "Make sure epsilon is always nonzero during training"
if np.mean(session_rewards) > 300:
print ("You Win!")
break
```
### How to interpret results
Welcome to the f.. world of deep f...n reinforcement learning. Don't expect the agent's reward to go up smoothly. Hope for it to increase eventually. If it deems you worthy.
Seriously though,
* __mean reward__ is the average reward per game. For a correct implementation it may stay low for some 10 epochs, then start growing while oscillating insanely, and converge by ~50-100 steps depending on the network architecture.
* If it never reaches the target score by the end of the for loop, try increasing the number of hidden neurons or take a look at epsilon.
* __epsilon__ - the agent's willingness to explore. If you see that epsilon is already below 0.01 while the mean reward is still under 200, just reset it back to 0.1 - 0.5.
### Record videos
As usual, we now use `gym.wrappers.Monitor` to record a video of our agent playing the game. Unlike our previous attempts with state binarization, this time we expect our agent to act ~~(or fail)~~ more smoothly since there's no more binarization error at play.
As you already did with tabular q-learning, we set epsilon=0 for the final evaluation to prevent the agent from exploring itself to death.
```
#record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),directory="videos",force=True)
sessions = [generate_session(epsilon=0, train=False) for _ in range(100)]
env.close()
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
```
---
### Submit to coursera
```
%load_ext autoreload
%autoreload 2
from submit2 import submit_cartpole
submit_cartpole(generate_session, '[email protected]', 'gMmaBajRboD6YXKK')
```
# NOAA Wave Watch 3 and NDBC Buoy Data Comparison
*Note: this notebook requires python3.*
This notebook demonstrates how to compare the [WaveWatch III Global Ocean Wave Model](http://data.planetos.com/datasets/noaa_ww3_global_1.25x1d:noaa-wave-watch-iii-nww3-ocean-wave-model?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) and [NOAA NDBC buoy data](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) using the Planet OS API.
API documentation is available at http://docs.planetos.com. If you have questions or comments, join the [Planet OS Slack community](http://slack.planetos.com/) to chat with our development team.
For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation. https://ipython.org/ and http://matplotlib.org/. This notebook also makes use of the [matplotlib basemap toolkit.](http://matplotlib.org/basemap/index.html)
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import dateutil.parser
import datetime
from urllib.request import urlopen, Request
import simplejson as json
from datetime import date, timedelta, datetime
import matplotlib.dates as mdates
from mpl_toolkits.basemap import Basemap
```
**Important!** You'll need to replace `apikey` below with your actual Planet OS API key, which you'll find [on the Planet OS account settings page](http://data.planetos.com/account/settings/?utm_source=github&utm_medium=notebook&utm_campaign=ww3-api-notebook), and set the NDBC buoy station name you are interested in.
```
dataset_id = 'noaa_ndbc_stdmet_stations'
## stations with wave height available: '46006', '46013', '46029'
## stations without wave height: icac1', '41047', 'bepb6', '32st0', '51004'
## stations too close to coastline (no point to compare to ww3)'sacv4', 'gelo1', 'hcef1'
station = '46029'
apikey = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
```
Let's first query the API to see what stations are available for the [NDBC Standard Meteorological Data dataset.](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook)
```
API_url = 'http://api.planetos.com/v1/datasets/%s/stations?apikey=%s' % (dataset_id, apikey)
request = Request(API_url)
response = urlopen(request)
API_data_locations = json.loads(response.read())
# print(API_data_locations)
```
Now we'll use matplotlib to visualize the stations on a simple basemap.
```
m = Basemap(projection='merc',llcrnrlat=-80,urcrnrlat=80,\
llcrnrlon=-180,urcrnrlon=180,lat_ts=20,resolution='c')
fig=plt.figure(figsize=(15,10))
m.drawcoastlines()
##m.fillcontinents()
for i in API_data_locations['station']:
x,y=m(API_data_locations['station'][i]['SpatialExtent']['coordinates'][0],
API_data_locations['station'][i]['SpatialExtent']['coordinates'][1])
plt.scatter(x,y,color='r')
x,y=m(API_data_locations['station'][station]['SpatialExtent']['coordinates'][0],
API_data_locations['station'][station]['SpatialExtent']['coordinates'][1])
plt.scatter(x,y,s=100,color='b')
```
Let's examine the last five days of data. For the WaveWatch III forecast, we'll use the reference time parameter to pull forecast data from the 18:00 model run from five days ago.
```
## Find suitable reference time values
atthemoment = datetime.utcnow()
atthemoment = atthemoment.strftime('%Y-%m-%dT%H:%M:%S')
before5days = datetime.utcnow() - timedelta(days=5)
before5days_long = before5days.strftime('%Y-%m-%dT%H:%M:%S')
before5days_short = before5days.strftime('%Y-%m-%d')
start = before5days_long
end = atthemoment
reftime_start = str(before5days_short) + 'T18:00:00'
reftime_end = reftime_start
```
API request for NOAA NDBC buoy station data
```
API_url = "http://api.planetos.com/v1/datasets/{0}/point?station={1}&apikey={2}&start={3}&end={4}&count=1000".format(dataset_id,station,apikey,start,end)
print(API_url)
request = Request(API_url)
response = urlopen(request)
API_data_buoy = json.loads(response.read())
buoy_variables = []
for k,v in set([(j,i['context']) for i in API_data_buoy['entries'] for j in i['data'].keys()]):
buoy_variables.append(k)
```
Find the buoy station coordinates so we can use them later to request NOAA WaveWatch III data at the same location.
```
for i in API_data_buoy['entries']:
#print(i['axes']['time'])
if i['context'] == 'time_latitude_longitude':
longitude = (i['axes']['longitude'])
latitude = (i['axes']['latitude'])
print ('Latitude: '+ str(latitude))
print ('Longitude: '+ str(longitude))
```
API request for the NOAA WaveWatch III (NWW3) Ocean Wave Model near the location of the selected station. Note that data may not be available at the requested reference time. If the response is empty, try removing the reference time parameters `reftime_start` and `reftime_end` from the query.
```
API_url = 'http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat={0}&lon={1}&verbose=true&apikey={2}&count=100&end={3}&reftime_start={4}&reftime_end={5}'.format(latitude,longitude,apikey,end,reftime_start,reftime_end)
request = Request(API_url)
response = urlopen(request)
API_data_ww3 = json.loads(response.read())
print(API_url)
ww3_variables = []
for k,v in set([(j,i['context']) for i in API_data_ww3['entries'] for j in i['data'].keys()]):
ww3_variables.append(k)
```
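If the response comes back empty for the chosen reference time, a fallback request without the `reftime_start`/`reftime_end` parameters (a minimal sketch reusing the variables defined above) might look like this:
```
API_url_fallback = 'http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat={0}&lon={1}&verbose=true&apikey={2}&count=100&end={3}'.format(latitude, longitude, apikey, end)
request = Request(API_url_fallback)
API_data_ww3 = json.loads(urlopen(request).read())
```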
Manually review the list of WaveWatch and NDBC data variables to determine which parameters are equivalent for comparison.
```
print(ww3_variables)
print(buoy_variables)
```
Next we'll build a dictionary of corresponding variables that we want to compare.
```
buoy_model = {'wave_height':'Significant_height_of_combined_wind_waves_and_swell_surface',
'mean_wave_dir':'Primary_wave_direction_surface',
'average_wpd':'Primary_wave_mean_period_surface',
'wind_spd':'Wind_speed_surface'}
```
Read data from the JSON responses and convert the values to floats for plotting. Note that depending on the dataset, some variables have different timesteps than others, so a separate time array for each variable is recommended.
```
def append_data(in_string):
if in_string == None:
return np.nan
elif in_string == 'None':
return np.nan
else:
return float(in_string)
ww3_data = {}
ww3_times = {}
buoy_data = {}
buoy_times = {}
for k,v in buoy_model.items():
ww3_data[v] = []
ww3_times[v] = []
buoy_data[k] = []
buoy_times[k] = []
for i in API_data_ww3['entries']:
for j in i['data']:
if j in buoy_model.values():
ww3_data[j].append(append_data(i['data'][j]))
ww3_times[j].append(dateutil.parser.parse(i['axes']['time']))
for i in API_data_buoy['entries']:
for j in i['data']:
if j in buoy_model.keys():
buoy_data[j].append(append_data(i['data'][j]))
buoy_times[j].append(dateutil.parser.parse(i['axes']['time']))
for i in ww3_data:
ww3_data[i] = np.array(ww3_data[i])
ww3_times[i] = np.array(ww3_times[i])
```
Finally, let's plot the data using matplotlib.
```
buoy_label = "NDBC Station %s" % station
ww3_label = "WW3 at %s" % reftime_start
for k,v in buoy_model.items():
if np.abs(np.nansum(buoy_data[k]))>0:
fig=plt.figure(figsize=(10,5))
plt.title(k+' '+v)
plt.plot(ww3_times[v],ww3_data[v], label=ww3_label)
plt.plot(buoy_times[k],buoy_data[k],'*',label=buoy_label)
plt.legend(bbox_to_anchor=(1.5, 0.22), loc=1, borderaxespad=0.)
plt.xlabel('Time')
plt.ylabel(k)
fig.autofmt_xdate()
plt.grid()
```
# Lesson 1
```
import pandas as pd
url_dados = 'https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true'
dados = pd.read_csv(url_dados, compression = 'zip')
dados
dados.head()
dados.shape
dados['tratamento']
dados['tratamento'].unique()
dados['tempo'].unique()
dados['dose'].unique()
dados['droga'].unique()
dados['g-0'].unique()
dados['tratamento'].value_counts()
dados['dose'].value_counts()
dados['tratamento'].value_counts(normalize = True)
dados['dose'].value_counts(normalize = True)
dados['tratamento'].value_counts().plot.pie()
dados['tempo'].value_counts().plot.pie()
dados['tempo'].value_counts().plot.bar()
dados_filtrados = dados[dados['g-0'] > 0]
dados_filtrados.head()
```
# Lesson 1 Challenges
## Challenge 01: Investigate why the treatment class is so imbalanced
Depending on the type of study, it is possible to use the same control for more than one case. Note that the control group is a group to which we are not applying the effect of a given drug, so this same group can be used as the control for each of the drugs studied.
A relevant point about the dataset we are working with is that all of the control data are associated with the study of a single drug.
```
print(f"Total de dados {len(dados['id'])}\n")
print(f"Quantidade de drogas {len(dados.groupby(['droga', 'tratamento']).count()['id'])}\n")
display(dados.query('tratamento == "com_controle"').value_counts('droga'))
print()
display(dados.query('droga == "cacb2b860"').value_counts('tratamento'))
print()
```
## Challenge 02: Display the last 5 rows of the table
```
dados.tail()
```
Another option would be to use the following command:
```
dados[-5:]
```
## Challenge 03: Proportion of the treatment classes
```
dados['tratamento'].value_counts(normalize = True)
```
## Challenge 04: How many types of drugs were investigated
```
dados['droga'].unique().shape[0]
```
Another possible solution:
```
len(dados['droga'].unique())
```
## Challenge 05: Look up the query() method in the pandas documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html
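A minimal sketch of `query()` on this dataset, reusing only values that already appear earlier in this notebook:
```
# rows from the control group that were assigned the control 'drug' id
dados.query('tratamento == "com_controle" and droga == "cacb2b860"').head()
```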
## Challenge 06: Rename the columns, removing the hyphens
```
dados.columns
nome_das_colunas = dados.columns
novo_nome_coluna = []
for coluna in nome_das_colunas:
coluna = coluna.replace('-', '_')
novo_nome_coluna.append(coluna)
dados.columns = novo_nome_coluna
dados.head()
```
Now we can compare the result using `query` with the result using a boolean mask + slice:
```
dados_filtrados = dados[dados['g_0'] > 0]
dados_filtrados.head()
dados_filtrados = dados.query('g_0 > 0')
dados_filtrados.head()
```
## Challenge 07: Make the plots look nicer (matplotlib.pyplot)
```
import matplotlib.pyplot as plt
valore_tempo = dados['tempo'].value_counts(ascending=True)
valore_tempo.sort_index()
plt.figure(figsize=(15, 10))
valore_tempo = dados['tempo'].value_counts(ascending=True)
ax = valore_tempo.sort_index().plot.bar()
ax.set_title('Time windows', fontsize=20)
ax.set_xlabel('Time', fontsize=18)
ax.set_ylabel('Count', fontsize=18)
plt.xticks(rotation = 0, fontsize=16)
plt.yticks(fontsize=16)
plt.show()
```
## Challenge 08: Summary of what you learned from the data
In this lesson I used the pandas library and several of its features to explore the data. During the analysis I identified factors that are important for obtaining insights, and I also learned how to plot pie and bar charts while discussing their strengths and weaknesses.
For more information: the dataset studied in this course is a simplified version of [this Kaggle challenge](https://www.kaggle.com/c/lish-moa/overview/description) (in English).
I also recommend visiting [Connectopedia](https://clue.io/connectopedia/), a free dictionary of terms and concepts that includes definitions of cell viability and gene expression.
The Kaggle challenge is also related to these scientific papers:
Corsello et al. “Discovering the anticancer potential of non-oncology drugs by systematic viability profiling,” Nature Cancer, 2020, https://doi.org/10.1038/s43018-019-0018-6
Subramanian et al. “A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles,” Cell, 2017, https://doi.org/10.1016/j.cell.2017.10.049
# Part 2 - Lists
We learned about **variables** in Part 1. Sometimes it makes sense to group lots of items of information together in a **list**. This is a good idea when the items are all connected in some way.
For example, we might want to store the names of our friends. We could create several variables and assign the name of one friend to each of them (remember to do a Shift-Return on your keyboard, or click the Run button, to run the code):
```
friend_1 = "Fred"
friend_2 = "Jane"
friend_3 = "Rob"
friend_4 = "Sophie"
friend_5 = "Rachel"
```
This works OK. All the names are stored. But the problem is that they are all stored separately - Python does not know that they are all part of the same collection of friends.
But there's a really nice way to solve this problem: **lists**. We can create a list like this:
```
friends = [ "Fred", "Jane", "Rob", "Sophie", "Rachel" ]
```
We create a list using square brackets `[` `]`, with each item in the list separated by a comma `,`.
### List methods
There are some special **list** tools (called **methods**) that we can use on our `friends` list.
We can add a name to the list using `append`:
```
friends.append("David")
print(friends)
```
We can put the names in alphabetical order using `sort`:
```
friends.sort()
print(friends)
```
We can reverse the order using `reverse`:
```
friends.reverse()
print(friends)
```
These **list** **methods** allow us to do lots of things like this without having to write much code ourselves.
### What is a list index?
Let's make a new list. This time, we will start with an empty list, and then add to it:
```
shapes = []
print(shapes)
shapes.append("triangle")
shapes.append("square")
shapes.append("pentagon")
shapes.append("hexagon")
print(shapes)
```
These lists are great, but we might not always want to operate on the whole list at once. Instead, we might want to pick out one particular item in the list. In a **list**, each item is numbered, *starting from zero*. This number is called an **index**. So, remembering that the numbering *starts from zero*, we can access the first item in the list like this:
```
print(shapes[0])
```
Notice that the **index** is surrounded by square brackets `[` `]`. We can get the second item in `shapes` like this:
```
print(shapes[1])
```
and the last one like this:
```
print(shapes[3])
```
Python also has a special way of indexing the last item in a list, and this is especially useful if you're not sure how many items there are in your list. You can use a negative index:
```
print(shapes[-1])
```
We can ask Python to tell us the index of an item in the list:
```
idx = shapes.index("square")
print(idx)
```
Actually, the `index` method tells us the index of the first time that an item appears in a list.
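For example, with a quick throwaway list that contains a repeated item:
```
letters = ["a", "b", "a", "c"]
print(letters.index("a"))
```
This prints `0`, the index of the first `"a"`, even though `"a"` also appears later in the list.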
### Displaying our data in a bar chart
Let's have another look at our `shapes` list:
```
print(shapes)
```
and let's make another list that has the number of sides that each shape has:
```
sides=[3,4,5,6]
print(sides)
```
It would be nice to draw a bar chart showing the number of sides that each shape has. But drawing all those bars and axes would take quite a lot of code. Don't worry - someone has already done this, and we can use the code that they have written using the **import** command.
```
import matplotlib.pyplot as plt
%matplotlib inline
```
(The `%matplotlib inline` is a Jupyter notebook command that means the plots we make will appear right here in our notebook)
Now we can plot our bar chart using only three lines of code:
```
plt.bar(shapes,sides)
plt.xlabel("Shape")
plt.ylabel("Number of sides");
```
That's a really nice, neat bar chart!
Try changing the lists and re-plotting the bar chart. You can add another shape, for example.
### What have we covered in this notebook?
We've learned about **lists** and the list **methods** that we can use to work with them. We've also learned about how to access an item in the list using an **index**. We then plotted a bar chart using code written by someone else - we used **import** to get this code.
<a href="https://colab.research.google.com/github/osipov/edu/blob/master/pyt0/Demo_Data_Visualization.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a>
# Data Visualization
```
%matplotlib inline
import torch as pt
import matplotlib.pyplot as plt
x = pt.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, pt.sin(x), '-')
plt.plot(x, pt.cos(x), '--')
plt.show() # not needed in notebook, but needed in production
```
## You can save your plots...
```
fig.savefig('my_figure.png')
!ls -lh my_figure.png
# On Windows, comment out the above and uncomment below
#!dir my_figure.png
```
## ...and reload saved images for display inside the notebook
```
from IPython.display import Image
Image('my_figure.png')
# matplotlib supports many different file types
fig.canvas.get_supported_filetypes()
```
## MATLAB-Style Interface
```
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, pt.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, pt.cos(x));
```
## Grids
```
plt.style.use('seaborn-whitegrid')
fig = plt.figure()
ax = plt.axes()
```
## Draw a Function
```
plt.style.use('seaborn-whitegrid')
fig = plt.figure()
ax = plt.axes()
x = pt.linspace(0, 10, 1000)
ax.plot(x, pt.sin(x));
```
## Specify axes limits...
```
plt.plot(x, pt.sin(x))
plt.xlim(-1, 11)
plt.ylim(-1.5, 1.5);
```
## Flipping the Axes Limits
```
plt.plot(x, pt.sin(x))
plt.xlim(10, 0)
plt.ylim(1.2, -1.2);
```
## Axis
```
plt.plot(x, pt.sin(x))
plt.axis([-1, 11, -1.5, 1.5]);
```
## ...or let matplotlib "tighten" the axes...
```
plt.plot(x, pt.sin(x))
plt.axis('tight');
```
## ...or make the limits equal
```
plt.plot(x, pt.sin(x))
plt.axis('equal');
```
## Add titles and axis labels
```
plt.plot(x, pt.sin(x))
plt.title("A Sine Curve")
plt.xlabel("x")
plt.ylabel("sin(x)");
```
## ...and a legend
```
plt.plot(x, pt.sin(x), '-g', label='sin(x)')
plt.plot(x, pt.cos(x), ':b', label='cos(x)')
plt.axis('equal')
plt.legend();
```
## Object-Oriented Interface
```
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, pt.sin(x))
ax[1].plot(x, pt.cos(x));
```
## OO interface to axes
```
ax = plt.axes()
ax.plot(x, pt.sin(x))
ax.set(xlim=(0, 10), ylim=(-2, 2),
xlabel='x', ylabel='sin(x)',
title='A Simple Plot');
```
## Interface Differences
| MATLAB-Style | OO Style |
|--------------|-----------------|
| plt.xlabel() | ax.set_xlabel() |
| plt.ylabel() | ax.set_ylabel() |
| plt.xlim() | ax.set_xlim() |
| plt.ylim() | ax.set_ylim() |
| plt.title() | ax.set_title() |
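For instance, the following two snippets produce equivalent labels and titles (a minimal sketch reusing the `x` tensor defined above):
```
# MATLAB-style interface
plt.plot(x, pt.sin(x))
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('MATLAB-style labels')

# Object-oriented interface
fig, ax = plt.subplots()
ax.plot(x, pt.sin(x))
ax.set_xlabel('x')
ax.set_ylabel('sin(x)')
ax.set_title('OO-style labels')
```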
## Custom legends
```
x = pt.linspace(0, 10, 1000)
plt.style.use('classic')
plt.figure(figsize=(12,6))
plt.rc('xtick', labelsize=20)
plt.rc('ytick', labelsize=20)
fig, ax = plt.subplots()
ax.plot(x, pt.sin(x), '-b', label='Sine')
ax.plot(x, pt.cos(x), '--r', label='Cosine')
ax.axis('equal')
leg = ax.legend()
ax.legend(loc='upper left', frameon=False)
fig
ax.legend(frameon=False, loc='lower center', ncol=2)
fig
```
## Many ways to specify color...
```
plt.plot(x, pt.sin(x - 0), color='blue') # specify color by name
plt.plot(x, pt.sin(x - 1), color='g') # short color code (rgbcmyk)
plt.plot(x, pt.sin(x - 2), color='0.75') # Grayscale between 0 and 1
plt.plot(x, pt.sin(x - 3), color='#FFDD44') # Hex code (RRGGBB from 00 to FF)
plt.plot(x, pt.sin(x - 4), color=(1.0,0.2,0.3)) # RGB tuple, values 0 to 1
plt.plot(x, pt.sin(x - 5), color='chartreuse'); # all HTML color names supported
```
## Specifying different line styles...
```
plt.plot(x, x + 0, linestyle='solid')
plt.plot(x, x + 1, linestyle='dashed')
plt.plot(x, x + 2, linestyle='dashdot')
plt.plot(x, x + 3, linestyle='dotted');
# For short, you can use the following codes:
plt.plot(x, x + 4, linestyle='-') # solid
plt.plot(x, x + 5, linestyle='--') # dashed
plt.plot(x, x + 6, linestyle='-.') # dashdot
plt.plot(x, x + 7, linestyle=':'); # dotted
```
## Specify different plot markers
```
rnd1 = pt.manual_seed(0)
rnd2 = pt.manual_seed(1)
for marker in 'o.,x+v^<>sd':
plt.plot(pt.rand(5, generator = rnd1), pt.rand(5, generator = rnd2), marker,
label='marker={}'.format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);
```
## Scatterplots with Colors and Sizes
```
pt.manual_seed(0);
x = pt.randn(100)
y = pt.randn(100)
colors = pt.rand(100)
sizes = 1000 * pt.rand(100)
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar(); # show color scale
```
## Visualizing Multiple Dimensions
```
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.2,
s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
```
## Histograms
```
data = pt.randn(10000)
plt.hist(data);
plt.hist(data, bins=30, alpha=0.5,
histtype='stepfilled', color='steelblue',
edgecolor='none')
```
## Display a grid of images
```
# load images of the digits 0 through 5 and visualize several of them
from sklearn.datasets import load_digits
digits = load_digits(n_class=6)
fig, ax = plt.subplots(8, 8, figsize=(6, 6))
for i, axi in enumerate(ax.flat):
axi.imshow(digits.images[i], cmap='binary')
axi.set(xticks=[], yticks=[])
```
Copyright 2020 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Classification
## MNIST
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.keys()
X, y = mnist['data'], mnist['target']
X.shape, y.shape
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
some_digit = X[0]
some_digit_img = some_digit.reshape(28, 28)
plt.imshow(some_digit_img, cmap='binary')
plt.axis('off')
plt.show()
y[0]
import numpy as np
y = y.astype(np.uint8)
y[0]
# MNIST is already split into training and test set
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
```
## Training a Binary Classifier
To start, let's build a binary classifier that identifies a single digit: the digit 5.
```
y_train_5, y_test_5 = (y_train == 5), (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
```
## Performance Measures
### Measuring Accuracy Using Cross-Validation
#### Implementing Cross-Validation
The following code is roughly equivalent to *Scikit-Learn*'s `cross_val_score` function.
```
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
for train_ix, test_ix in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_ix]
y_train_folds = y_train_5[train_ix]
X_test_folds = X_train[test_ix]
y_test_folds = y_train_5[test_ix]
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_folds)
n_correct = np.sum(y_pred == y_test_folds)
print(n_correct / len(y_pred))
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring='accuracy')
```
This seems pretty good! However, let's check a classifier that always classifies an image as **not 5**.
```
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
return self
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring='accuracy')
```
Over 90% accuracy! The catch is that only about 10% of the whole dataset consists of images of 5s (there are 10 digits in total), so a classifier that always predicts "not 5" is right roughly 90% of the time.
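A quick sanity check of that claim (a minimal sketch reusing the `y_train_5` labels defined above):
```
# fraction of training images that are NOT 5s -- roughly 0.9,
# matching the accuracy of the always-"not 5" classifier
(~y_train_5).mean()
```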
### Confusion Matrix
The idea of a *confusion matrix* is to count the number of times class A is classified as class B and so on.
To compute the confusion matrix one must first get predictions (here on the training set; let's keep the test set aside). We can obtain predictions from cross-validation with `cross_val_predict` and pass them to `confusion_matrix`.
For a binary classification the confusion matrix looks like this:
| Actual \ Predicted | N  | P  |
|--------------------|----|----|
| N                  | TN | FP |
| P                  | FN | TP |
Rows are the *actual* class and columns are the *predicted* class. Furthermore:
* *P* - *positive* (class)
* *N* - *negative* (class)
* *TN* - *true negative*
* *TP* - *true positive*
* *FN* - *false negative*
* *FP* - *false positive*
```
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
confusion_matrix(y_train_5, y_train_pred)
y_train_perfect_predictions = y_train_5 # pretend we reached perfection
confusion_matrix(y_train_5, y_train_perfect_predictions)
```
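To connect the matrix layout above to code, the four counts can be unpacked directly; for a binary problem, scikit-learn returns them in the order TN, FP, FN, TP when flattened (a minimal sketch reusing `y_train_5` and `y_train_pred` from the cell above):
```
tn, fp, fn, tp = confusion_matrix(y_train_5, y_train_pred).ravel()
tn, fp, fn, tp
```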
### Precision and Recall
**Precision** is the accuracy of positive predictions and is defined as $\text{precision} = \frac{TP}{TP + FP}$
*A trivial way to ensure 100% precision is to make a single prediction and make sure it's correct.*
**Recall (sensitivity, true positive rate)** is the ratio of positive instances that are correctly detected and is defined as $\text{recall} = \frac{TP}{TP + FN}$
Intuitive notion of precision and recall:
* *precision* - how often the prediction is correct when the classifier predicts the positive class
* *recall* - how likely the classifier is to detect an instance whose actual class is positive
```
from sklearn.metrics import precision_score, recall_score
precision = precision_score(y_train_5, y_train_pred)
recall = recall_score(y_train_5, y_train_pred)
precision, recall
```
Precision and recall are handy, but it's even better to have a single score on which we can compare classifiers.
$\mathbf{F_1}$ score is the *harmonic mean* of precision and recall. A regular mean gives the same weight to all values, whereas the harmonic mean gives much more importance to low values. So in order to have a high $F_1$ score, both precision and recall must be high.
$$
F_1 = \frac{2}{\frac{1}{\text{precision}} + \frac{1}{\text{recall}}} = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}} = \frac{TP}{TP + \frac{FN + FP}{2}}
$$
```
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
```
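As a quick check of the formula, the same score can be recomputed by hand from the `precision` and `recall` values obtained earlier (a minimal sketch; it should agree with `f1_score` up to floating-point error):
```
# harmonic mean of precision and recall
2 * precision * recall / (precision + recall)
```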
### Precision/Recall Trade-off
*Increasing precision reduces recall and vice versa.*
How does the classification work? The `SGDClassifier`, for instance, computes for each instance a score based on a *decision function*. If this score is greater than a *decision threshold*, it assigns the instance to the positive class. Shifting this threshold will likely result in a change in precision and recall.
```
y_scores = sgd_clf.decision_function([some_digit])
y_scores
def predict_some_digit(threshold):
return (y_scores > threshold)
# Raising the threshold decreases recall
predict_some_digit(threshold=0), predict_some_digit(threshold=8000)
```
From the example above, increasing the decision threshold decreases recall (`some_digit` is actually a 5, and with the increased threshold it is no longer recognized).
But how to decide which threshold to use?
```
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function')
from sklearn.metrics import precision_recall_curve
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], 'b--', label='Precision')
plt.plot(thresholds, recalls[:-1], 'g-', label='Recall')
plt.xlabel('Threshold')
plt.legend(loc='center right', fontsize=16)
plt.grid(True)
plt.axis([-50000, 50000, 0, 1])
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
recall_90_precision = recalls[np.argmax(precisions >= 0.9)]
threshold_90_precision = thresholds[np.argmax(precisions >= 0.9)]
plt.figure(figsize=(8, 4))
# plot precision and recall curves vs decision threshold
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
# plot threshold corresponding to 90% precision
plt.plot([threshold_90_precision, threshold_90_precision], [0., 0.9], 'r:')
# plot precision level up to 90% precision threshold
plt.plot([-50000, threshold_90_precision], [0.9, 0.9], 'r:')
# plot recall level up to 90% precision threshold
plt.plot([-50000, threshold_90_precision], [recall_90_precision, recall_90_precision], 'r:')
# plot points on precision and recall curves corresponding to 90% precision threshold
plt.plot([threshold_90_precision], [0.9], 'ro')
plt.plot([threshold_90_precision], [recall_90_precision], 'ro')
plt.show()
plt.figure(figsize=(8, 6))
# plot precision vs recall
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
# style the plot
plt.axis([0, 1, 0, 1])
plt.grid(True)
plt.title('Precision vs Recall')
# plot 90% precision point
plt.plot([recall_90_precision], [0.9], 'ro')
plt.plot([recall_90_precision, recall_90_precision], [0., 0.9], 'r:')
plt.plot([0.0, recall_90_precision], [0.9, 0.9], 'r:')
plt.show()
y_train_pred_90 = (y_scores >= threshold_90_precision)
precision_90 = precision_score(y_train_5, y_train_pred_90)
recall_90_precision = recall_score(y_train_5, y_train_pred_90)
precision_90, recall_90_precision
```
### The ROC Curve
The **receiver operating characteristic (ROC)** curve is similar to the precision-recall curve, but instead plots the *true positive rate* (recall, sensitivity) against the *false positive rate* (FPR). The FPR is 1 minus the *true negative rate* (specificity). I.e., the ROC curve plots *sensitivity* against 1 - *specificity*.
```
from sklearn.metrics import roc_curve
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.grid(True)
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
fpr_90 = fpr[np.argmax(tpr >= recall_90_precision)]
plt.figure(figsize=(8, 6))
# plot the ROC curve
plot_roc_curve(fpr, tpr)
# plot point of 90% precision on the ROC curve
plt.plot([fpr_90], [recall_90_precision], 'ro')
plt.show()
```
Another way to compare classifiers is to measure the **area under the curve (AUC)**. A perfect classifier would have an AUC score of 1, whereas a completely random one would have 0.5 (this corresponds to the diagonal line in the ROC plot).
```
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
```
As a rule of thumb, use PR curve when
* positive class is rare
* we care more about false positives than false negatives
otherwise ROC curve might be better.
*For instance in the plot above, it might seem that the AUC is quite good, but that's just because there are only a few examples of the positive class (5s). In this case, the PR curve presents a much more realistic view.*
The following example shows a `RandomForestClassifier`, which does not have a `decision_function` method. Instead, it has a `predict_proba` method returning class probabilities. In general, *Scikit-Learn* models will have one or the other method, or both.
```
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_proba_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method='predict_proba')
y_scores_forest = y_proba_forest[:, 1] # score = probability of the positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
recall_90_precision_forest = tpr_forest[np.argmax(fpr_forest >= fpr_90)]
plt.figure(figsize=(8, 6))
# plot the ROC curve of the SGD
plot_roc_curve(fpr, tpr, label='SGD')
# plot the ROC curve of the Random Forest
plot_roc_curve(fpr_forest, tpr_forest, label='Random Forest')
# plot point of 90% precision on the SGD ROC curve
plt.plot([fpr_90], [recall_90_precision], 'ro')
# plot point of 90% precision on the Random Forest ROC curve
plt.plot([fpr_90], [recall_90_precision_forest], 'ro')
plt.legend(loc='lower right', fontsize=16)
plt.show()
```
## Multiclass Classification
**Multiclass (Multinomial) Classifiers**:
* *Logistic Regression*
* *Random Forest*
* *Naive Bayes*
**Binary Classifiers**:
* *SGD*
* *SVM*
Strategies to turn binary classifiers into multiclass:
* **One-versus-the-rest (OvR)**: Train one classifier per class. When predicting class for new instance, get the score from each one and choose the class with the highest score.
* **One-versus-one (OvO)**: Train one classifier for each pair of classes (for $N$ classes that's $N \times (N - 1) / 2$ classifiers). When predicting, run the instance through all classifiers and choose the class which wins the most duels. The main advantage is that each classifier needs only the portion of the training set containing its pair of classes, which is good for classifiers that don't scale well with dataset size (e.g. SVM).
```
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto", random_state=42)
svm_clf.fit(X_train[:1000], y_train[:1000])
svm_clf.predict([some_digit])
some_digit_scores = svm_clf.decision_function([some_digit])
some_digit_scores
some_digit_class = np.argmax(some_digit_scores)
svm_clf.classes_[some_digit_class]
```
One can manually select the strategy by wrapping the model class into `OneVsRestClassifier` or `OneVsOneClassifier`.
```
from sklearn.multiclass import OneVsRestClassifier
ovr_clf = OneVsRestClassifier(SVC(gamma="auto", random_state=42))
ovr_clf.fit(X_train[:1000], y_train[:1000])
ovr_clf.predict([some_digit])
len(ovr_clf.estimators_)
```
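For comparison, here is a hedged sketch of the OvO strategy on the same training subset; with 10 classes it should fit $10 \times 9 / 2 = 45$ pairwise classifiers:
```
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SVC(gamma="auto", random_state=42))
ovo_clf.fit(X_train[:1000], y_train[:1000])
ovo_clf.predict([some_digit]), len(ovo_clf.estimators_)
```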
`SGDClassifier` uses *OvR* under the hood
```
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
sgd_clf.decision_function([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring='accuracy')
```
CV on the SGD classifier shows pretty good accuracy compared to a dummy (random) classifier, which would score around 10%. This can be improved even further by simply scaling the input.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring='accuracy')
```
### Error Analysis
```
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
plt.matshow(conf_mx, cmap=plt.cm.gray)
plt.title('Training set confusion matrix for the SGD classifier')
plt.show()
```
Let's transform the confusion matrix a bit to focus on the errors:
1. divide each value by the number of instances (images in this case) in that class
1. fill diagonal with zeros to keep just the errors
```
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.title('Class-normalized confusion matrix with 0 on diagonal')
plt.show()
```
## Multilabel Classification
*Multilabel classification* refers to a classification task where the classifier predicts multiple classes at once (output is a boolean vector).
```
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
# This takes too long to evaluate but normally it would output the F1 score
# y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3, n_jobs=-1)
# f1_score(y_multilabel, y_train_knn_pred, average='macro')
```
## Multioutput Classification
*Multioutput-multiclass* or just *multioutput classification* is a generalization of multilabel classification where each label can be multiclass (categorical, not just boolean).
Following example removes noise from images. In this setup the output is one label per pixel (multilabel) and each pixel's label can have multiple values - pixel intensities (multioutput).
```
# modified training set
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
# modified test set
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
# targets are original images
y_train_mod = X_train
y_test_mod = X_test
some_index = 0
# noisy image
plt.subplot(121)
plt.imshow(X_test_mod[some_index].reshape(28, 28), cmap='binary')
plt.axis('off')
# original image
plt.subplot(122)
plt.imshow(y_test_mod[some_index].reshape(28, 28), cmap='binary')
plt.axis('off')
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plt.imshow(clean_digit.reshape(28, 28), cmap='binary')
plt.axis('off')
plt.show()
```
## Extra Material
### Dummy Classifier
```
from sklearn.dummy import DummyClassifier
dummy_clf = DummyClassifier(strategy='prior')
y_probas_dummy = cross_val_predict(dummy_clf, X_train, y_train_5, cv=3, method='predict_proba')
y_scores_dummy = y_probas_dummy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dummy)
plot_roc_curve(fprr, tprr)
```
## Exercises
### Data Augmentation
```
from scipy.ndimage import shift
def shift_image(image, dx, dy):
image = image.reshape((28, 28))
shifted_image = shift(image, [dy, dx], cval=0, mode='constant')
return shifted_image.reshape([-1])
image = X_train[1000]
shifted_image_down = shift_image(image, 0, 5)
shifted_image_left = shift_image(image, -5, 0)
plt.figure(figsize=(12, 3))
# original image
plt.subplot(131)
plt.title('Original', fontsize=14)
plt.imshow(image.reshape(28, 28), interpolation='nearest', cmap='Greys')
# image shifted down
plt.subplot(132)
plt.title('Shifted down', fontsize=14)
plt.imshow(shifted_image_down.reshape(28, 28), interpolation='nearest', cmap='Greys')
# image shifted left
plt.subplot(133)
plt.title('Shifted left', fontsize=14)
plt.imshow(shifted_image_left.reshape(28, 28), interpolation='nearest', cmap='Greys')
plt.show()
from sklearn.metrics import accuracy_score
X_train_augmented = [image for image in X_train]
y_train_augmented = [label for label in y_train]
shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))
for dx, dy in shifts:
for image, label in zip(X_train, y_train):
X_train_augmented.append(shift_image(image, dx, dy))
y_train_augmented.append(label)
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]
# Best params without augmentation
knn_clf = KNeighborsClassifier(n_neighbors=4, weights='distance')
knn_clf.fit(X_train_augmented, y_train_augmented)
# Accuracy without augmentation: 0.9714
y_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
<center>
<img src="https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# **Hands-on Lab : Web Scraping**
Estimated time needed: **30 to 45** minutes
## Objectives
In this lab you will perform the following:
* Extract information from a given web site
* Write the scraped data into a csv file.
## Extract information from the given web site
You will extract the data from the below web site: <br>
```
#this url contains the data you need to scrape
url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/labs/datasets/Programming_Languages.html"
```
The data you need to scrape is the **name of the programming language** and the **average annual salary**.<br> It is a good idea to open the url in your web browser and study the contents of the web page before you start to scrape.
Import the required libraries
```
# Your code here
from bs4 import BeautifulSoup
import requests
import pandas as pd
```
Download the webpage at the url
```
#your code goes here
data = requests.get(url).text
```
Create a soup object
```
#your code goes here
soup = BeautifulSoup(data, 'html5lib')
```
Scrape the `Language name` and `annual average salary`.
```
#your code goes here
lang_data = pd.DataFrame(columns=['Language', 'Avg_Salary'])
table = soup.find('table')
for row in table.find_all('tr'):
cols = row.find_all('td')
lang_name = cols[1].getText()
avg_salary = cols[3].getText()
lang_data = lang_data.append({"Language":lang_name, "Avg_Salary":avg_salary}, ignore_index=True)
#print("{}----------{}".format(lang_name, avg_salary))
```
Save the scraped data into a file named *popular-languages.csv*
```
# your code goes here
#Drop the first row
#lang_data.drop(0, axis=0, inplace=True)
lang_data.to_csv('popular-languages.csv', index=False)
```
## Authors
Ramesh Sannareddy
### Other Contributors
Rav Ahuja
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ----------------- | ---------------------------------- |
| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |
Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license/?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01).
Datashader provides a flexible series of processing stages that map from raw data into viewable images. As shown in the [Introduction](1-Introduction.ipynb), using datashader can be as simple as calling ``datashade()``, but understanding each of these stages will help you get the most out of the library.
The stages in a datashader pipeline are similar to those in a [3D graphics shading pipeline](https://en.wikipedia.org/wiki/Graphics_pipeline):

Here the computational steps are listed across the top of the diagram, while the data structures or objects are listed along the bottom. Breaking up the computations in this way is what makes Datashader able to handle arbitrarily large datasets, because only one stage (Aggregation) requires access to the entire dataset. The remaining stages use a fixed-sized data structure regardless of the input dataset, allowing you to use any visualization or embedding methods you prefer without running into performance limitations.
In this notebook, we'll first put together a simple, artificial example to get some data, and then show how to configure and customize each of the data-processing stages involved:
1. [Projection](#Projection)
2. [Aggregation](#Aggregation)
3. [Transformation](#Transformation)
4. [Colormapping](#Colormapping)
5. [Embedding](#Embedding)
## Data
For an example, we'll construct a dataset made of five overlapping 2D Gaussian distributions with different σs (spatial scales). By default we'll have 10,000 datapoints from each category, but you should see sub-second response times even for 1 million datapoints per category if you increase `num`.
```
import pandas as pd
import numpy as np
from collections import OrderedDict as odict
num=10000
np.random.seed(1)
dists = {cat: pd.DataFrame(odict([('x',np.random.normal(x,s,num)),
('y',np.random.normal(y,s,num)),
('val',val),
('cat',cat)]))
for x, y, s, val, cat in
[( 2, 2, 0.03, 10, "d1"),
( 2, -2, 0.10, 20, "d2"),
( -2, -2, 0.50, 30, "d3"),
( -2, 2, 1.00, 40, "d4"),
( 0, 0, 3.00, 50, "d5")] }
df = pd.concat(dists,ignore_index=True)
df["cat"]=df["cat"].astype("category")
```
Datashader can work with many different data objects provided by different data libraries depending on the type of data involved, such as columnar data in [Pandas](http://pandas.pydata.org) or [Dask](http://dask.pydata.org) dataframes, gridded multidimensional array data using [xarray](http://xarray.pydata.org), columnar data on GPUs using [cuDF](https://github.com/rapidsai/cudf), multidimensional arrays on GPUs using [CuPy](https://cupy.chainer.org/), and ragged arrays using [SpatialPandas](https://github.com/holoviz/spatialpandas) (see the [Performance User Guide](../10_Performance.ipynb) for a guide to selecting an appropriate library). Here, we're using a Pandas dataframe, with 50,000 rows by default:
```
df.tail()
```
To illustrate this dataset, we'll make a quick-and-dirty Datashader plot that dumps these x,y coordinates into an image:
```
import datashader as ds
import datashader.transfer_functions as tf
%time tf.shade(ds.Canvas().points(df,'x','y'))
```
Without any special tweaking, datashader is able to reveal the overall shape of this distribution faithfully: four summed 2D normal distributions of different variances, arranged at the corners of a square, overlapping another very high-variance 2D normal distribution centered in the square. This immediately obvious structure makes a great starting point for exploring the data, and you can then customize each of the various stages involved as described below.
Of course, this is just a static plot, and you can't see what the axes are, so we can instead embed this data into an interactive plot if we prefer:
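One way to do that (a hedged sketch, assuming HoloViews and its Bokeh backend are installed; this is not required for the rest of the notebook) is to wrap the dataframe in a HoloViews element and let its `datashade` operation re-render the image on every zoom or pan:
```
import holoviews as hv
from holoviews.operation.datashader import datashade
hv.extension('bokeh')

# dynamically re-aggregates and re-shades whenever the axes change
datashade(hv.Points(df, ['x', 'y']))
```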
Here, if you are running a live Python process, you can enable the "wheel zoom" tool on the right, zoom in anywhere in the distribution, and datashader will render a new image that shows the full distribution at that new location. If you are viewing this on a static web site, zooming will simply make the existing set of pixels larger, because this dynamic updating requires Python.
Now that you can see the overall result, we'll unpack each of the steps in the Datashader pipeline and show how this image is constructed from the data.
## Projection
Datashader is designed to render datasets projected on to a 2D rectangular grid, eventually generating an image where each pixel corresponds to one cell in that grid. The ***Projection*** stage is primarily conceptual, as it consists of you deciding what you want to plot and how you want to plot it:
- **Variables**: Select which variable you want to have on the *x* axis, and which one for the *y* axis. If those variables are not already columns in your dataframe (e.g. if you want to do a coordinate transformation), you'll need to create suitable columns mapping directly to *x* and *y* for use in the next step. For this example, the "x" and "y" columns are conveniently named `x` and `y` already, but any column name can be used for these axes.
- **Ranges**: Decide what ranges of those values you want to map onto the scene. If you omit the ranges, datashader will calculate the ranges from the data values, but you will often wish to supply explicit ranges for three reasons:
1. Calculating the ranges requires a complete pass over the data, which takes nearly as much time as actually aggregating the data, so your plots will be about twice as fast if you specify the ranges.
2. Real-world datasets often have some outliers with invalid values, which can make it difficult to see the real data, so after your first plot you will often want to specify only the range that appears to have valid data.
3. Over the valid range of data, you will often be mainly interested in a specific region, allowing you to zoom in to that area (though with an interactive plot you can always do that as needed).
- **Axis types**: Decide whether you want `'linear'` or `'log'` axes.
- **Resolution**: Decide what size of aggregate array you are going to want.
Here's an example of specifying a ``Canvas`` (a.k.a. "Scene") object for a 300x300-pixel image covering the range +/-8.0 on both axes:
```
canvas = ds.Canvas(plot_width=300, plot_height=300,
x_range=(-8,8), y_range=(-8,8),
x_axis_type='linear', y_axis_type='linear')
```
At this stage, no computation has actually been done -- the `canvas` object is a purely declarative, recording your preferences to be applied in the next stage.
<!-- Need to move the Points/Lines/Rasters discussion into the section above once the API is rationalized, and rename Canvas to Scene. -->
## Aggregation
<!-- This section really belongs under Scene, above-->
Once a `Canvas` object has been specified, it can then be used to guide aggregating the data into a fixed-sized grid. Data is assumed to consist of a series of items, each of which has some visible representation (its rendering as a "glyph") that is combined with the representation of other items to produce an aggregate representation of the whole set of items in the rectangular grid. The available glyph types for representing a data item are currently:
- **Canvas.points**: each data item is a coordinate location (an x,y pair), mapping into the single closest grid cell to that datapoint's location.
- **Canvas.line**: each data item is a coordinate location, mapping into every grid cell falling between this point's location and the next in a straight line segment.
- **Canvas.area**: each data item is a coordinate location, rendered as a shape filling the axis-aligned area between this point, the next point, and a baseline (e.g. zero, filling the area between a line and a base).
- **Canvas.trimesh**: each data item is a triple of coordinate locations specifying a triangle, filling in the region bounded by that triangle.
- **Canvas.polygons**: each data item is a sequence of coordinate locations specifying a polygon, filling in the region bounded by that polygon (minus holes if specified separately).
- **Canvas.raster**: the collection of data items is an array specifying regularly spaced axis-aligned rectangles forming a regular grid; each cell in this array is rendered as a filled rectangle.
- **Canvas.quadmesh**: the collection of data items is an array specifying irregularly spaced quadrilaterals forming a grid that is regular in the input space but can have arbitrary rectilinear or curvilinear shapes in the aggregate grid; each cell in this array is rendered as a filled quadrilateral.
These types are each covered in detail in the [User Guide](../user_guide/). Datashader can be extended to add additional types here and in each section below; see [Extending Datashader](../user_guide/9-Extending.ipynb) for more details. Many other plots like time series and network graphs can be constructed out of these basic primitives.
<!-- (to here) -->
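As a hedged illustration of how the same `Canvas` drives different glyphs (a minimal sketch using a tiny made-up time series rather than the scatter data in `df`; the names `ts` and `line_canvas` are new here):
```
ts = pd.DataFrame({'t': np.linspace(0, 10, 100),
                   'v': np.sin(np.linspace(0, 10, 100))})
line_canvas = ds.Canvas(plot_width=300, plot_height=150,
                        x_range=(0, 10), y_range=(-1.5, 1.5))
tf.Images(tf.shade(line_canvas.points(ts, 't', 'v'), name="points glyph"),
          tf.shade(line_canvas.line(ts, 't', 'v'), name="line glyph"))
```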
### Reductions
Once you have determined your mapping, you'll next need to choose a reduction operator to use when aggregating multiple datapoints into a given pixel. For points, each datapoint is mapped into a single pixel, while the other glyphs have spatial extent and can thus map into multiple pixels, each of which operates the same way. All glyphs act like points if the entire glyph is contained within that pixel. Here we will talk only about "datapoints" for simplicity, which for an area-based glyph should be interpreted as "the part of that glyph that falls into this pixel".
All of the currently supported reduction operators are incremental, which means that we can efficiently process datasets in a single pass. Given an aggregate bin to update (typically corresponding to one eventual pixel) and a new datapoint, the reduction operator updates the state of the bin in some way. (Actually, datapoints are normally processed in batches for efficiency, but it's simplest to think about the operator as being applied per data point, and the mathematical result should be the same.) A large number of useful [reduction operators](https://datashader.org/api.html#reductions) are supplied in `ds.reductions`, including:
**`count(column=None)`**:
increment an integer count each time a datapoint maps to this bin. The resulting aggregate array will be an unsigned integer type, allowing counts to be distinguished from the other types that are normally floating point.
**`any(column=None)`**:
the bin is set to 1 if any datapoint maps to it, and 0 otherwise.
**`sum(column)`**:
add the value of the given column for this datapoint to a running total for this bin.
**`by(column, reduction)`**:
given a bin with categorical data (i.e., [Pandas' `categorical` datatype](https://pandas-docs.github.io/pandas-docs-travis/categorical.html)), aggregate each category separately, accumulating the given datapoint in an appropriate category within this bin. These categories can later be collapsed into a single aggregate if needed; see examples below.
**`summary(name1=op1,name2=op2,...)`**:
allows multiple reduction operators to be computed in a single pass over the data; just provide a name for each resulting aggregate and the corresponding reduction operator to use when creating that aggregate. If multiple aggregates are needed for the same dataset and the same Canvas, using `summary` will generally be much more efficient than making multiple separate passes over the dataset.
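For instance, a hedged sketch of the `summary` operator, reusing the `canvas` and `df` objects defined earlier (the aggregate names `counts` and `mean_val` are arbitrary):
```
multi = canvas.points(df, 'x', 'y',
                      agg=ds.summary(counts=ds.count(), mean_val=ds.mean('val')))
tf.Images(tf.shade(multi['counts'], name="counts"),
          tf.shade(multi['mean_val'], name="mean_val"))
```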
The API documentation contains the complete list of [reduction operators](https://datashader.org/api.html#reductions) provided, including `mean`, `min`, `max`, `var` (variance), and `std` (standard deviation). The reductions are also imported into the ``datashader`` namespace for convenience, so that they can be accessed like ``ds.mean()`` here.
For the operators above, those accepting a `column` argument will only do the operation if the value of that column for this datapoint is not `NaN`. E.g. `count` with a column specified will count the datapoints having non-`NaN` values for that column.
Once you have selected your reduction operator, you can compute the aggregation for each pixel-sized aggregate bin:
```
canvas.points(df, 'x', 'y', agg=ds.count())
```
The result will be an [xarray](http://xarray.pydata.org) `DataArray` data structure containing the bin values (typically one value per bin, but more for multiple category or multiple-aggregate operators) along with axis range and type information.
We can visualize this array in many different ways by customizing the pipeline stages described in the following sections, but for now we'll simply render images with `tf.shade()` using the default parameters to show the effects of a few different aggregate operators:
```
tf.Images(tf.shade( canvas.points(df,'x','y', ds.count()), name="count()"),
tf.shade( canvas.points(df,'x','y', ds.any()), name="any()"),
tf.shade( canvas.points(df,'x','y', ds.mean('y')), name="mean('y')"),
tf.shade(50-canvas.points(df,'x','y', ds.mean('val')), name="50- mean('val')"))
```
Here ``count()`` renders each bin's count in a different color, to show the true distribution, while ``any()`` turns on a pixel if any point lands in that bin, and ``mean('y')`` averages the `y` column for every datapoint that falls in that bin. Of course, since every datapoint falling into a bin happens to have the same `y` value, the mean reduction with `y` simply scales each pixel by its `y` location.
For the last image above, we specified that the `val` column should be used for the `mean` reduction, which in this case results in each category being assigned a different color, because in our dataset all items in the same category happen to have the same `val`. Here we also manipulated the result of the aggregation before displaying it by subtracting it from 50, as detailed in the next section.
## Transformation
Now that the data has been projected and aggregated into a gridded data structure, it can be processed in any way you like, before converting it to an image as will be described in the following section. At this stage, the data is still stored as bin data, not pixels, which makes a very wide variety of operations and transformations simple to express.
For instance, instead of plotting all the data, we can easily plot only those bins in the 99th percentile by count (left), or apply any [NumPy ufunc](http://docs.scipy.org/doc/numpy/reference/ufuncs.html) to the bin values (whether or not it makes any sense!):
```
agg = canvas.points(df, 'x', 'y')
tf.Images(tf.shade(agg.where(agg>=np.percentile(agg,99)), name="99th Percentile"),
tf.shade(np.power(agg,2), name="Numpy square ufunc"),
tf.shade(np.sin(agg), name="Numpy sin ufunc"))
```
The [xarray documentation](http://xarray.pydata.org/en/stable/computation.html) describes all the various transformations you can apply from within xarray, and of course you can always extract the data values and operate on them outside of xarray for any transformation not directly supported by xarray, then construct a suitable xarray object for use in the following stage. Once the data is in the aggregate array, you generally don't have to worry much about optimization, because it's a fixed-sized grid regardless of your data size, and so it is very straightforward to apply arbitrary transformations to the aggregates.
The above examples focus on a single aggregate, but there are many ways that you can use multiple data values per bin as well. For instance, you can apply any aggregation "categorically", aggregating `by` some categorical value so that datapoints for each unique value are aggregated independently:
```
aggc = canvas.points(df, 'x', 'y', ds.by('cat', ds.count()))
aggc
```
Here the `count()` aggregate has been collected into not just a 2D aggregate array, but a whole stack of aggregate arrays, one per `cat` value, making the aggregate be three dimensional (x,y,cat) rather than just two (x,y). With this 3D aggregate of counts per category, you can then select a specific category or subset of them for further processing, where `.sum(dim='cat')` will collapse across such a subset to give a single aggregate array:
```
agg_d3_d5=aggc.sel(cat=['d3', 'd5']).sum(dim='cat')
tf.Images(tf.shade(aggc.sel(cat='d3'), name="Category d3"),
tf.shade(agg_d3_d5, name="Categories d3 and d5"))
```
You can also combine multiple aggregates however you like, as long as they were all constructed using the same Canvas object (which ensures that their aggregate arrays are the same size) and cover the same axis ranges:
```
tf.Images(tf.shade(agg_d3_d5.where(aggc.sel(cat='d3') == aggc.sel(cat='d5')), name="d3+d5 where d3==d5"),
tf.shade( agg.where(aggc.sel(cat='d3') == aggc.sel(cat='d5')), name="d1+d2+d3+d4+d5 where d3==d5"))
```
The above two results are using the same mask (only those bins `where` the counts for 'd3' and 'd5' are equal), but applied to different aggregates (either just the `d3` and `d5` categories, or the entire set of counts).
## Colormapping
As you can see above, the usual way to visualize an aggregate array is to map from each array bin into a color for a corresponding pixel in an image. The above examples use the `tf.shade()` method, which maps a scalar aggregate bin value into an RGB (color) triple and an alpha (opacity) value. By default, the colors are chosen from the colormap ['lightblue','darkblue'] (i.e., `#ADD8E6` to `#00008B`), with intermediate colors chosen as a linear interpolation independently for the red, green, and blue color channels (e.g. `AD` to `00` for the red channel, in this case). The alpha (opacity) value is set to 0 for empty bins and 1 for non-empty bins, allowing the page background to show through wherever there is no data. You can supply any colormap you like, including Bokeh palettes, Matplotlib colormaps, or a list of colors (using the color names from `ds.colors`, integer triples, or hexadecimal strings):
```
from bokeh.palettes import RdBu9
tf.Images(tf.shade(agg,cmap=["darkred", "yellow"], name="darkred, yellow"),
tf.shade(agg,cmap=[(230,230,0), "orangered", "#300030"], name="yellow, orange red, dark purple"),
tf.shade(agg,cmap=list(RdBu9), name="Bokeh RdBu9"),
tf.shade(agg,cmap="black", name="Black"))
```
As a special case ("Black", above), if you supply only a single color, the color will be kept constant at the given value but the alpha (opacity) channel will vary with the data.
#### Colormapping categorical data
If you want to use `tf.shade` with a categorical aggregate, you can use a colormap just as for a non-categorical aggregate if you first select a single category using something like `aggc.sel(cat='d3')` or else collapse all categories into a single aggregate using something like `aggc.sum(dim='cat')`.
If you want to visualize all the categories in one image, you can use `tf.shade` with the categorical aggregate directly, which will assign a color to each category and then calculate the transparency and color of each pixel according to each category's contribution to that pixel:
```
color_key = dict(d1='blue', d2='green', d3='red', d4='orange', d5='purple')
tf.Images(tf.shade(aggc, name="Default color key"),
tf.shade(aggc, color_key=color_key, name="Custom color key"))
```
Here the different colors mix not just visually due to blurring, but are actually mixed mathematically per pixel, with pixels that include data from multiple categories taking intermediate color values. The total (summed) data values across all categories are used to calculate the alpha channel, with the previously computed color being revealed to a greater or lesser extent depending on the value of the aggregate for that bin. See [Colormapping with negative values](#Colormapping-with-negative-values) below for more details on how these colors and transparencies are calculated.
The default color key for categorical data provides distinguishable colors for a couple of dozen categories, but you can provide an explicit color_key if you prefer. Choosing colors for different categories is more of an art than a science, because the colors not only need to be distinguishable, their combinations also need to be distinguishable if those categories ever overlap in nearby pixels, or else the results will be ambiguous. In practice, only a few categories can be reliably distinguished in this way, but [zooming in](3_Interactivity.ipynb) can be used to help disambiguate overlapping colors, as long as the basic set of colors is itself distinguishable.
#### Transforming data values for colormapping
In each of the above examples, you may have noticed that we were never required to specify any parameters about the data values; the plots just appear like magic. That magic is implemented in `tf.shade`. What `tf.shade` does for a 2D aggregate (non-categorical) is:
1. **Mask** out all bins with a `NaN` value (for floating-point arrays) or a zero value (for the unsigned integer arrays that are returned from `count`); these bins will not have any effect on subsequent computations.
2. **Transform** the bin values using a specified scalar function `how`. Calculates the value of that function for the difference between each bin value and the minimum non-masked bin value. E.g. for `how="linear"`, simply returns the difference unchanged. Other `how` functions are discussed below.
3. **Map** the resulting transformed data array into the provided colormap. First finds the value span (*l*,*h*) for the resulting transformed data array -- what are the lowest and highest non-masked values? -- and then maps the range (*l*,*h*) into the full range of the colormap provided. If a colormap is used, masked values are given a fully transparent alpha value, and non-masked ones are given a fully opaque alpha value. If a single color is used, the alpha value starts at `min_alpha` and increases proportionally to the mapped data value up to the full `alpha` value.
The result is thus auto-ranged to show whatever data values are found in the aggregate bins, with the `span` argument (described below) allowing you to override the range explicitly if you need to.
As described in [Plotting Pitfalls](../user_guide/1_Plotting_Pitfalls.ipynb), auto-ranging is only part of what is required to reveal the structure of the dataset; it's also crucial to automatically and potentially nonlinearly map from the aggregate values (e.g. bin counts) into the colormap. If we used a linear mapping, we'd see very little of the structure of the data:
```
tf.shade(agg,how='linear')
```
In the linear version, you can see that the bins that have zero count show the background color, since they have been masked out using the alpha channel of the image, and that the rest of the pixels have been mapped to colors near the bottom of the colormap. If you peer closely at it, you may even be able to see that one pixel (from the smallest Gaussian) has been mapped to the highest color in the colormap (here dark blue). But no other structure is visible, because the highest-count bin is so much higher than all of the other bins:
```
top15=agg.values.flat[np.argpartition(agg.values.flat, -15)[-15:]]
print(sorted(top15))
print(sorted(np.round(top15*255.0/agg.values.max()).astype(int)))
```
I.e., if using a colormap with 255 colors, the largest bin (`agg.values.max()`) is mapped to the highest color, but with a linear scale all of the other bins map to only the first 24 colors, leaving all intermediate colors unused. If we want to see any structure for these intermediate ranges, we need to transform these numerical values somehow before displaying them. For instance, if we take the logarithm of these large values, they will be mapped into a more tractable range:
```
print(np.log1p(sorted(top15)))
```
So we can plot the logarithms of the values (``how='log'``, below), which is an arbitrary transform but is appropriate for many types of data. Alternatively, we can make a histogram of the numeric values, then assign a pixel color to each equal-sized histogram bin to ensure even usage of every displayable color (``how='eq_hist'``; see [plotting pitfalls](../user_guide/1_Plotting_Pitfalls.ipynb)). We can even supply any arbitrary transformation to the colormapper as a callable, such as a twenty-third root:
```
tf.Images(tf.shade(agg,how='log', name="log"),
tf.shade(agg,how='eq_hist', name="eq_hist"),
tf.shade(agg,how=lambda d, m: np.where(m, np.nan, d)**(1/23.), name="23rd root"))
```
Usually, however, such custom operations are done directly on the aggregate during the ***Transformation*** stage; the `how` operations are meant for simple, well-defined transformations solely for the final steps of visualization, which allows the main aggregate array to stay in the original units and scale in which it was measured. Using `how` also helps simplify the subsequent ***Embedding*** stage, letting it provide one of a fixed set of legend types, either linear (for `how=linear`), logarithmic (for `how=log`) or percentile (for `how=eq_hist`). See the [shade docs](https://datashader.org/api.html#datashader.transfer_functions.shade) for more details on the `how` functions.
For categorical aggregates, the `shade` function works similarly to providing a single color to a non-categorical aggregate, with the alpha (opacity) calculated from the total value across all categories (and the color calculated as a weighted mixture of the colors for each category).
#### Controlling ranges for colormapping
By default, `shade` will autorange on the aggregate array, mapping the lowest and highest values of the aggregate array into the lowest and highest values of the colormap (or the available alpha values, for single colors). You can instead focus on a specific `span` of the aggregate data values, mapping that span into the available colors or the available alpha values:
```
tf.Images(tf.shade(agg,cmap=["grey", "blue"], name="gb 0 20", span=[0,20], how="linear"),
tf.shade(agg,cmap=["grey", "blue"], name="gb 50 200", span=[50,200], how="linear"),
tf.shade(agg,cmap="green", name="Green 10 20", span=[10,20], how="linear"))
```
On the left, all counts above 20 are mapped to the highest value in the colormap (blue in this case), losing the ability to distinguish between values above 20 but providing the maximum color precision for the specific range 0 to 20. In the middle, all values 0 to 50 map to the first color in the colormap (grey in this case), and the colors are then linearly interpolated up to 200, with all values 200 and above mapping to the highest value in the colormap (blue in this case). With the single color mapping to alpha on the right, counts up to 10 are all mapped to `min_alpha`, counts 20 and above are all mapped to the specified `alpha` (255 in this case), and alpha is scaled linearly in between.
For plots that scale with alpha (i.e., categorical or single-color non-categorical plots), you can control the range of alpha values generated by setting `min_alpha` (lower bound) and `alpha` (upper bound), both on a scale of 0 to 255:
```
tf.Images(tf.shade(agg,cmap="green", name="Green"),
tf.shade(agg,cmap="green", name="No min_alpha", min_alpha=0),
tf.shade(agg,cmap="green", name="Small alpha range", min_alpha=50, alpha=80))
```
Here you can see that the faintest pixels are more visible with the default `min_alpha` (normally 40, left) than if you explicitly set the `min_alpha=0` (middle), which is why the `min_alpha` default is non-zero; otherwise low values would be indistinguishable from the background (see [Plotting Pitfalls](../user_guide/1_Plotting_Pitfalls.ipynb)).
You can combine `span` and `alpha` ranges to specifically control the data value range that maps to an opacity range, for single-color and categorical plotting:
```
tf.Images(tf.shade(agg,cmap="green", name="g 0,20", span=[ 0,20], how="linear"),
tf.shade(agg,cmap="green", name="g 10,20", span=[10,20], how="linear"),
tf.shade(agg,cmap="green", name="g 10,20 0", span=[10,20], how="linear", min_alpha=0))
tf.Images(tf.shade(aggc, name="eq_hist"),
tf.shade(aggc, name="linear", how='linear'),
tf.shade(aggc, name="span 0,10", how='linear', span=(0,10)),
tf.shade(aggc, name="span 0,10", how='linear', span=(0,20), min_alpha=0))
```
The categorical examples above focus on counts, but `ds.by` works on other aggregate types as well, colorizing by category but aggregating by sum, mean, etc. (but see the [following section](#Colormapping-with-negative-values) for details on how to interpret such colors):
```
agg_c = canvas.points(df,'x','y', ds.by('cat', ds.count()))
agg_s = canvas.points(df,'x','y', ds.by("cat", ds.sum("val")))
agg_m = canvas.points(df,'x','y', ds.by("cat", ds.mean("val")))
tf.Images(tf.shade(agg_c), tf.shade(agg_s), tf.shade(agg_m))
```
#### Colormapping with negative values
The above examples all use positive data values to avoid confusion when there is no colorbar or other explicit indication of a z (color) axis range. Negative values are also supported, in which case for a non-categorical plot you should normally use a [diverging colormap](https://colorcet.holoviz.org/user_guide/Continuous.html#Diverging-colormaps,-for-plotting-magnitudes-increasing-or-decreasing-from-a-central-point:):
```
from colorcet import coolwarm, CET_D8
dfn = df.copy()
dfn.val.replace({20:-20, 30:0, 40:-40}, inplace=True)
aggn = ds.Canvas().points(dfn,'x','y', agg=ds.mean("val"))
tf.Images(tf.shade(aggn, name="Sequential", cmap=["lightblue","blue"], how="linear"),
tf.shade(aggn, name="DivergingW", cmap=coolwarm[::-1], span=(-50,50), how="linear"),
tf.shade(aggn, name="DivergingB", cmap=CET_D8[::-1], span=(-50,50), how="linear"))
```
In both of the above plots, values with no data are transparent as usual, showing white. With a sequential lightblue to blue colormap, increasing `val` numeric values are mapped to the colormap in order, with the smallest values (-40; large blob in the top left) getting the lowest color value (lightblue), less negative values (-20, blob in the bottom right) getting an intermediate color, and the largest average values (50, large distribution in the background) getting the highest color. Looking at such a plot, viewers have no easy way to determine which values are negative. Using a diverging colormap (right two plots) and forcing the span to be symmetric around zero ensures that negative values are plotted in one color range (reds) and positive are plotted in a clearly different range (blues). Note that when using a diverging colormap with transparent values, you should carefully consider what you want to happen around the zero point; here values with nearly zero average (blob in bottom left) disappear when using a white-centered diverging map ("coolwarm"), while they show up but in a neutral color when using a diverging map with a contrasting central color ("CET_D8").
For categorical plots of values that can be negative, the results are often quite difficult to interpret, for the same reason as for the Sequential case above:
```
agg_c = canvas.points(dfn,'x','y', ds.by('cat', ds.count()))
agg_s = canvas.points(dfn,'x','y', ds.by("cat", ds.sum("val")))
agg_m = canvas.points(dfn,'x','y', ds.by("cat", ds.mean("val")))
tf.Images(tf.shade(agg_c, name="count"),
tf.shade(agg_s, name="sum"),
tf.shade(agg_s, name="sum baseline=0", color_baseline=0))
```
Here a `count` aggregate ignores the negative values and thus works the same as when values were positive, but `sum` and other aggregates like `mean` take the negative values into account. By default, a pixel with the lowest value (whether negative or positive) maps to `min_alpha`, and the highest maps to `alpha`. The color is determined by how different each category's value is from the minimum value across all categories; categories with high values relative to the minimum contribute more to the color. There is not currently any way to tell which data values are positive or negative, as you can using a diverging colormap in the non-categorical case.
Instead of using the default of the data minimum, you can pass a specific `color_baseline`, which is appropriate if your data has a well-defined reference value such as zero. Here, when we pass `color_baseline=0` the negative values are essentially ignored for color calculations, which can be seen on the green blob, where any orange data point is fully orange despite the presence of green-category datapoints; the middle plot `sum` shows a more appropriate color mixture in that case.
#### Spreading
Once an image has been created, it can be further transformed with a set of functions from `ds.transfer_functions`.
For instance, because it can be difficult to see individual dots, particularly for zoomed-in plots, you can transform the image to replace each non-transparent pixel with a shape, such as a circle (default) or square. This process is called spreading:
```
img = tf.shade(aggc, name="Original image")
tf.Images(img,
tf.spread(img, name="spread 1px"),
tf.spread(img, px=2, name="spread 2px"),
tf.spread(img, px=3, shape='square', name="spread square"))
```
As you can see, spreading is very effective for isolated datapoints, which is what it's normally used for, but it has overplotting-like effects for closely spaced points like in the green and purple regions above, and so it would not normally be used when the datapoints are dense.
Spreading can be used with a custom mask, as long as it is square and an odd width and height (so that it will be centered over the original pixel):
```
mask = np.array([[1, 1, 1, 1, 1],
[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[1, 0, 0, 0, 1],
[1, 1, 1, 1, 1]])
tf.spread(img, mask=mask)
```
To support interactive zooming, where spreading would be needed only in sparse regions of the dataset, we provide the dynspread function. `dynspread` will dynamically calculate the spreading size to use by counting the fraction of non-masked bins that have non-masked neighbors; see the
[dynspread docs](https://datashader.org/api.html#datashader.transfer_functions.dynspread) for more details.
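A hedged sketch of `dynspread` applied to the image above (the parameter values here are only illustrative):
```
# increase spreading up to 3 pixels, based on how isolated the non-empty pixels are
tf.dynspread(img, threshold=0.5, max_px=3)
```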
#### Other image transfer_functions
Other useful image operations are also provided, such as setting the background color or combining images:
```
tf.Images(tf.set_background(img,"black", name="Black bg"),
tf.stack(img,tf.shade(aggc.sel(cat=['d2', 'd3']).sum(dim='cat')), name="Sum d2 and d3 colors"),
tf.stack(img,tf.shade(aggc.sel(cat=['d2', 'd3']).sum(dim='cat')), how='saturate', name="d2+d3 saturated"))
```
See [the API docs](https://datashader.org/api.html#transfer-functions) for more details. Image composition operators to provide for the `how` argument of `tf.stack` (e.g. `over` (default), `source`, `add`, and `saturate`) are listed in [composite.py](https://raw.githubusercontent.com/holoviz/datashader/master/datashader/composite.py) and illustrated [here](http://cairographics.org/operators).
## Moving on
The steps outlined above represent a complete pipeline from data to images, which is one way to use Datashader. However, in practice one will usually want to add one last additional step, which is to embed these images into a plotting program to be able to get axes, legends, interactive zooming and panning, etc. The [next notebook](3_Interactivity.ipynb) shows how to do such embedding.
Lambda School Data Science, Unit 2: Predictive Modeling
# Applied Modeling, Module 1
You will use your portfolio project dataset for all assignments this sprint.
## Assignment
Complete these tasks for your project, and document your decisions.
- [ ] Choose your target. Which column in your tabular dataset will you predict?
- [ ] Choose which observations you will use to train, validate, and test your model. And which observations, if any, to exclude.
- [ ] Determine whether your problem is regression or classification.
- [ ] Choose your evaluation metric.
- [ ] Begin with baselines: majority class baseline for classification, or mean baseline for regression, with your metric of choice.
- [ ] Begin to clean and explore your data.
- [ ] Begin to choose which features, if any, to exclude. Would some features "leak" information from the future?
## Reading
### ROC AUC
- [Machine Learning Meets Economics](http://blog.mldb.ai/blog/posts/2016/01/ml-meets-economics/)
- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)
- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
### Imbalanced Classes
- [imbalance-learn](https://github.com/scikit-learn-contrib/imbalanced-learn)
- [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/)
### Last lesson
- [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), by Google Research, with interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_
- [How Shopify Capital Uses Quantile Regression To Help Merchants Succeed](https://engineering.shopify.com/blogs/engineering/how-shopify-uses-machine-learning-to-help-our-merchants-grow-their-business)
- [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in the lecture notebook.
- [Notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb)
- [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) by Kevin Markham, with video
- [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415)
# Import the final Liverpool Football Club data file.
```
# import pandas library as pd.
import pandas as pd
# read in the LiverpoolFootballClubData_EPL csv file.
LPFC = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/LSDS-DataSets/master/EnglishPremierLeagueData/LiverpoolFootballClubData_EPL.csv')
# show the data frame shape.
print(LPFC.shape)
# show the data frame with headers.
LPFC.head()
```
## Organizing & cleaning.
```
# group the columns we want to use.
columns = ["Div", "Date", "HomeTeam", "AwayTeam", "FTHG", "FTAG", "FTR",
"HTHG", "HTAG", "HTR", "HS", "AS", "HST", "AST", "HHW", "AHW",
"HC", "AC", "HF", "AF", "HO", "AO", "HY", "AY", "HR", "AR", "HBP", "ABP"]
# create a new data frame with just the grouped columns.
LPFC = LPFC[columns]
# show the data frame shape.
print(LPFC.shape)
# show the data frame with headers.
LPFC.head()
# relabeling columns for better understanding.
LPFC = LPFC.rename(columns={"Div": "Division", "Date": "GameDate", "FTHG": "FullTimeHomeGoals", "FTAG": "FullTimeAwayGoals", "FTR": "FullTimeResult", "HTHG": "HalfTimeHomeGoals",
"HTAG": "HalfTimeAwayGoals", "HTR": "HalfTimeResult", "HS": "HomeShots", "AS": "AwayShots",
"HST": "HomeShotsOnTarget", "AST": "AwayShotsOnTarget", "HHW": "HomeShotsHitFrame",
"AHW": "AwayShotsHitFrame", "HC": "HomeCorners", "AC": "AwayCorners", "HF": "HomeFouls",
"AF": "AwayFouls", "HO": "HomeOffSides", "AO": "AwayOffSides", "HY": "HomeYellowCards",
"AY": "AwayYellowCards", "HR": "HomeRedCards", "AR": "AwayRedCards", "HBP": "HomeBookingPoints_Y5_R10",
"ABP": "AwayBookingPoints_Y5_R10"})
# show the data frame with headers.
LPFC.head()
```
## Baseline accuracy score.
```
# import accuracy_score from sklearn.metrics library.
from sklearn.metrics import accuracy_score
# determine 'majority class' baseline starting point for every prediction.
# single out the target, 'FullTimeResult' column.
target = LPFC['FullTimeResult']
# create the majority class with setting the 'mode' on the target data.
majority_class = target.mode()[0]
# create the y_pred data.
y_pred = [majority_class] * len(target)
# accuracy score for the majority class baseline = frequency of the majority class.
ac = accuracy_score(target, y_pred)
print("'Majority Baseline' Accuracy Score =", ac)
```
## Train/test split the data frame, train/val/test.
```
df = LPFC.copy()
# import train_test_split from sklearn.model_selection library.
from sklearn.model_selection import train_test_split
# single out the target column.
target = 'FullTimeResult'
y = df[target]
# split data into train, test.
X_train, X_val, y_train, y_val = train_test_split(df, y, test_size=0.20,
stratify=y, random_state=42)
# show the data frame shapes.
print("train =", X_train.shape, y_train.shape, "val =", X_val.shape, y_val.shape)
```
## LogisticRegression model.
```
import numpy as np
from datetime import datetime
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# prevent SettingWithCopyWarning with a copy.
X = X.copy()
# make 'GameDate' useable with datetime.
X['GameDate'] = pd.to_datetime(X['GameDate'], infer_datetime_format=True)
# create new columns for 'YearOfGame', 'MonthOfGame', 'DayOfGame'.
X['YearOfGame'] = X['GameDate'].dt.year
X['MonthOfGame'] = X['GameDate'].dt.month
X['DayOfGame'] = X['GameDate'].dt.day
    # remove 'FullTimeHomeGoals' and 'FullTimeAwayGoals' (they directly determine the result), plus 'Division' and the raw 'GameDate'.
dropped_columns = ['FullTimeHomeGoals', 'FullTimeAwayGoals', 'Division', 'GameDate']
X = X.drop(columns=dropped_columns)
# return the wrangled dataframe
return X
X_train = wrangle(X_train)
X_val = wrangle(X_val)
# set the target column.
target = 'FullTimeResult'
# set the features by removing the target column.
train_features = X_train.drop(columns=[target])
# group all the numeric features.
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# group the cardinality of the nonnumeric features.
cardinality = train_features.select_dtypes(exclude='number').nunique()
# group all categorical features with cardinality <= 500.
categorical_features = cardinality[cardinality <= 500].index.tolist()
# create features with numeric + categorical
features = numeric_features + categorical_features
# create the new vaules with the new features/target data.
X_train = X_train[features]
X_val = X_val[features]
!pip install category_encoders
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(strategy='median'),
StandardScaler(),
LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print ('Training Accuracy', pipeline.score(X_train, y_train))
print('Validation Accuracy', pipeline.score(X_val, y_val))
y_pred = pipeline.predict(X_val)
```
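As a quick follow-up (not part of the original assignment tasks), the validation predictions computed above can be summarized with a confusion matrix and per-class metrics:
```
# Summarize the validation predictions from the pipeline above.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(np.ravel(y_val), y_pred))
print(classification_report(np.ravel(y_val), y_pred))
```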
| github_jupyter |
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Matplotlib Exercises
Welcome to the exercises for reviewing matplotlib! Take your time with these; Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, so feel free to reference the solutions as you go along.
Also don't worry if you find the matplotlib syntax frustrating. We actually won't be using it that often throughout the course; we will switch to using seaborn and pandas' built-in visualization capabilities. But those are built off of matplotlib, which is why it is still important to get exposure to it!
** * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * **
# Exercises
Follow the instructions to recreate the plots using this data:
## Data
```
import numpy as np
x = np.arange(0,100)
y = x*2
z = x**2
```
** Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?**
```
import matplotlib.pyplot as plt
%matplotlib inline
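# Outside of the Jupyter notebook, you would call plt.show() to display the figure.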
```
## Exercise 1
** Follow along with these steps: **
* ** Create a figure object called fig using plt.figure() **
* ** Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. **
* ** Plot (x,y) on that axes and set the labels and titles to match the plot below:**
```
# Functional Method
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y)
ax.set_title('title')
ax.set_xlabel('X')
ax.set_ylabel('Y')
```
## Exercise 2
** Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.**
```
# create figure canvas
fig = plt.figure()
# create axes
ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,.2,.2])
plt.xticks(np.arange(0, 1.2, step=0.2))
plt.yticks(np.arange(0, 1.2, step=0.2))
```
** Now plot (x,y) on both axes. And call your figure object to show it.**
```
# create figure canvas
fig = plt.figure()
# create axes
ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,.2,.2])
ax1.set_xlabel('x1')
ax1.set_ylabel('y1')
ax2.set_xlabel('x2')
ax2.set_ylabel('y2')
ax1.plot(x, y, 'r-')
ax2.plot(x, y, 'b--')
plt.xticks(np.arange(0, 120, step=20))
plt.yticks(np.arange(0, 220, step=50))
```
## Exercise 3
** Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]**
```
fig = plt.figure()
ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,.4,.4])
```
** Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:**
```
fig = plt.figure()
ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,.4,.4])
ax1.plot(x, z)
ax2.plot(x, y, 'r--') # zoom using xlimit (20, 22), ylimit (30, 50)
ax2.set_xlim([20, 22])
ax2.set_ylim([30, 50])
ax2.set_title('zoom')
ax2.set_xlabel('X')
ax2.set_ylabel('Y')
ax1.set_xlabel('X')
ax1.set_ylabel('Z')
```
## Exercise 4
** Use plt.subplots(nrows=1, ncols=2) to create the plot below.**
```
fig, axes = plt.subplots(nrows=1, ncols=2)
# axes object is an array of subplot axis.
plt.tight_layout() # add space between rows & columns.
```
** Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style**
```
fig, axes = plt.subplots(nrows=1, ncols=2)
# axes object is an array of subplot axis.
axes[0].plot(x, y, 'b--', lw=3)
axes[1].plot(x, z, 'r-.', lw=2)
plt.tight_layout() # add space between rows & columns.
```
** See if you can resize the plot by adding the figsize() argument in plt.subplots() and copying and pasting your previous code.**
```
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 7))
# axes object is an array of subplot axis.
axes[0].plot(x, y, 'b--', lw=3)
axes[1].plot(x, z, 'r-.', lw=2)
plt.tight_layout() # add space between rows & columns.
```
# Great Job!
| github_jupyter |
```
# Turn on Auto-Complete
%config IPCompleter.greedy=True
# Start logging process at root level
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.root.setLevel(level=logging.INFO)
# Load model and dictionary
#model_id_current = 99999
#model_path_current = "models/enwiki-full-dict-"+str(model_id_current)+".model"
#model_path_99999 = "models/enwiki-20190319-lemmatized-99999.model"
model_path_current ="models/enwiki-20190409-lemmatized.model"
dictionary_full_wikien_lem_path = "dictionaries/enwiki-20190409-dict-lemmatized.txt.bz2"
# Load word2vec unlemmatized model
from gensim.models import Word2Vec
model = Word2Vec.load(model_path_current, mmap='r')
# Custom lemmatizer function to play with word
from gensim.utils import lemmatize
#vocabulary = set(wv.index2word)
def lem(word):
try:
return lemmatize(word)[0].decode("utf-8")
except:
pass
print(lem("dog"))
print(lem("that"))
# Testing similarity
print("Most similar to","woman")
print(model.wv.most_similar(lem("woman")))
print("Most similar to","doctor")
print(model.wv.most_similar(lem("doctor")))
# Saving some ram by using the KeyedVectors instance
wv = model.wv
#del model
# Testing similarity with KeyedVectors
print("Most similar to","woman")
print(wv.most_similar(lem("woman")))
print("\nMost similar to","man")
print(wv.most_similar(lem("man")))
print("\nMost similar to","doctor")
print(wv.most_similar(lem("doctor")))
print("\nMost similar to","doctor","cosmul")
print(wv.most_similar_cosmul(positive=[lem("doctor")]))
print("similarity of doctor + woman - man")
wv.most_similar(positive=[lem("doctor"),lem("woman")], negative=[lem("man")])
# Get cosmul of logic
print("cosmul of doctor + woman - man")
wv.most_similar_cosmul(positive=[lem("doctor"),lem("woman")], negative=[lem("man")])
# Ways to retrive word vector
print("Get item dog")
vec_dog = wv.__getitem__("dog/NN")
vec_dog = wv.get_vector("dog/NN")
vec_dog = wv.word_vec("dog/NN")
print("vec_dog", vec_dog.shape, vec_dog[:10])
# Get similar words to vector
print("Similar by vector to dog vector at top 10")
print(wv.similar_by_vector(vector=vec_dog, topn=10, restrict_vocab=None))
print("Most similar to dog vector")
print(wv.most_similar(positive=[vec_dog]))
print("Similar to cat vector")
vec_cat = wv.word_vec("cat/NN")
print(wv.most_similar(positive=[vec_cat]))
# closer to __ than __
print("closer to dog than cat")
print(wv.words_closer_than("dog/NN", "cat/NN"))
print("\ncloser to cat than dog")
print(wv.words_closer_than("cat/NN", "dog/NN"))
# Normalized Vector
vec_king_norm = wv.word_vec("king/NN", use_norm=True)
print("vec_king_norm:",vec_king_norm.shape, vec_king_norm[:10])
# Not normalized vector
vec_king_unnorm = wv.word_vec("king/NN", use_norm=False)
print("vec_king_unnorm:",vec_king_norm.shape, vec_king_unnorm[:10])
wv.most_similar(positive=[vec_king_norm], negative=[vec_king_unnorm])
# Generate random vector
import numpy as np
vec_random = np.random.rand(300,)
vec_random_norm = vec_random / vec_random.max(axis=0)
print("similar to random vector")
print(wv.most_similar(positive=[vec_random]))
print("\n similar to nomalized random vector")
print(wv.most_similar(positive=[vec_random_norm]))
# Get similarity from a random vector and normilized king vector
print("similarity from a normalized random vector to normalized vector of king")
wv.most_similar(positive=[vec_random_norm,vec_king_norm])
# Get similarity from a random vector and unormalized king vector
print("similarity from a random vector to unormalized vector of king")
wv.most_similar(positive=[vec_random,vec_king_unnorm])
# Get cosine similarities from a vector to an array of vectors
print("cosine similarity from a random vector to unormalized vector of king")
wv.cosine_similarities(vec_random, [vec_king_unnorm])
# Tests analogies based on a text file
analogy_scores = wv.accuracy('datasets/questions-words.txt')
#print(analogy_scores)
# Get the distance between two words
print("distance between dog and cat")
wv.distance("dog/NN","cat/NN")
# Get the distances from a word to a list of words
print("distance from dog to king and cat")
wv.distances("dog/NN",["king/NN","cat/NN"])
# Evaluate pairs of words
#wv.evaluate_word_pairs("datasets/SimLex-999.txt")
# Get sentence similarities
from gensim.models import KeyedVectors
from gensim.utils import simple_preprocess
def tokemmized(sentence, vocabulary):
tokens = [lem(word) for word in simple_preprocess(sentence)]
return [word for word in tokens if word in vocabulary]
def compute_sentence_similarity(sentence_1, sentence_2, model_wv):
vocabulary = set(model_wv.index2word)
tokens_1 = tokemmized(sentence_1, vocabulary)
tokens_2 = tokemmized(sentence_2, vocabulary)
del vocabulary
print(tokens_1, tokens_2)
return model_wv.n_similarity(tokens_1, tokens_2)
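# Note: n_similarity (used above) returns the cosine similarity between the means
# of the unit-normalized word vectors of the two token lists.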
similarity = compute_sentence_similarity('this is a sentence', 'this is also a sentence', wv)
print(similarity,"\n")
similarity = compute_sentence_similarity('the cat is a mammal', 'the bird is a aves', wv)
print(similarity,"\n")
similarity = compute_sentence_similarity('the cat is a mammal', 'the dog is a mammal', wv)
print(similarity)
# Analogy with not normalized vectors
print("france is to paris as berlin is to ?")
wv.most_similar([wv['france/NN'] - wv['paris/NN'] + wv['berlin/NN']])
# Analogy with normalized Vector
vec_france_norm = wv.word_vec('france/NN', use_norm=True)
vec_paris_norm = wv.word_vec('paris/NN', use_norm=True)
vec_berlin_norm = wv.word_vec('berlin/NN', use_norm=True)
vec_germany_norm = wv.word_vec('germany/NN', use_norm=True)
vec_country_norm = wv.word_vec('country/NN', use_norm=True)
print("france is to paris as berlin is to ?")
wv.most_similar([vec_france_norm - vec_paris_norm + vec_berlin_norm])
# Cosine Similarities
print("cosine_similarities of france and paris")
print(wv.cosine_similarities(vec_france_norm, [vec_paris_norm]),wv.distance("france/NN", "paris/NN"))
print("cosine_similarities of france and berlin")
print(wv.cosine_similarities(vec_france_norm, [vec_berlin_norm]),wv.distance("france/NN", "berlin/NN"))
print("cosine_similarities of france and germany")
print(wv.cosine_similarities(vec_france_norm, [vec_germany_norm]),wv.distance("france/NN", "germany/NN"))
print("cosine_similarities of france and country")
print(wv.cosine_similarities(vec_france_norm, [vec_country_norm]),wv.distance("france/NN", "country/NN"))
# Analogy
print("king is to man what woman is to ?")
#wv.most_similar([wv['man/NN'] - wv['woman/NN'] + wv['king/NN']])
wv.most_similar([wv['king/NN'] - wv['man/NN'] + wv['woman/NN']])
# Analogy
print("paris is to france as germany is to ?")
wv.most_similar([wv['paris/NN'] - wv['france/NN'] + wv['germany/NN']])
# Analogy
print("cat is to mammal as sparrow is to ?")
wv.most_similar([wv['cat/NN'] - wv['mammal/NN'] + wv['bird/NN']])
# Analogy
print("grass is to green as sky is to ?")
wv.most_similar([wv['sky/NN'] - wv['blue/NN'] + wv['green/NN']])
# Analogy
print("athens is to greece as baghdad is to ?")
wv.most_similar([wv['athens/NN'] - wv['greece/NN'] + wv['afghanistan/NN']])
wv.most_similar([wv["country/NN"]])
wv.most_similar([wv["capital/NN"]])
wv.most_similar([wv["paris/NN"]-wv["capital/NN"]])
wv.most_similar([wv["bern/NN"]-wv["capital/NN"]])
wv.most_similar([wv["switzerland/NN"]-wv["bern/NN"]])
wv.distance("dog/NN", "dogs/NN")
wv.cosine_similarities(wv["dog/NN"],[wv["dogs/NN"]])
wv.distance("switzerland/NN", "bern/NN")
wv.cosine_similarities(wv["switzerland/NN"],[wv["bern/NN"]])
wv.distance("paris/NN", "bern/NN")
wv.cosine_similarities(wv["paris/NN"],[wv["bern/NN"]])
wv.cosine_similarities(wv["paris/NN"],[wv["dog/NN"]])
# Analogy
print("capital + science")
wv.most_similar([wv['capital/NN'] + wv['science/NN']])
wv.cosine_similarities(wv["education/NN"], [wv["natality/NN"],wv["salubrity/NN"],wv["economy/NN"]]
#wv.distance("education","natality")
# education, natality, salubrity, economy
#wv.most_similar_cosmul(positive=["doctor","woman"], negative=["man"])
```
| github_jupyter |
```
import pandas as pd
import psycopg2
import sqlalchemy
import config
import json
import numpy as np
import scrape
scrape.scrape_bls()
from sqlalchemy import create_engine
from config import password
Engine = create_engine(f"postgresql://postgres:{password}@localhost:5432/Employee_Turnover")
Connection = Engine.connect()
initial_df=pd.read_sql("select * from turnover_data", Connection)
bls_df=pd.read_sql("select * from blsdata", Connection)
Connection.close()
bls_df = bls_df.drop(bls_df.index[[35,36]])
bls_df.drop("index",axis=1,inplace=True)
bls_df.head()
# Clean data: remove less helpful columns; rename values to be more user-friendly
df = initial_df.drop(['index','EducationField','EmployeeCount','EmployeeNumber','Education','StandardHours','JobRole','MaritalStatus',
'DailyRate','MonthlyRate','HourlyRate','Over18','OverTime','TotalWorkingYears'], axis=1).drop_duplicates()
df.rename(columns={'Attrition': 'Employment Status','BusinessTravel':'Business Travel','DistanceFromHome':'Commute (Miles)','EnvironmentSatisfaction':'Environment Satisfaction',
'JobInvolvement':'Job Involvement','JobLevel':'Job Level','JobSatisfaction':'Job Satisfaction',
'MonthlyIncome':'Monthly Income','NumCompaniesWorked':'Num Companies Worked','PercentSalaryHike':'Last Increase %',
'PerformanceRating':'Performance Rating','RelationshipSatisfaction':'Relationship Satisfaction','StockOptionLevel':'Stock Option Level',
'TrainingTimesLastYear':'Training Last Year','WorkLifeBalance':'Work/Life Balance','YearsAtCompany':'Tenure (Years)',
'YearsInCurrentRole':'Years in Role','YearsSinceLastPromotion':'Years Since Promotion','YearsWithCurrManager':'Years with Manager'}, inplace = True)
df['Employment Status'] = df['Employment Status'].replace(['No','Yes'],['Active','Terminated'])
df['Business Travel'] = df['Business Travel'].replace(['Travel_Rarely','Travel_Frequently','Non-Travel'],['Rare','Frequent','None'])
columns = list(df)
print(columns)
df.head()
factors = ['Age', 'Business Travel', 'Department', 'Commute (Miles)', 'Environment Satisfaction', 'Gender', 'Job Involvement', 'Job Level', 'Job Satisfaction', 'Monthly Income', 'Performance Rating', 'Relationship Satisfaction', 'Stock Option Level', 'Training Last Year']
count18to29 = 0
count30to39 = 0
count40to49 = 0
count50to59 = 0
count60up = 0
for x in df['Age']:
if x >=18 and x<30:
count18to29 += 1
elif x >= 30 and x<40:
count30to39 += 1
elif x >= 40 and x<50:
count40to49 += 1
elif x >= 50 and x<60:
count50to59 += 1
elif x >= 60:
count60up += 1
else: pass
age18to29_df= df.loc[(df['Age']>=18) & (df['Age']<30)]
age18to29_act_df=age18to29_df.loc[(df['Employment Status'] == 'Active')]
age18to29_term_df=age18to29_df.loc[(df['Employment Status'] == 'Terminated')]
age30to39_df= df.loc[(df['Age']>=30) & (df['Age']<40)]
age30to39_act_df=age30to39_df.loc[(df['Employment Status'] == 'Active')]
age30to39_term_df=age30to39_df.loc[(df['Employment Status'] == 'Terminated')]
age40to49_df= df.loc[(df['Age']>=40) & (df['Age']<50)]
age40to49_act_df=age40to49_df.loc[(df['Employment Status'] == 'Active')]
age40to49_term_df=age40to49_df.loc[(df['Employment Status'] == 'Terminated')]
age50to59_df= df.loc[(df['Age']>=50) & (df['Age']<60)]
age50to59_act_df=age50to59_df.loc[(df['Employment Status'] == 'Active')]
age50to59_term_df=age50to59_df.loc[(df['Employment Status'] == 'Terminated')]
age60up_df= df.loc[(df['Age']>=60)]
age60up_act_df=age60up_df.loc[(df['Employment Status'] == 'Active')]
age60up_term_df=age60up_df.loc[(df['Employment Status'] == 'Terminated')]
per18to29Act=round((len(age18to29_act_df)/count18to29*100),1)
per18to29Term=round((len(age18to29_term_df)/count18to29*100),1)
per30to39Act=round((len(age30to39_act_df)/count30to39*100),1)
per30to39Term=round((len(age30to39_term_df)/count30to39*100),1)
per40to49Act=round((len(age40to49_act_df)/count40to49*100),1)
per40to49Term=round((len(age40to49_term_df)/count40to49*100),1)
per50to59Act=round((len(age50to59_act_df)/count50to59*100),1)
per50to59Term=round((len(age50to59_term_df)/count50to59*100),1)
per60upAct=round((len(age60up_act_df)/count60up*100),1)
per60upTerm=round((len(age60up_term_df)/count60up*100),1)
age_dict={}
age_dict['18_to_29Act'] = per18to29Act
age_dict['18_to_29Term'] = per18to29Term
age_dict['30_to_39Act'] = per30to39Act
age_dict['30_to_39Term'] = per30to39Term
age_dict['40_to_49Act'] = per40to49Act
age_dict['40_to_49Term'] = per40to49Term
age_dict['50_to_59Act'] = per50to59Act
age_dict['50_to_59Term'] = per50to59Term
age_dict['60upAct'] = per60upAct
age_dict['60upTerm'] = per60upTerm
parsed = json.loads(json.dumps(age_dict))
json_age=json.dumps(parsed, indent=4, sort_keys=True)
new_df=df.groupby(["Business Travel","Employment Status"]).count().reset_index()
new_df.head(8)
trav_df = new_df["Age"].astype(float)
trav_df.head(6)
none_act=trav_df[2]
none_term=trav_df[3]
none_total=none_act + none_term
rare_act=trav_df[4]
rare_term=trav_df[5]
rare_total=rare_act + rare_term
freq_act=trav_df[0]
freq_term=trav_df[1]
freq_total=freq_act+freq_term
none_act_rate=(none_act/none_total*100).round(1)
none_term_rate=(none_term/none_total*100).round(1)
rare_act_rate=(rare_act/rare_total*100).round(1)
rare_term_rate=(rare_term/rare_total*100).round(1)
freq_act_rate=(freq_act/freq_total*100).round(1)
freq_term_rate=(freq_term/freq_total*100).round(1)
trav_dict = {}
trav_dict['factor'] = 'Business Travel'
trav_dict['category'] = ['None','Rare','Frequent']
trav_dict['counts'] = [none_total,rare_total,freq_total]
trav_dict['Active'] = [none_act_rate,rare_act_rate,freq_act_rate]
trav_dict['Terminated'] = [none_term_rate, rare_term_rate, freq_term_rate]
new_dep_df=df.groupby(["Department","Employment Status"]).count().reset_index()
new_dep_df.head()
dept_df = new_dep_df["Age"].astype(float)
dept_df.head(6)
hr_act=dept_df[0]
hr_term=dept_df[1]
hr_total=hr_act + hr_term
rd_act=dept_df[2]
rd_term=dept_df[3]
rd_total=rd_act + rd_term
sales_act=dept_df[4]
sales_term=dept_df[5]
sales_total=sales_act+sales_term
hr_act_rate=(hr_act/hr_total*100).round(1)
hr_term_rate=(hr_term/hr_total*100).round(1)
rd_act_rate=(rd_act/rd_total*100).round(1)
rd_term_rate=(rd_term/rd_total*100).round(1)
sales_act_rate=(sales_act/sales_total*100).round(1)
sales_term_rate=(sales_term/sales_total*100).round(1)
dept_dict = {}
dept_dict['factor'] = 'Department'
dept_dict['category'] = ['HR','R&D','Sales']
dept_dict['counts'] = [hr_total,rd_total,sales_total]
dept_dict['Active'] = [hr_act_rate,rd_act_rate,sales_act_rate]
dept_dict['Terminated'] = [hr_term_rate, rd_term_rate, sales_term_rate]
jobIn_df=df.groupby(["Job Involvement","Employment Status"]).count().reset_index()
jobIn_df.head(8)
JI=jobIn_df["Age"].astype(float)
JI.head(8)
type(JI)
JI1_act=JI[0]
JI1_term=JI[1]
JI1_total=(JI1_act + JI1_term)
JI2_act=JI[2]
JI2_term=JI[3]
JI2_total=(JI2_act + JI2_term)
JI3_act=JI[4]
JI3_term=JI[5]
JI3_total=(JI3_act + JI3_term)
JI4_act=JI[6]
JI4_term=JI[7]
JI4_total=(JI4_act + JI4_term)
rateJI1_act=(JI1_act/JI1_total*100).round(1)
rateJI1_term=(JI1_term/JI1_total*100).round(1)
rateJI2_act=(JI2_act/JI2_total*100).round(1)
rateJI2_term=(JI2_term/JI2_total*100).round(1)
rateJI3_act=(JI3_act/JI3_total*100).round(1)
rateJI3_term=(JI3_term/JI3_total*100).round(1)
rateJI4_act=(JI4_act/JI4_total*100).round(1)
rateJI4_term=(JI4_term/JI4_total*100).round(1)
jobInvol_dict={}
jobInvol_dict['factor'] = 'Job Involvement'
jobInvol_dict['category'] = ['1','2','3','4']
jobInvol_dict['counts'] = [JI1_total, JI2_total, JI3_total, JI4_total]
jobInvol_dict['Active'] = [rateJI1_act, rateJI2_act, rateJI3_act, rateJI4_act]
jobInvol_dict['Terminated'] = [rateJI1_term, rateJI2_term, rateJI3_term, rateJI4_term]
gender_df=df.groupby(["Gender","Employment Status"]).count().reset_index()
gender_df = gender_df["Age"].astype(float)
gender_df.head()
fem_act_count=gender_df[0]
fem_term_count=gender_df[1]
fem_count=fem_act_count+fem_term_count
male_act_count=gender_df[2]
male_term_count=gender_df[3]
male_count=male_act_count+male_term_count
fem_act_rate=(fem_act_count/fem_count*100).round(1)
fem_term_rate=(fem_term_count/fem_count*100).round(1)
male_act_rate=(male_act_count/male_count*100).round(1)
male_term_rate=(male_term_count/male_count*100).round(1)
print(male_act_rate,male_term_rate)
gender_dict = {}
gender_dict['factor'] = 'Gender'
gender_dict['category'] = ['Male','Female']
gender_dict['counts'] = [male_count, fem_count]
gender_dict['Active'] = [male_act_rate, fem_act_rate]
gender_dict['Terminated'] = [male_term_rate, fem_term_rate]
perf_df=df.groupby(["Performance Rating","Employment Status"]).count().reset_index()
perf_df=perf_df["Age"].astype(float)
count1_total=0
count2_total=0
count3_act=perf_df[0]
count3_term=perf_df[1]
count4_act=perf_df[2]
count4_term=perf_df[3]
count3_total=count3_act+count3_term
count4_total=count4_act+count4_term
rate1_act=0
rate1_term=0
rate2_act=0
rate2_term=0
rate3_act=(count3_act/count3_total*100).round(1)
rate3_term=(count3_term/count3_total*100).round(1)
rate4_act=(count4_act/count4_total*100).round(1)
rate4_term=(count4_term/count4_total*100).round(1)
perf_dict={}
perf_dict['factor'] = 'Performance Rating'
perf_dict['category'] = ['1','2','3','4']
perf_dict['counts'] = [count1_total, count2_total, count3_total, count4_total]
perf_dict['Active'] = [rate1_act, rate2_act, rate3_act, rate4_act]
perf_dict['Terminated'] = [rate1_term,rate2_term,rate3_term,rate4_term]
# Job Satisfaction
js_df=df.groupby(["Job Satisfaction","Employment Status"]).count().reset_index()
js=js_df["Age"].astype(int)
actjs1 = js[0]
terjs1 = js[1]
alljs1 = js[0] + js[1]
actjs2 = js[2]
terjs2 = js[3]
alljs2 = js[2] + js[3]
actjs3 = js[4]
terjs3 = js[5]
alljs3 = js[4] + js[5]
actjs4 = js[6]
terjs4 = js[7]
alljs4 = js[6] + js[7]
rate_actjs1 = (actjs1/alljs1*100).round(1)
rate_actjs2 = (actjs2/alljs2*100).round(1)
rate_actjs3 = (actjs3/alljs3*100).round(1)
rate_actjs4 = (actjs4/alljs4*100).round(1)
rate_termjs1 = (terjs1/alljs1*100).round(1)
rate_termjs2 = (terjs2/alljs2*100).round(1)
rate_termjs3 = (terjs3/alljs3*100).round(1)
rate_termjs4 = (terjs4/alljs4*100).round(1)
js_dict = {}
js_dict['factor'] = 'Job Satisfaction'
js_dict['category'] = ['1','2','3','4']
js_dict['counts'] = [alljs1,alljs2,alljs3,alljs4]
js_dict['Active'] = [rate_actjs1, rate_actjs2, rate_actjs3, rate_actjs4]
js_dict['Terminated'] = [rate_termjs1, rate_termjs2, rate_termjs3, rate_termjs4]
# Monthly Income
bins = [1000, 3000, 5000, 7000, 20000]
labels=['<$3000','$3000-4999','$5000-6999','$7000 and up']
groups = df.groupby(['Employment Status', pd.cut(df['Monthly Income'], bins=bins, labels=labels)])
mi=groups.size().reset_index().rename(columns={0:"Count"})
mi1_act=int(mi.iloc[0,2])
mi1_term=int(mi.iloc[4,2])
mi1_total=int((mi1_act + mi1_term))
mi2_act=int(mi.iloc[1,2])
mi2_term=int(mi.iloc[5,2])
mi2_total=int((mi2_act + mi2_term))
mi3_act=int(mi.iloc[2,2])
mi3_term=int(mi.iloc[6,2])
mi3_total=int((mi3_act + mi3_term))
mi4_act=int(mi.iloc[3,2])
mi4_term=int(mi.iloc[7,2])
mi4_total=int((mi4_act + mi4_term))
rate_mi1_act=(mi1_act/mi1_total*100)
rate_mi1_term=(mi1_term/mi1_total*100)
rate_mi2_act=(mi2_act/mi2_total*100)
rate_mi2_term=(mi2_term/mi2_total*100)
rate_mi3_act=(mi3_act/mi3_total*100)
rate_mi3_term=(mi3_term/mi3_total*100)
rate_mi4_act=(mi4_act/mi4_total*100)
rate_mi4_term=(mi4_term/mi4_total*100)
mi_dict={}
mi_dict['factor'] = 'Monthly Income'
mi_dict['category'] = labels
mi_dict['counts'] = [mi1_total, mi2_total, mi3_total, mi4_total]
mi_dict['Active'] = [rate_mi1_act, rate_mi2_act, rate_mi3_act, rate_mi4_act]
mi_dict['Terminated'] = [rate_mi1_term, rate_mi2_term, rate_mi3_term, rate_mi4_term]
# Stock Options
so_df=df.groupby(["Stock Option Level","Employment Status"]).count().reset_index()
so=so_df["Age"].astype(float)
actso1 = so[0]
terso1 = so[1]
allso1 = so[0] + so[1]
actso2 = so[2]
terso2 = so[3]
allso2 = so[2] + so[3]
actso3 = so[4]
terso3 = so[5]
allso3 = so[4] + so[5]
actso4 = so[6]
terso4 = so[7]
allso4 = so[6] + so[7]
rate_actso1 = (actso1/allso1*100).round(1)
rate_actso2 = (actso2/allso2*100).round(1)
rate_actso3 = (actso3/allso3*100).round(1)
rate_actso4 = (actso4/allso4*100).round(1)
rate_termso1 = (terso1/allso1*100).round(1)
rate_termso2 = (terso2/allso2*100).round(1)
rate_termso3 = (terso3/allso3*100).round(1)
rate_termso4 = (terso4/allso4*100).round(1)
so_dict = {}
so_dict['factor'] = 'Stock Option Level'
so_dict['category'] = ['1','2','3','4']
so_dict['counts'] = [allso1,allso2,allso3,allso4]
so_dict['Active'] = [rate_actso1, rate_actso2, rate_actso3, rate_actso4]
so_dict['Terminated'] = [rate_termso1, rate_termso2, rate_termso3, rate_termso4]
bins = [0,1,3,5,7]
labels=['None','1 or 2', '3 or 4', '5 or 6']
groupsT = df.groupby(['Employment Status', pd.cut(df['Training Last Year'], bins=bins, labels=labels)])
tr=groupsT.size().reset_index().rename(columns={0:"Count","Training Last Year":"Trainings Last Year"})
tr.head(8)
tr1_act=int(tr.iloc[0,2])
tr1_term=int(tr.iloc[4,2])
tr1_total=int((tr1_act + tr1_term))
tr2_act=int(tr.iloc[1,2])
tr2_term=int(tr.iloc[5,2])
tr2_total=int((tr2_act + tr2_term))
tr3_act=int(tr.iloc[2,2])
tr3_term=int(tr.iloc[6,2])
tr3_total=int((tr3_act + tr3_term))
tr4_act=int(tr.iloc[3,2])
tr4_term=int(tr.iloc[7,2])
tr4_total=int((tr4_act + tr4_term))
rate_tr1_act=(tr1_act/tr1_total*100)
rate_tr1_term=(tr1_term/tr1_total*100)
rate_tr2_act=(tr2_act/tr2_total*100)
rate_tr2_term=(tr2_term/tr2_total*100)
rate_tr3_act=(tr3_act/tr3_total*100)
rate_tr3_term=(tr3_term/tr3_total*100)
rate_tr4_act=(tr4_act/tr4_total*100)
rate_tr4_term=(tr4_term/tr4_total*100)
tr_dict={}
tr_dict['factor'] = 'Training Last Year'
tr_dict['category'] = labels
tr_dict['counts'] = [tr1_total, tr2_total, tr3_total, tr4_total]
tr_dict['Active'] = [rate_tr1_act, rate_tr2_act, rate_tr3_act, rate_tr4_act]
tr_dict['Terminated'] = [rate_tr1_term, rate_tr2_term, rate_tr3_term, rate_tr4_term]
bins = [0, 10, 20, 30]
labels=['<10 mi','10-19 mi','20-29 mi']
groups = df.groupby(['Employment Status', pd.cut(df['Commute (Miles)'], bins=bins, labels=labels)])
cm=groups.size().reset_index().rename(columns={0:"Count"})
cm.head(6)
cm1_act=int(cm.iloc[0,2])
cm1_term=int(cm.iloc[3,2])
cm1_total=int((cm1_act + cm1_term))
cm2_act=int(cm.iloc[1,2])
cm2_term=int(cm.iloc[4,2])
cm2_total=int((cm2_act + cm2_term))
cm3_act=int(cm.iloc[2,2])
cm3_term=int(cm.iloc[5,2])
cm3_total=int((cm3_act + cm3_term))
rate_cm1_act=(cm1_act/cm1_total*100)
rate_cm1_term=(cm1_term/cm1_total*100)
rate_cm2_act=(cm2_act/cm2_total*100)
rate_cm2_term=(cm2_term/cm2_total*100)
rate_cm3_act=(cm3_act/cm3_total*100)
rate_cm3_term=(cm3_term/cm3_total*100)
cm_dict={}
cm_dict['factor'] = 'Commute (Miles)'
cm_dict['category'] = labels
cm_dict['counts'] = [cm1_total, cm2_total, cm3_total]
cm_dict['Active'] = [rate_cm1_act, rate_cm2_act, rate_cm3_act]
cm_dict['Terminated'] = [rate_cm1_term, rate_cm2_term, rate_cm3_term]
```
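The per-factor blocks above all repeat the same group-count-and-divide pattern; as an aside (a sketch that assumes the cleaned `df` built earlier in this notebook), `pd.crosstab` with `normalize='index'` yields the same active/terminated rates per category in a couple of lines:
```
# Active/Terminated percentage by category, e.g. for Department,
# equivalent to the manual hr_/rd_/sales_ blocks above.
import pandas as pd

dept_counts = df['Department'].value_counts()
dept_rates = pd.crosstab(df['Department'], df['Employment Status'], normalize='index').mul(100).round(1)
print(dept_counts)
print(dept_rates)
```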
| github_jupyter |
# Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
```
## Reading and plotting the data
```
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```
## TODO: Implementing the basic functions
Here is your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
```
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
exp = np.exp(-x)
return 1/(1+exp)
# Output (prediction) formula
def output_formula(features, weights, bias):
return sigmoid(np.dot(features,weights)+bias)
# Error (log-loss) formula
def error_formula(y, output):
return -y*np.log(output)- (1 - y) * np.log(1-output)
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
output = output_formula(x , weights , bias)
error = y- output
weights += learnrate*x*error
bias += learnrate*error
return weights, bias
# # Activation (sigmoid) function
# def sigmoid(x):
# return 1 / (1 + np.exp(-x))
# def output_formula(features, weights, bias):
# return sigmoid(np.dot(features, weights) + bias)
# def error_formula(y, output):
# return - y*np.log(output) - (1 - y) * np.log(1-output)
# def update_weights(x, y, weights, bias, learnrate):
# output = output_formula(x, weights, bias)
# d_error = y - output
# weights += learnrate * d_error * x
# bias += learnrate * d_error
# return weights, bias
```
## Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
```
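As an aside (not part of the original exercise), the per-sample loop above can also be expressed as a single vectorized update per epoch; a minimal sketch that reuses the helper functions defined earlier:
```
# A vectorized variant of one training epoch (assumes output_formula defined above
# and numpy arrays for features/targets); it applies the mean gradient once per
# epoch instead of updating after every sample.
def train_epoch_vectorized(features, targets, weights, bias, learnrate):
    output = output_formula(features, weights, bias)      # predictions for all rows
    d_error = targets - output                            # (y - y_hat) for all rows
    weights = weights + learnrate * np.dot(features.T, d_error) / len(targets)
    bias = bias + learnrate * d_error.mean()
    return weights, bias
```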
## Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
```
train(X, y, epochs, learnrate, True)
```
| github_jupyter |
```
import json
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.special import comb
from tabulate import tabulate
%matplotlib inline
```
## Expected numbers on Table 3.
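The quantity computed below is the classic coupon-collector expectation: with a uniform label distribution over $C$ classes, the expected number of i.i.d. draws needed to observe every class at least once is
$$\mathbb{E}[K+1] = C \sum_{k=1}^{C} \frac{1}{k},$$
and for a non-uniform distribution with class probabilities $p_i$ (the ImageNet case) the same expectation can be written as
$$\mathbb{E}[K+1] = \int_0^{\infty} \Big(1 - \prod_i \big(1 - e^{-p_i t}\big)\Big)\, dt,$$
which is what the `integrand` call below evaluates numerically.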
```
rows = []
datasets = {
'Binary': 2,
'AG news': 4,
'CIFAR10': 10,
'CIFAR100': 100,
'Wiki3029': 3029,
}
def expectations(C: int) -> float:
"""
C is the number of latent classes.
"""
e = 0.
for k in range(1, C + 1):
e += C / k
return e
for dataset_name, C in datasets.items():
e = expectations(C)
rows.append((dataset_name, C, np.ceil(e)))
# ImageNet is non-uniform label distribution on the training dataset
data = json.load(open("./imagenet_count.json"))
counts = np.array(list(data.values()))
total_num = np.sum(counts)
prob = counts / total_num
def integrand(t: float, prob: np.ndarray) -> float:
return 1. - np.prod(1 - np.exp(-prob * t))
rows.append(("ImageNet", len(prob), np.ceil(quad(integrand, 0, np.inf, args=(prob))[0])))
print(tabulate(rows, headers=["Dataset", "\# classes", "\mathbb{E}[K+1]"]))
```
## Probability $\upsilon$
```
def prob(C, N):
"""
C: the number of latent class
N: the number of samples to draw
"""
theoretical = []
for n in range(C, N + 1):
p = 0.
for m in range(C - 1):
p += comb(C - 1, m) * ((-1) ** m) * np.exp((n - 1) * np.log(1. - (m + 1) / C))
theoretical.append((n, max(p, 0.)))
return np.array(theoretical)
# example of CIFAR-10
C = 10
for N in [32, 63, 128, 256, 512]:
p = np.sum(prob(C, N).T[1])
print("{:3d} {:.7f}".format(N, p))
# example of CIFAR-100
C = 100
ps = []
ns = []
for N in 128 * np.arange(1, 9):
p = np.sum(prob(C, N).T[1])
print("{:4d} {}".format(N, p))
ps.append(p)
ns.append(N)
```
## Simulation
```
n_loop = 10
rnd = np.random.RandomState(7)
labels = np.arange(C).repeat(100)
results = {}
for N in ns:
num_iters = int(len(labels) / N)
total_samples_for_bounds = float(num_iters * N * (n_loop))
for _ in range(n_loop):
rnd.shuffle(labels)
for batch_id in range(len(labels) // N):
if len(set(labels[N * batch_id:N * (batch_id + 1)])) == C:
results[N] = results.get(N, 0.) + N / total_samples_for_bounds
else:
results[N] = results.get(N, 0.) + 0.
xs = []
ys = []
for k, v in results.items():
print(k, v)
ys.append(v)
xs.append(k)
plt.plot(ns, ps, label="Theoretical")
plt.plot(xs, ys, label="Empirical")
plt.ylabel("probability")
plt.xlabel("$K+1$")
plt.title("CIFAR-100 simulation")
plt.legend()
```
| github_jupyter |
# PageRank Performance Benchmarking
# Skip notebook test
This notebook benchmarks the performance of running PageRank within cuGraph against NetworkX. NetworkX contains several implementations of PageRank. This benchmark will compare cuGraph against the default NetworkX implementation as well as the SciPy version.
Notebook Credits
Original Authors: Bradley Rees
Last Edit: 08/16/2020
RAPIDS Versions: 0.15
Test Hardware
GV100 32G, CUDA 10.0
Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz
32GB system memory
### Test Data
| File Name | Num of Vertices | Num of Edges |
|:---------------------- | --------------: | -----------: |
| preferentialAttachment | 100,000 | 999,970 |
| caidaRouterLevel | 192,244 | 1,218,132 |
| coAuthorsDBLP | 299,067 | 1,955,352 |
| dblp-2010 | 326,186 | 1,615,400 |
| citationCiteseer | 268,495 | 2,313,294 |
| coPapersDBLP | 540,486 | 30,491,458 |
| coPapersCiteseer | 434,102 | 32,073,440 |
| as-Skitter | 1,696,415 | 22,190,596 |
### Timing
What is not timed: Reading the data
What is timed: (1) creating a Graph, (2) running PageRank
The data file is read in once for all flavors of PageRank. Each timed block will create a Graph and then execute the algorithm. The results of the algorithm are not compared. If you are interested in seeing the comparison of results, then please see PageRank in the __notebooks__ repo.
## NOTICE
_You must have run the __dataPrep__ script prior to running this notebook so that the data is downloaded_
See the README file in this folder for a description of how to get the data.
## Now load the required libraries
```
# Import needed libraries
import gc
import time
import rmm
import cugraph
import cudf
# NetworkX libraries
import networkx as nx
from scipy.io import mmread
try:
import matplotlib
except ModuleNotFoundError:
    import os
    os.system('pip install matplotlib')
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
```
### Define the test data
```
# Test File
data = {
'preferentialAttachment' : './data/preferentialAttachment.mtx',
'caidaRouterLevel' : './data/caidaRouterLevel.mtx',
'coAuthorsDBLP' : './data/coAuthorsDBLP.mtx',
'dblp' : './data/dblp-2010.mtx',
'citationCiteseer' : './data/citationCiteseer.mtx',
'coPapersDBLP' : './data/coPapersDBLP.mtx',
'coPapersCiteseer' : './data/coPapersCiteseer.mtx',
'as-Skitter' : './data/as-Skitter.mtx'
}
```
### Define the testing functions
```
# Data reader - the file format is MTX, so we will use the reader from SciPy
def read_mtx_file(mm_file):
print('Reading ' + str(mm_file) + '...')
M = mmread(mm_file).asfptype()
return M
# CuGraph PageRank
def cugraph_call(M, max_iter, tol, alpha):
gdf = cudf.DataFrame()
gdf['src'] = M.row
gdf['dst'] = M.col
print('\tcuGraph Solving... ')
t1 = time.time()
# cugraph Pagerank Call
G = cugraph.DiGraph()
G.from_cudf_edgelist(gdf, source='src', destination='dst', renumber=False)
df = cugraph.pagerank(G, alpha=alpha, max_iter=max_iter, tol=tol)
t2 = time.time() - t1
return t2
# Basic NetworkX PageRank
def networkx_call(M, max_iter, tol, alpha):
nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}
for nnz in range(M.getnnz()):
nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]
for nnz in range(M.getnnz()):
M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]])
M = M.tocsr()
if M is None:
raise TypeError('Could not read the input graph')
if M.shape[0] != M.shape[1]:
raise TypeError('Shape is not square')
# should be autosorted, but check just to make sure
if not M.has_sorted_indices:
print('sort_indices ... ')
M.sort_indices()
z = {k: 1.0/M.shape[0] for k in range(M.shape[0])}
print('\tNetworkX Solving... ')
# start timer
t1 = time.time()
Gnx = nx.DiGraph(M)
pr = nx.pagerank(Gnx, alpha, z, max_iter, tol)
t2 = time.time() - t1
return t2
# SciPy PageRank
def networkx_scipy_call(M, max_iter, tol, alpha):
nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}
for nnz in range(M.getnnz()):
nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]
for nnz in range(M.getnnz()):
M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]])
M = M.tocsr()
if M is None:
raise TypeError('Could not read the input graph')
if M.shape[0] != M.shape[1]:
raise TypeError('Shape is not square')
# should be autosorted, but check just to make sure
if not M.has_sorted_indices:
print('sort_indices ... ')
M.sort_indices()
z = {k: 1.0/M.shape[0] for k in range(M.shape[0])}
# SciPy Pagerank Call
print('\tSciPy Solving... ')
t1 = time.time()
Gnx = nx.DiGraph(M)
pr = nx.pagerank_scipy(Gnx, alpha, z, max_iter, tol)
t2 = time.time() - t1
return t2
```
### Run the benchmarks
```
# arrays to capture performance gains
time_cu = []
time_nx = []
time_sp = []
perf_nx = []
perf_sp = []
names = []
# init libraries by doing a simple task
v = './data/preferentialAttachment.mtx'
M = read_mtx_file(v)
trapids = cugraph_call(M, 100, 0.00001, 0.85)
del M
for k,v in data.items():
gc.collect()
# Saved the file Name
names.append(k)
# read the data
M = read_mtx_file(v)
# call cuGraph - this will be the baseline
trapids = cugraph_call(M, 100, 0.00001, 0.85)
time_cu.append(trapids)
# Now call NetworkX
tn = networkx_call(M, 100, 0.00001, 0.85)
speedUp = (tn / trapids)
perf_nx.append(speedUp)
time_nx.append(tn)
# Now call SciPy
tsp = networkx_scipy_call(M, 100, 0.00001, 0.85)
speedUp = (tsp / trapids)
perf_sp.append(speedUp)
time_sp.append(tsp)
print("cuGraph (" + str(trapids) + ") Nx (" + str(tn) + ") SciPy (" + str(tsp) + ")" )
del M
```
### plot the output
```
%matplotlib inline
plt.figure(figsize=(10,8))
bar_width = 0.35
index = np.arange(len(names))
_ = plt.bar(index, perf_nx, bar_width, color='g', label='vs Nx')
_ = plt.bar(index + bar_width, perf_sp, bar_width, color='b', label='vs SciPy')
plt.xlabel('Datasets')
plt.ylabel('Speedup')
plt.title('PageRank Performance Speedup')
plt.xticks(index + (bar_width / 2), names)
plt.xticks(rotation=90)
# Text on the top of each barplot
for i in range(len(perf_nx)):
plt.text(x = (i - 0.55) + bar_width, y = perf_nx[i] + 25, s = round(perf_nx[i], 1), size = 12)
for i in range(len(perf_sp)):
plt.text(x = (i - 0.1) + bar_width, y = perf_sp[i] + 25, s = round(perf_sp[i], 1), size = 12)
plt.legend()
plt.show()
```
# Dump the raw stats
```
perf_nx
perf_sp
time_cu
time_nx
time_sp
```
___
Copyright (c) 2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
___
| github_jupyter |
```
!pip install contractions
import pandas as pd
import boto3, sys, sagemaker
import numpy as np
import nltk
import string
import contractions
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer
# plt.xticks(rotation=70)
pd.options.mode.chained_assignment = None
pd.set_option('display.max_colwidth', 100)
%matplotlib inline
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
df = pd.read_csv('cleanedText.csv', index_col=0)
df['no_punc'][1]
table = df.loc[:,['TEXT']]
nltk.download('punkt')
table['tokenizedSent'] = table['TEXT'].apply(sent_tokenize)
table['tokenizedSent'][1]
table['tokenized_words'] = table['tokenizedSent'].apply(lambda x: [word_tokenize(sent) for sent in x])
punc = string.punctuation
table['tokenized_words'][1]
table['no_punc/numbers'] = table['tokenized_words'].apply(lambda x: [[word.lower() for word in sent if word not in punc and word.isalpha()] for sent in x])
table['no_punc/numbers'][1]
nltk.download('stopwords')
stop_words = set(stopwords.words('english'))
table['stopwords_removed'] = table['no_punc/numbers'].apply(lambda x: [[word for word in sent if word not in stop_words]for sent in x])
table['stopwords_removed'][1]
nltk.download('averaged_perceptron_tagger')
table['pos_tags'] = table['stopwords_removed'].apply(lambda x: [nltk.tag.pos_tag(sent) for sent in x if len(sent)> 0 ])
table['pos_tags'][1]
nltk.download('wordnet')
def get_wordnet_pos(tag):
if tag.startswith('J'):
return wordnet.ADJ
elif tag.startswith('V'):
return wordnet.VERB
elif tag.startswith('N'):
return wordnet.NOUN
elif tag.startswith('R'):
return wordnet.ADV
else:
return wordnet.NOUN
table['wordnet_pos'] = table['pos_tags'].apply(lambda x: [[(word, get_wordnet_pos(pos_tag)) for (word, pos_tag) in sent]for sent in x])
table['wordnet_pos'][1]
wnl = WordNetLemmatizer()
table['lemmatized'] = table['wordnet_pos'].apply(lambda x: [[wnl.lemmatize(word, tag) for word, tag in sent] for sent in x])
table.head()
table.to_csv('sentenceRowClean.csv')
!pip install gensim
import logging
from gensim.models.word2vec import Word2Vec
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
w2v_model2 = Word2Vec(table['lemmatized'], vector_size=100, min_count=2)
y = list(table['lemmatized'])
sentences = []
for row in y:
for sent in row:
sentences.append(sent)
import logging
from gensim.models.word2vec import Word2Vec
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
w2v_model2 = Word2Vec(sentences, vector_size=100, min_count=2)
w2v_model2.wv.most_similar(['disease'])
w2v_model2.wv.most_similar(['cancer'])
w2v_model2.save('w2vec2')
w2v_model2.wv['disease']  # look up a trained word vector (the original `model1[w]` referenced undefined names)
```
| github_jupyter |
# Langmuir-enhanced entrainment
This notebook reproduces Fig. 15 of [Li et al., 2019](https://doi.org/10.1029/2019MS001810).
```
import sys
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
sys.path.append("../../../gotmtool")
from gotmtool import *
def plot_hLL_dpedt(hLL, dpedt, casename_list, ax=None, xlabel_on=True):
if ax is None:
ax = plt.gca()
idx_WD05 = [('WD05' in casename) for casename in casename_list]
idx_WD08 = [('WD08' in casename) for casename in casename_list]
idx_WD10 = [('WD10' in casename) for casename in casename_list]
b0_str = [casename[2:4] for casename in casename_list]
b0 = np.array([float(tmp[0])*100 if 'h' in tmp else float(tmp) for tmp in b0_str])
b0_min = b0.min()
b0_max = b0.max()
ax.plot(hLL, dpedt, color='k', linewidth=1, linestyle=':', zorder=1)
im = ax.scatter(hLL[idx_WD05], dpedt[idx_WD05], c=b0[idx_WD05], marker='d', edgecolors='k',
linewidth=1, zorder=2, label='$U_{10}=5$ m s$^{-1}$', cmap='bone_r', vmin=b0_min, vmax=b0_max)
ax.scatter(hLL[idx_WD08], dpedt[idx_WD08], c=b0[idx_WD08], marker='s', edgecolors='k',
linewidth=1, zorder=2, label='$U_{10}=8$ m s$^{-1}$', cmap='bone_r', vmin=b0_min, vmax=b0_max)
ax.scatter(hLL[idx_WD10], dpedt[idx_WD10], c=b0[idx_WD10], marker='^', edgecolors='k',
linewidth=1, zorder=2, label='$U_{10}=10$ m s$^{-1}$', cmap='bone_r', vmin=b0_min, vmax=b0_max)
ax.legend(loc='upper left')
# add colorbar
ax_inset = inset_axes(ax, width="30%", height="3%", loc='lower right',
bbox_to_anchor=(-0.05, 0.1, 1, 1),
bbox_transform=ax.transAxes,
borderpad=0,)
cb = plt.colorbar(im, cax=ax_inset, orientation='horizontal', shrink=0.35,
ticks=[5, 100, 300, 500])
cb.ax.set_xticklabels(['-5','-100','-300','-500'])
ax.text(0.75, 0.2, '$Q_0$ (W m$^{-2}$)', color='black', transform=ax.transAxes,
fontsize=10, va='top', ha='left')
# get axes ratio
ll, ur = ax.get_position() * plt.gcf().get_size_inches()
width, height = ur - ll
axes_ratio = height / width
# add arrow and label
add_arrow(ax, 0.6, 0.2, 0.3, 0.48, axes_ratio, color='gray', text='Increasing Convection')
add_arrow(ax, 0.3, 0.25, -0.2, 0.1, axes_ratio, color='black', text='Increasing Langmuir')
add_arrow(ax, 0.65, 0.75, -0.25, 0.01, axes_ratio, color='black', text='Increasing Langmuir')
ax.set_xscale('log')
ax.set_yscale('log')
if xlabel_on:
ax.set_xlabel('$h/\kappa L$', fontsize=14)
ax.set_ylabel('$d\mathrm{PE}/dt$', fontsize=14)
ax.set_xlim([3e-3, 4e1])
ax.set_ylim([2e-4, 5e-2])
# set the tick labels font
for label in (ax.get_xticklabels() + ax.get_yticklabels()):
label.set_fontsize(14)
def plot_hLL_R(hLL, R, colors, legend_list, ax=None, xlabel_on=True):
if ax is None:
ax = plt.gca()
ax.axhline(y=1, linewidth=1, color='black')
nm = R.shape[0]
for i in np.arange(nm):
ax.scatter(hLL, R[i,:], color=colors[i], edgecolors='k', linewidth=0.5, zorder=10)
ax.set_xscale('log')
ax.set_xlim([3e-3, 4e1])
if xlabel_on:
ax.set_xlabel('$h/L_L$', fontsize=14)
ax.set_ylabel('$R$', fontsize=14)
# set the tick labels font
for label in (ax.get_xticklabels() + ax.get_yticklabels()):
label.set_fontsize(14)
# legend
if nm > 1:
xshift = 0.2 + 0.05*(11-nm)
xx = np.arange(nm)+1
xx = xx*0.06+xshift
yy = np.ones(xx.size)*0.1
for i in np.arange(nm):
ax.text(xx[i], yy[i], legend_list[i], color='black', transform=ax.transAxes,
fontsize=12, rotation=45, va='bottom', ha='left')
ax.scatter(xx[i], 0.07, s=60, color=colors[i], edgecolors='k', linewidth=1, transform=ax.transAxes)
def add_arrow(ax, x, y, dx, dy, axes_ratio, color='black', text=None):
ax.arrow(x, y, dx, dy, width=0.006, color=color, transform=ax.transAxes)
if text is not None:
dl = np.sqrt(dx**2+dy**2)
xx = x + 0.5*dx + dy/dl*0.06
yy = y + 0.5*dy - dx/dl*0.06
angle = np.degrees(np.arctan(dy/dx*axes_ratio))
ax.text(xx, yy, text, color=color, transform=ax.transAxes, fontsize=11,
rotation=angle, va='center', ha='center')
```
### Load LF17 data
```
# load LF17 data
lf17_data = np.load('LF17_dPEdt.npz')
us0 = lf17_data['us0']
b0 = lf17_data['b0']
ustar = lf17_data['ustar']
hb = lf17_data['hb']
dpedt = lf17_data['dpedt']
casenames = lf17_data['casenames']
ncase = len(casenames)
# get parameter h/L_L= w*^3/u*^2/u^s(0)
inds = us0==0
us0[inds] = np.nan
hLL = b0*hb/ustar**2/us0
```
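Written out, the parameter assembled in the last line above is
$$\frac{h}{L_L} = \frac{B_0\, h_b}{u_*^2\, u^s_0} = \frac{w_*^3}{u_*^2\, u^s_0},$$
with $B_0$ the surface buoyancy flux, $h_b$ the boundary-layer depth, $u_*$ the friction velocity, $u^s_0$ the surface Stokes drift, and $w_*^3 = B_0 h_b$ the cube of the convective velocity scale referred to in the code comment.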
### Compute the rate of change in potential energy in GOTM runs
```
turbmethods = [
'GLS-C01A',
'KPP-CVMix',
'KPPLT-VR12',
'KPPLT-LF17',
]
ntm = len(turbmethods)
cmap = cm.get_cmap('rainbow')
if ntm == 1:
colors = ['gray']
else:
colors = cmap(np.linspace(0,1,ntm))
m = Model(name='Entrainment-LF17', environ='../../.gotm_env.yaml')
gotmdir = m.environ['gotmdir_run']+'/'+m.name
print(gotmdir)
# Coriolis parameter (s^{-1})
f = 4*np.pi/86400*np.sin(np.pi/4)
# Inertial period (s)
Ti = 2*np.pi/f
# get dPEdt from GOTM run
rdpedt = np.zeros([ntm, ncase])
for i in np.arange(ntm):
print(turbmethods[i])
for j in np.arange(ncase):
sim = Simulation(path=gotmdir+'/'+casenames[j]+'/'+turbmethods[i])
var_gotm = sim.load_data().Epot
epot_gotm = var_gotm.data.squeeze()
dtime = var_gotm.time - var_gotm.time[0]
time_gotm = (dtime.dt.days*86400.+dtime.dt.seconds).data
# starting index for the last inertial period
t0_gotm = time_gotm[-1]-Ti
tidx0_gotm = np.argmin(np.abs(time_gotm-t0_gotm))
# linear fit
xx_gotm = time_gotm[tidx0_gotm:]-time_gotm[tidx0_gotm]
yy_gotm = epot_gotm[tidx0_gotm:]-epot_gotm[tidx0_gotm]
slope_gotm, intercept_gotm, r_value_gotm, p_value_gotm, std_err_gotm = stats.linregress(xx_gotm,yy_gotm)
rdpedt[i,j] = slope_gotm/dpedt[j]
fig, axarr = plt.subplots(2, 1, sharex='col')
fig.set_size_inches(6, 7)
plt.subplots_adjust(left=0.15, right=0.95, bottom=0.09, top=0.95, hspace=0.1)
plot_hLL_dpedt(hLL, dpedt, casenames, ax=axarr[0])
plot_hLL_R(hLL, rdpedt, colors, turbmethods, ax=axarr[1])
axarr[0].text(0.04, 0.14, '(a)', color='black', transform=axarr[0].transAxes,
fontsize=14, va='top', ha='left')
axarr[1].text(0.88, 0.94, '(b)', color='black', transform=axarr[1].transAxes,
fontsize=14, va='top', ha='left')
```
| github_jupyter |
## Tirmzi Analysis
n=1000 m+=1000 nm-=120 istep= 4 min=150 max=700
```
import sys
sys.path
import matplotlib.pyplot as plt
import numpy as np
import os
from scipy import signal
ls
import capsol.newanalyzecapsol as ac
ac.get_gridparameters
import glob
folders = glob.glob("FortranOutputTest/*/")
folders
all_data= dict()
for folder in folders:
params = ac.get_gridparameters(folder + 'capsol.in')
data = ac.np.loadtxt(folder + 'Z-U.dat')
process_data = ac.process_data(params, data, smoothing=False, std=5*10**-9)
all_data[folder]= (process_data)
all_params= dict()
for folder in folders:
params=ac.get_gridparameters(folder + 'capsol.in')
all_params[folder]= (params)
all_data
all_data.keys()
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
data=all_data[key]
thickness =all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')
plt.title('C v. Z for 1nm thick sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for 1nm thick sample 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
data=all_data[key]
thickness =all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')
plt.title('C v. Z for 10nm thick sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
data=all_data[key]
thickness =all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')
plt.title('C v. Z for 100nm sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
data=all_data[key]
thickness =all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
plt.plot(data['z'], data['c'], label= f'{rtip} nm, {er}, {thickness} nm')
plt.title('C v. Z for 500nm sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for varying sample thickness, 06-28-2021.png")
```
Cut off the last experiment because the capacitance was off the scale.
```
for params in all_params.values():
print(params['Thickness_sample'])
print(params['m-'])
all_params
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(4,-3)
plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Cz vs. Z for 1.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(4,-3)
plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Cz vs. Z for 10.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(4,-3)
plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Cz vs. Z for 100.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(4,-3)
plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Cz vs. Z for 500.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")
hoepker_data= np.loadtxt("Default Dataset (2).csv" , delimiter= ",")
hoepker_data
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(5,-5)
plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Czz vs. Z for 1.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
params
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(5,-5)
plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Czz vs. Z for 10.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(5,-5)
plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Czz vs. Z for 100.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(5,-5)
plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Czz vs. Z for 500.0 nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(8,-8)
plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('alpha vs. Z for 1.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Alpha v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(8,-8)
plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Alpha vs. Z for 10.0 nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(8,-8)
plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Alpha vs. Z for 100.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
data=all_data[key]
thickness=all_params[key]['Thickness_sample']
rtip= all_params[key]['Rtip']
er=all_params[key]['eps_r']
s=slice(8,-8)
plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm' )
plt.title('Alpha vs. Z for 500.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")
data
from scipy.optimize import curve_fit
def Cz_model(z, a, n, b,):
return(a*z**n + b)
all_data.keys()
data= all_data['capsol-calc\\0001-capsol\\']
z= data['z'][1:-1]
cz= data['cz'][1:-1]
popt, pcov= curve_fit(Cz_model, z, cz, p0=[cz[0]*z[0], -1, 0])
a=popt[0]
n=popt[1]
b=popt[2]
std_devs= np.sqrt(pcov.diagonal())
sigma_a = std_devs[0]
sigma_n = std_devs[1]
model_output= Cz_model(z, a, n, b)
rmse= np.sqrt(np.mean((cz - model_output)**2))
f"a= {a} ± {sigma_a}"
f"n= {n}± {sigma_n}"
model_output
"Root Mean Square Error"
rmse/np.mean(-cz)
```
```
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
r = np.random.randn((1000))
S0 = 1
S = np.cumsum(r) + S0
T = 2
mu = 0.
sigma = 0.01
S0 = 20
dt = 0.01
N = round(T/dt)
t = np.linspace(0, T, N)
W = np.random.standard_normal(size = N)
W = np.cumsum(W)*np.sqrt(dt) ### standard brownian motion ###
X = (mu-0.5*sigma**2)*t + sigma*W
S = S0*np.exp(X) ### geometric brownian motion ###
plt.plot(t, S)
plt.show()
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from blackscholes import geometric_brownian_motion, blackScholes
from scipy.stats import norm
geometric_brownian_motion(mu=0., sigma=0.01, s0=1, dt=0.01);
t = 2.
dt = 0.01
N = int(round(t / dt))
np.linspace(0, t, N)
tt = np.linspace(0, t, N)
W = norm((N))
@interact(mu=(-0.02, 0.05, 0.01), sigma=(0.01, 0.1, 0.005), S0=(1,100,10), dt=(0.001, 0.1, 0.001))
def plot_gbm(mu, sigma, S0, dt):
s, t = geometric_brownian_motion(mu=mu, sigma=sigma, t=2, dt=dt, s0=S0)
pd.Series(t, s).plot()
plt.show()
df.loc[0.1:,:].gamma.plot()
tau = np.clip( np.linspace(1.0, .0, 101), 0.0000001, 100)
S = 1.
K = 1.
sigma = 1
df = pd.DataFrame.from_dict(blackScholes(tau, S, K, sigma))
df.index = tau
@interact(mu=(-0.02, 0.05, 0.01), sigma=(0.01, 0.1, 0.005), S0=(1,100,10), dt=(0.001, 0.1, 0.001))
def plot_gbm(mu, sigma, S0, dt):
s, t = geometric_brownian_motion(mu=mu, sigma=sigma, t=2, dt=dt, s0=S0)
pd.Series(t, s).plot()
plt.show()
```
## Q-learning
- Initialize $V(s)$ arbitrarily
- Repeat for each episode
- Initialize s
- Repeat (for each step of episode)
- - $a \leftarrow$ action given by $\pi$ for $s$
- - Take action a, observe reward r, and next state s'
- - $V(s) \leftarrow V(s) + \alpha [r + \gamma V(s') - V(s)]$
- - $s \leftarrow s'$
- until $s$ is terminal
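The update in the last bullet is the tabular TD(0) rule. Below is a minimal sketch of a single update step (an illustration only; the `TD` class imported in the next cell presumably encapsulates this logic, and the toy numbers are assumptions):
```
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.05, gamma=0.1):
    """One tabular TD(0) step: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Toy usage on a 5-state chain: one transition from state 2 to state 3 with reward 1.
V = np.zeros(5)
V = td0_update(V, s=2, r=1.0, s_next=3)
print(V)
```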
```
import td
import scipy as sp
α = 0.05
γ = 0.1
td_learning = td.TD(α, γ)
```
## Black Scholes
$${\displaystyle d_{1}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q+{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$
$${\displaystyle C(S_{t},t)=e^{-r(T-t)}[FN(d_{1})-KN(d_{2})]\,}$$
$${\displaystyle d_{2}=d_{1}-\sigma {\sqrt {T-t}}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q-{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$
```
d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T-t))
d_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T-t))
call = lambda σ, T, t, S, K: S * sp.stats.norm.cdf( d_1(σ, T, t, S, K) ) - K * sp.stats.norm.cdf( d_2(σ, T, t, S, K) )
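# --- Illustration (not part of the original notebook): the same call price with an explicit
# --- risk-free rate r and dividend yield q, matching the formulas displayed above.
# --- The name `bs_call` and its default arguments are assumptions for illustration only.
from scipy.stats import norm as _norm
def bs_call(sigma, T, t, S, K, r=0.0, q=0.0):
    tau = T - t
    d1 = (np.log(S / K) + (r - q + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    F = S * np.exp((r - q) * tau)  # forward price
    # discounted risk-neutral expectation of the call payoff
    return np.exp(-r * tau) * (F * _norm.cdf(d1) - K * _norm.cdf(d2))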
plt.plot(np.linspace(0.1, 4., 100), call(1., 1., .9, np.linspace(0.1, 4., 100), 1.))
d_1(1., 1., 0., 1.9, 1)
plt.plot(d_1(1., 1., 0., np.linspace(0.1, 2.9, 10), 1))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.2, np.linspace(0.01, 1.9, 100), 1)))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.6, np.linspace(0.01, 1.9, 100), 1)))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.9, np.linspace(0.01, 1.9, 100), 1)))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.99, np.linspace(0.01, 1.9, 100), 1)))
def iterate_series(n=1000, S0 = 1):
while True:
r = np.random.randn((n))
S = np.cumsum(r) + S0
yield S, r
def iterate_world(n=1000, S0=1, N=5):
for (s, r) in take(N, iterate_series(n=n, S0=S0)):
t, t_0 = 0, 0
for t in np.linspace(0, len(s)-1, 100):
r = s[int(t)] / s[int(t_0)]
yield r, s[int(t)]
t_0 = t
from cytoolz import take
import gym
import gym_bs
from test_cem_future import *
import pandas as pd
import numpy as np
# df.iloc[3] = (0.2, 1, 3)
df
rwd, df, agent = noisy_evaluation(np.array([0.1, 0, 0]))
rwd
df
agent;
env.observation_space
```
# Plotting with Matplotlib
IPython works with the [Matplotlib](http://matplotlib.org/) plotting library, which integrates Matplotlib with IPython's display system and event loop handling.
## matplotlib mode
To make plots using Matplotlib, you must first enable IPython's matplotlib mode.
To do this, run the `%matplotlib` magic command to enable plotting in the current Notebook.
This magic takes an optional argument that specifies which Matplotlib backend should be used. Most of the time, in the Notebook, you will want to use the `inline` backend, which will embed plots inside the Notebook:
```
%matplotlib inline
```
You can also use Matplotlib GUI backends in the Notebook, such as the Qt backend (`%matplotlib qt`). This will use Matplotlib's interactive Qt UI in a floating window to the side of your browser. Of course, this only works if your browser is running on the same system as the Notebook Server. You can always call the `display` function to paste figures into the Notebook document.
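For example (a hypothetical sketch; the Qt line is commented out because it requires a local Qt installation), you can build a figure and embed a snapshot of it in the notebook with `display`:
```
# %matplotlib qt  # would open an interactive Qt window alongside the browser
from IPython.display import display
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
ax.plot(x, np.sin(x))
display(fig)  # paste the current state of the figure into the notebook document
```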
## Making a simple plot
With matplotlib enabled, plotting should just work.
```
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 3*np.pi, 500)
plt.plot(x, np.sin(x**2))
plt.title('A simple chirp');
```
These images can be resized by dragging the handle in the lower right corner. Double clicking will return them to their original size.
One thing to be aware of is that by default, the `Figure` object is cleared at the end of each cell, so you will need to issue all plotting commands for a single figure in a single cell.
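For example (a small illustrative sketch), both curves below must be drawn in the same cell to end up on the same figure:
```
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 3 * np.pi, 500)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x**2), label='sin')   # both plot calls live in this one cell,
ax.plot(x, np.cos(x**2), label='cos')   # so they appear on the same figure
ax.legend();
```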
## Loading Matplotlib demos with %load
IPython's `%load` magic can be used to load any Matplotlib demo by its URL:
```
# %load http://matplotlib.org/mpl_examples/showcase/integral_demo.py
"""
Plot demonstrating the integral as the area under a curve.
Although this is a simple example, it demonstrates some important tweaks:
* A simple line plot with custom color and line width.
* A shaded region created using a Polygon patch.
* A text label with mathtext rendering.
* figtext calls to label the x- and y-axes.
* Use of axis spines to hide the top and right spines.
* Custom tick placement and labels.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
def func(x):
return (x - 3) * (x - 5) * (x - 7) + 85
a, b = 2, 9 # integral limits
x = np.linspace(0, 10)
y = func(x)
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2)
plt.ylim(bottom=0)
# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
horizontalalignment='center', fontsize=20)
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([])
plt.show()
```
Matplotlib 1.4 introduces an interactive backend for use in the notebook,
called 'nbagg'. You can enable this with `%matplotlib notebook` (the cell below uses `%matplotlib widget`, the equivalent ipympl-based backend).
With this backend, you will get interactive panning and zooming of matplotlib figures in your browser.
```
%matplotlib widget
plt.figure()
x = np.linspace(0, 5 * np.pi, 1000)
for n in range(1, 4):
plt.plot(np.sin(n * x))
plt.show()
```
<a href="https://colab.research.google.com/github/mjvakili/MLcourse/blob/master/day2/nn_qso_finder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Let's start by importing the libraries that we need for this exercise.
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split
#matplotlib settings
matplotlib.rcParams['xtick.major.size'] = 7
matplotlib.rcParams['xtick.labelsize'] = 'x-large'
matplotlib.rcParams['ytick.major.size'] = 7
matplotlib.rcParams['ytick.labelsize'] = 'x-large'
matplotlib.rcParams['xtick.top'] = False
matplotlib.rcParams['ytick.right'] = False
matplotlib.rcParams['ytick.direction'] = 'in'
matplotlib.rcParams['xtick.direction'] = 'in'
matplotlib.rcParams['font.size'] = 15
matplotlib.rcParams['figure.figsize'] = [7,7]
#We need the astroml library to fetch the photometric datasets of sdss qsos and stars
pip install astroml
from astroML.datasets import fetch_dr7_quasar
from astroML.datasets import fetch_sdss_sspp
quasars = fetch_dr7_quasar()
stars = fetch_sdss_sspp()
# Data processing taken from
# https://www.astroml.org/book_figures/chapter9/fig_star_quasar_ROC.html by Jake VanderPlas
# stack colors into matrix X
Nqso = len(quasars)
Nstars = len(stars)
X = np.empty((Nqso + Nstars, 4), dtype=float)
X[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g']
X[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r']
X[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i']
X[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z']
X[Nqso:, 0] = stars['upsf'] - stars['gpsf']
X[Nqso:, 1] = stars['gpsf'] - stars['rpsf']
X[Nqso:, 2] = stars['rpsf'] - stars['ipsf']
X[Nqso:, 3] = stars['ipsf'] - stars['zpsf']
y = np.zeros(Nqso + Nstars, dtype=int)
y[:Nqso] = 1
X = X/np.max(X, axis=0)
# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.9)
#Now let's build a simple Sequential model in which fully connected layers come after one another
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(), #this flattens input
tf.keras.layers.Dense(128, activation = "relu"),
tf.keras.layers.Dense(64, activation = "relu"),
tf.keras.layers.Dense(32, activation = "relu"),
tf.keras.layers.Dense(32, activation = "relu"),
tf.keras.layers.Dense(1, activation="sigmoid")
])
model.compile(optimizer='adam', loss='binary_crossentropy')
history = model.fit(X_train, y_train, validation_data = (X_test, y_test), batch_size = 32, epochs=20, verbose = 1)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.plot(epochs, loss, lw = 5, label='Training loss')
plt.plot(epochs, val_loss, lw = 5, label='validation loss')
plt.title('Loss')
plt.legend(loc=0)
plt.show()
prob = model.predict_proba(X_test) #model probabilities
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_test, prob)
plt.loglog(fpr, tpr, lw = 4)
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0.0, 0.15)
plt.ylim(0.6, 1.01)
plt.show()
plt.plot(thresholds, tpr, lw = 4)
plt.plot(thresholds, fpr, lw = 4)
plt.xlim(0,1)
plt.yscale("log")
plt.show()
#plt.xlabel('false positive rate')
#plt.ylabel('true positive rate')
##plt.xlim(0.0, 0.15)
#plt.ylim(0.6, 1.01)
#Now let's look at the confusion matrix
y_pred = model.predict(X_test)
z_pred = np.zeros(y_pred.shape[0], dtype = int)
mask = np.where(y_pred>.5)[0]
z_pred[mask] = 1
confusion_matrix(y_test, z_pred.astype(int))
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
```
# Exercise 1:
Try to change the number of layers, the batch size, and the default learning rate, one at a time. See which one makes the most significant impact on the performance of the model.
# Exercise 2:
Write a simple function for visualizing the predicted decision boundaries in the feature space. Try to identify the regions of the parameter space which contribute significantly to the false positive rates.
# Exercise 3:
This dataset is a bit imbalanced in that the QSOs are outnumbered by the stars. Can you think of a weighting scheme to pass to the loss function, such that the detection rate of QSOs increases?
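One possible starting point (a sketch; it assumes Keras' `class_weight` argument to `fit`, and the inverse-frequency weights are only an illustration) is to up-weight the rarer QSO class:
```
# Weight each class inversely to its frequency (illustrative scheme).
n_qso = int((y_train == 1).sum())
n_star = int((y_train == 0).sum())
class_weight = {0: 1.0, 1: n_star / n_qso}  # up-weight the rarer QSO class

history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    batch_size=32, epochs=20, verbose=1,
                    class_weight=class_weight)
```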
# Udacity PyTorch Scholarship Final Lab Challenge Guide
**A hands-on guide to get 90% + accuracy and complete the challenge**
**By [Soumya Ranjan Behera](https://www.linkedin.com/in/soumya044)**
## This Tutorial will be divided into Two Parts,
### [1. Model Building and Training](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-1/)
### [2. Submit in Udcaity's Workspace for evaluation](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/)
**Note:** This tutorial is like a template or guide for newbies to overcome the fear of the final lab challenge. My intent is not to promote plagiarism or any means of cheating. Users are encouraged to take this tutorial as a baseline and build their own better model. Cheers!
**Fork this Notebook and Run it from Top-To-Bottom Step by Step**
# Part 1: Build and Train a Model
**Credits:** The dataset credit goes to [Lalu Erfandi Maula Yusnu](https://www.kaggle.com/nunenuh)
## 1. Import the dataset and visualize some data
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
print(os.listdir("../input/"))
# Any results you write to the current directory are saved as output.
```
**Import some visualization Libraries**
```
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
# Set Train and Test Directory Variables
TRAIN_DATA_DIR = "../input/flower_data/flower_data/train/"
VALID_DATA_DIR = "../input/flower_data/flower_data/valid/"
# Visualize some images from a random class directory
FILE_DIR = str(np.random.randint(1,103))
print("Class Directory: ",FILE_DIR)
for file_name in os.listdir(os.path.join(TRAIN_DATA_DIR, FILE_DIR))[1:3]:
img_array = cv2.imread(os.path.join(TRAIN_DATA_DIR, FILE_DIR, file_name))
img_array = cv2.resize(img_array,(224, 224), interpolation = cv2.INTER_CUBIC)
plt.imshow(img_array)
plt.show()
print(img_array.shape)
```
## 2. Data Preprocessing (Image Augmentation)
**Import PyTorch libraries**
```
import torch
import torchvision
from torchvision import datasets, models, transforms
import torch.nn as nn
torch.__version__
```
**Note:** **Look carefully! Kaggle uses v1.0.0 while the Udacity workspace has v0.4.0 (some issues may arise, but we'll solve them)**
```
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
```
**Make a class variable, i.e., a list of target categories (the 102 species)**
```
# I used os.listdir() to maintain the ordering
classes = os.listdir(VALID_DATA_DIR)
```
**Load and Transform (Image Augmentation)**
Source: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
```
# Load and transform data using ImageFolder
# VGG-16 Takes 224x224 images as input, so we resize all of them
data_transform = transforms.Compose([transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
train_data = datasets.ImageFolder(TRAIN_DATA_DIR, transform=data_transform)
test_data = datasets.ImageFolder(VALID_DATA_DIR, transform=data_transform)
# print out some data stats
print('Num training images: ', len(train_data))
print('Num test images: ', len(test_data))
```
### Find more on Image Transforms using PyTorch Here (https://pytorch.org/docs/stable/torchvision/transforms.html)
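For example (an illustrative sketch; which augmentations to use is a design choice, and the variable name is an assumption), the training transform could add random flips and small rotations on top of the resize/crop used above:
```
# A richer training transform (illustrative only)
augmented_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),   # mirror images left/right at random
    transforms.RandomRotation(15),       # small random rotations (degrees)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])])
```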
## 3. Make a DataLoader
```
# define dataloader parameters
batch_size = 32
num_workers=0
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
```
**Visualize Sample Images**
```
# Visualize some sample data
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
plt.imshow(np.transpose(images[idx], (1, 2, 0)))
ax.set_title(classes[labels[idx]])
```
**Here plt.imshow() clips our data to its valid display range to show the images. The warning message is due to our transform function; we can ignore it.**
## 4. Use a Pre-Trained Model (VGG16)
Here we used a VGG16. You can experiment with other models.
References: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/transfer-learning/Transfer_Learning_Solution.ipynb
**Try More Models: ** https://pytorch.org/docs/stable/torchvision/models.html
```
# Load the pretrained model from pytorch
model = models.<ModelNameHere>(pretrained=True)
print(model)
```
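For the VGG16 baseline used in this guide, one way to fill in the placeholder above is:
```
# One possible choice, matching the VGG16 discussion below
model = models.vgg16(pretrained=True)
print(model)
```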
### We can see from the above output that the last layer, i.e., the 6th classifier layer, is a fully-connected layer with in_features=4096 and out_features=1000
```
print(model.classifier[6].in_features)
print(model.classifier[6].out_features)
# The above lines work for vgg only. For other models refer to print(model) and look for last FC layer
```
**Freeze Training for all 'Features Layers', Only Train Classifier Layers**
```
# Freeze training for all "features" layers
for param in model.features.parameters():
param.requires_grad = False
#For models like ResNet or Inception use the following,
# Freeze training for all "features" layers
# for _, param in model.named_parameters():
# param.requires_grad = False
```
## Let's Add our own Last Layer which will have 102 out_features for 102 species
```
# VGG16
n_inputs = model.classifier[6].in_features
#Others
# n_inputs = model.fc.in_features
# add last linear layer (n_inputs -> 102 flower classes)
# new layers automatically have requires_grad = True
last_layer = nn.Linear(n_inputs, len(classes))
# VGG16
model.classifier[6] = last_layer
# Others
#model.fc = last_layer
# if GPU is available, move the model to GPU
if train_on_gpu:
model.cuda()
# check to see that your last layer produces the expected number of outputs
#VGG
print(model.classifier[6].out_features)
#Others
#print(model.fc.out_features)
```
# 5. Specify our Loss Function and Optimizer
```
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = #TODO
# specify optimizer (stochastic gradient descent) and learning rate = 0.01 or 0.001
optimizer = #TODO
```
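Following the comments above (categorical cross-entropy, SGD with a learning rate of 0.01 or 0.001), one possible completion of the TODOs is (a suggestion, not the only valid choice):
```
# One possible completion of the TODOs above
criterion = nn.CrossEntropyLoss()                                # categorical cross-entropy
optimizer = optim.SGD(model.classifier.parameters(), lr=0.001)   # SGD on the (unfrozen) classifier layers
```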
# 6. Train our Model and Save necessary checkpoints
```
# Define epochs (50-200 recommended; 20 used here to keep the run short)
epochs = 20
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf # set initial "min" to infinity
# Some lists to keep track of loss and accuracy during each epoch
epoch_list = []
train_loss_list = []
val_loss_list = []
train_acc_list = []
val_acc_list = []
# Start epochs
for epoch in range(epochs):
#adjust_learning_rate(optimizer, epoch)
# monitor training loss
train_loss = 0.0
val_loss = 0.0
###################
# train the model #
###################
# Set the training mode ON -> Activate Dropout Layers
model.train() # prepare model for training
# Calculate Accuracy
correct = 0
total = 0
# Load Train Images with Labels(Targets)
for data, target in train_loader:
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
if type(output) == tuple:
output, _ = output
# Calculate Training Accuracy
predicted = torch.max(output.data, 1)[1]
# Total number of labels
total += len(target)
# Total correct predictions
correct += (predicted == target).sum()
# calculate the loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*data.size(0)
# calculate average training loss over an epoch
train_loss = train_loss/len(train_loader.dataset)
# Avg Accuracy
accuracy = 100 * correct / float(total)
# Put them in their list
train_acc_list.append(accuracy)
train_loss_list.append(train_loss)
# Implement Validation like K-fold Cross-validation
# Set Evaluation Mode ON -> Turn Off Dropout
model.eval() # Required for Evaluation/Test
# Calculate Test/Validation Accuracy
correct = 0
total = 0
with torch.no_grad():
for data, target in test_loader:
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# Predict Output
output = model(data)
if type(output) == tuple:
output, _ = output
# Calculate Loss
loss = criterion(output, target)
val_loss += loss.item()*data.size(0)
# Get predictions from the maximum value
predicted = torch.max(output.data, 1)[1]
# Total number of labels
total += len(target)
# Total correct predictions
correct += (predicted == target).sum()
# calculate average training loss and accuracy over an epoch
val_loss = val_loss/len(test_loader.dataset)
accuracy = 100 * correct/ float(total)
# Put them in their list
val_acc_list.append(accuracy)
val_loss_list.append(val_loss)
# Print the Epoch and Training Loss Details with Validation Accuracy
print('Epoch: {} \tTraining Loss: {:.4f}\t Val. acc: {:.2f}%'.format(
epoch+1,
train_loss,
accuracy
))
# save model if validation loss has decreased
if val_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
val_loss))
# Save Model State on Checkpoint
torch.save(model.state_dict(), 'model.pt')
valid_loss_min = val_loss
# Move to next epoch
epoch_list.append(epoch + 1)
```
## Load Model State from Checkpoint
```
model.load_state_dict(torch.load('model.pt'))
```
## Save the whole Model (Pickling)
```
#Save/Pickle the Model
torch.save(model, 'classifier.pth')
```
# 7. Visualize Model Training and Validation
```
# Training / Validation Loss
plt.plot(epoch_list,train_loss_list)
plt.plot(val_loss_list)
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.title("Training/Validation Loss vs Number of Epochs")
plt.legend(['Train', 'Valid'], loc='upper right')
plt.show()
# Train/Valid Accuracy
plt.plot(epoch_list,train_acc_list)
plt.plot(val_acc_list)
plt.xlabel("Epochs")
plt.ylabel("Training/Validation Accuracy")
plt.title("Accuracy vs Number of Epochs")
plt.legend(['Train', 'Valid'], loc='best')
plt.show()
```
From the above graphs, we get some really impressive results.
**Overall Accuracy**
```
val_acc = sum(val_acc_list[:]).item()/len(val_acc_list)
print("Validation Accuracy of model = {} %".format(val_acc))
```
# 8. Test our Model Performance
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
img = images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
model.eval() # Required for Evaluation/Test
# get sample outputs
output = model(images)
if type(output) == tuple:
output, _ = output
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(20, 5))
for idx in np.arange(12):
ax = fig.add_subplot(3, 4, idx+1, xticks=[], yticks=[])
plt.imshow(np.transpose(img[idx], (1, 2, 0)))
ax.set_title("Pr: {} Ac: {}".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
```
**We can see that the correctly classified results are marked in green and the misclassified ones in red.**
## 8.1 Test our Model Performance with Gabriele Picco's Program
**Credits: ** **Gabriele Picco** (https://github.com/GabrielePicco/deep-learning-flower-identifier)
**Special Instruction:**
1. **Uncomment the following two code cells while running the notebook.**
2. Comment these two blocks out before **committing**, otherwise you will get a "Too many Output Files" error (Kaggle only).
3. If you find a solution to this then let me know.
```
# !git clone https://github.com/GabrielePicco/deep-learning-flower-identifier
# !pip install airtable
# import sys
# sys.path.insert(0, 'deep-learning-flower-identifier')
# from test_model_pytorch_facebook_challenge import calc_accuracy
# calc_accuracy(model, input_image_size=224, use_google_testset=False)
```
## **Congrats! We got almost 90% accuracy with just a simple configuration!**
(We will get almost 90% accuracy in Gabriele's Test Suite. Just Uncomment above two code cells and see.)
# 9. Export our Model Checkpoint File or Model Pickle File
**Just Right-click on Below link and Copy the Link**
**And Proceed to [Part 2 Tutorial](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/)**
## Links Here:
**Model State Checkpoint File: [model.pt](./model.pt)** (Preferred)
**Classifier Pickle File: [classifier.pth](./classifier.pth)**
(Right-click on model.pt and copy the link address)
* If the links don't work then just modify the (link) as ./model.pt or ./classifier.pth
# **Proceed To Part 2: [Click Here](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/)**
# Thank You
If you liked this kernel please **Upvote**. Don't forget to drop a comment or suggestion.
### *Soumya Ranjan Behera*
Let's stay Connected! [LinkedIn](https://www.linkedin.com/in/soumya044)
**Happy Coding !**
```
import numpy as np
from scipy import pi
import matplotlib.pyplot as plt
import pickle as cPickle
#Sine wave
N = 128
def get_sine_wave():
x_sin = np.array([0.0 for i in range(N)])
# print(x_sin)
for i in range(N):
# print("h")
x_sin[i] = np.sin(2.0*pi*i/16.0)
plt.plot(x_sin)
plt.title('Sine wave')
plt.show()
y_sin = np.fft.fftshift(np.fft.fft(x_sin[:16], 16))
plt.plot(abs(y_sin))
plt.title('FFT sine wave')
plt.show()
return x_sin
def get_bpsk_carrier():
x = np.fromfile('gnuradio_dumps/bpsk_carrier', dtype = 'float32')
x_bpsk_carrier = x[9000:9000+N]
plt.plot(x_bpsk_carrier)
plt.title('BPSK carrier')
plt.show()
# y_bpsk_carrier = np.fft.fft(x_bpsk_carrier, N)
# plt.plot(abs(y_bpsk_carrier))
# plt.title('FFT BPSK carrier')
# plt.show()
def get_qpsk_carrier():
x = np.fromfile('gnuradio_dumps/qpsk_carrier', dtype = 'float32')
x_qpsk_carrier = x[12000:12000+N]
plt.plot(x_qpsk_carrier)
plt.title('QPSK carrier')
plt.show()
# y_qpsk_carrier = np.fft.fft(x_qpsk_carrier, N)
# plt.plot(abs(y_qpsk_carrier))
# plt.title('FFT QPSK carrier')
# plt.show()
def get_bpsk():
x = np.fromfile('gnuradio_dumps/bpsk', dtype = 'complex64')
x_bpsk = x[9000:9000+N]
plt.plot(x_bpsk.real)
plt.plot(x_bpsk.imag)
plt.title('BPSK')
plt.show()
# y_bpsk = np.fft.fft(x_bpsk, N)
# plt.plot(abs(y_bpsk))
# plt.title('FFT BPSK')
# plt.show()
def get_qpsk():
x = np.fromfile('gnuradio_dumps/qpsk', dtype = 'complex64')
x_qpsk = x[11000:11000+N]
plt.plot(x_qpsk.real)
plt.plot(x_qpsk.imag)
plt.title('QPSK')
plt.show()
# y_qpsk = np.fft.fft(x_bpsk, N)
# plt.plot(abs(y_bqsk))
# plt.title('FFT QPSK')
# plt.show()
def load_dataset(location="../../datasets/radioml.dat"):
f = open(location, "rb")
ds = cPickle.load(f, encoding = 'latin-1')
return ds
def get_from_dataset(dataset, key):
"""Returns complex version of dataset[key][500]"""
xr = dataset[key][500][0]
xi = dataset[key][500][1]
plt.plot(xr)
plt.plot(xi)
plt.title(key)
plt.show()
return xr
x_sin = get_sine_wave()
x_bpsk_carrier = get_bpsk_carrier()
x_qpsk_carrier = get_qpsk_carrier()
x_bpsk = get_bpsk()
x_qpsk = get_qpsk()
ds = load_dataset()
x_amssb = get_from_dataset(dataset=ds, key=('AM-SSB', 16))
x_amdsb = get_from_dataset(dataset=ds, key= ('AM-DSB', 18))
x_gfsk = get_from_dataset(dataset=ds, key=('GFSK', 18))
nfft = 16
cyclo_averaging = 8
offsets = [0,1,2,3,4,5,6,7]
def compute_cyclo_fft(data, nfft):
data_reshape = np.reshape(data, (-1, nfft))
y = np.fft.fftshift(np.fft.fft(data_reshape, axis=1), axes=1)
return y.T
def compute_cyclo_ifft(data, nfft):
return np.fft.fftshift(np.fft.fft(data))
def single_fft_cyclo(fft, offset):
left = np.roll(fft, -offset)
right = np.roll(fft, offset)
spec = right * np.conj(left)
return spec
def create_sc(spec, offset):
left = np.roll(spec, -offset)
right = np.roll(spec, offset)
denom = left * right
denom_norm = np.sqrt(denom)
return np.divide(spec, denom_norm)
def cyclo_stationary(data):
# fft
cyc_fft = compute_cyclo_fft(data, nfft)
# average
num_ffts = int(cyc_fft.shape[0])
cyc_fft = cyc_fft[:num_ffts]
cyc_fft = np.mean(np.reshape(cyc_fft, (nfft, cyclo_averaging)), axis=1)
print(cyc_fft)
plt.title('cyc_fft')
plt.plot(abs(cyc_fft))
plt.show()
specs = np.zeros((len(offsets)*16), dtype=np.complex64)
scs = np.zeros((len(offsets)*16), dtype=np.complex64)
cdp = {offset: 0 for offset in offsets}
for j, offset in enumerate(offsets):
spec = single_fft_cyclo(cyc_fft, offset)
print(spec)
plt.plot(abs(spec))
plt.title(offset)
plt.show()
sc = create_sc(spec, offset)
specs[j*16:j*16+16] = spec
scs[j*16:j*16+16] = sc
cdp[offset] = max(sc)
return specs, scs, cdp
specs, scs, cdp = cyclo_stationary(x_sin)
plt.plot(np.arange(128), scs.real)
plt.plot(np.arange(128), scs.imag)
plt.show()
```
# Exercise: Find correspondences between old and modern English
The purpose of this exercise is to use two vecsigrafos, one built on UMBC and WordNet and another one produced by directly running Swivel against a corpus of Shakespeare's complete works, to try to find correlations between old and modern English, e.g. "thou" -> "you", "dost" -> "do", "raiment" -> "clothing". For example, you can try to pick a set of 100 words in the "ye olde" English corpus and see how they correlate to UMBC over WordNet.

Next, we prepare the embeddings from the Shakespeare corpus and load a UMBC vecsigrafo, which will provide the two vector spaces to correlate.
## Download a small text corpus
First, we download the corpus into our environment. We will use Shakespeare's complete works corpus, published as part of Project Gutenberg and publicly available.
```
import os
%ls
#!rm -r tutorial
!git clone https://github.com/HybridNLP2018/tutorial
```
Let us see if the corpus is where we think it is:
```
%cd tutorial/lit
%ls
```
Downloading Swivel
```
!wget http://expertsystemlab.com/hybridNLP18/swivel.zip
!unzip swivel.zip
!rm swivel/*
!rm swivel.zip
```
## Learn the Swivel embeddings over the Old Shakespeare corpus
### Calculating the co-occurrence matrix
```
corpus_path = '/content/tutorial/lit/shakespeare_complete_works.txt'
coocs_path = '/content/tutorial/lit/coocs'
shard_size = 512
freq=3
!python /content/tutorial/scripts/swivel/prep.py --input={corpus_path} --output_dir={coocs_path} --shard_size={shard_size} --min_count={freq}
%ls {coocs_path} | head -n 10
```
### Learning the embeddings from the matrix
```
vec_path = '/content/tutorial/lit/vec/'
!python /content/tutorial/scripts/swivel/swivel.py --input_base_path={coocs_path} \
--output_base_path={vec_path} \
--num_epochs=20 --dim=300 \
--submatrix_rows={shard_size} --submatrix_cols={shard_size}
```
Checking the content of the 'vec' directory. It should contain checkpoints of the model plus tsv files for the column and row embeddings.
```
os.listdir(vec_path)
```
Converting tsv to bin:
```
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={vec_path}vocab.txt --output={vec_path}vecs.bin \
{vec_path}row_embedding.tsv \
{vec_path}col_embedding.tsv
%ls {vec_path}
```
### Read stored binary embeddings and inspect them
```
import importlib.util
spec = importlib.util.spec_from_file_location("vecs", "/content/tutorial/scripts/swivel/vecs.py")
m = importlib.util.module_from_spec(spec)
spec.loader.exec_module(m)
shakespeare_vecs = m.Vecs(vec_path + 'vocab.txt', vec_path + 'vecs.bin')
```
## Basic method to print the k nearest neighbors for a given word
```
def k_neighbors(vec, word, k=10):
res = vec.neighbors(word)
if not res:
print('%s is not in the vocabulary, try e.g. %s' % (word, vecs.random_word_in_vocab()))
else:
for word, sim in res[:10]:
print('%0.4f: %s' % (sim, word))
k_neighbors(shakespeare_vecs, 'strife')
k_neighbors(shakespeare_vecs,'youth')
```
## Load vecsigrafo from UMBC over WordNet
```
%ls
!wget https://zenodo.org/record/1446214/files/vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
%ls
!tar -xvzf vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
!rm vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
umbc_wn_vec_path = '/content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/'
```
Extracting the vocabulary from the .tsv file:
```
with open(umbc_wn_vec_path + 'vocab.txt', 'w', encoding='utf_8') as f:
with open(umbc_wn_vec_path + 'row_embedding.tsv', 'r', encoding='utf_8') as vec_lines:
vocab = [line.split('\t')[0].strip() for line in vec_lines]
for word in vocab:
print(word, file=f)
```
Converting tsv to bin:
```
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={umbc_wn_vec_path}vocab.txt --output={umbc_wn_vec_path}vecs.bin \
{umbc_wn_vec_path}row_embedding.tsv
%ls
umbc_wn_vecs = m.Vecs(umbc_wn_vec_path + 'vocab.txt', umbc_wn_vec_path + 'vecs.bin')
k_neighbors(umbc_wn_vecs, 'lem_California')
```
# Add your solution to the proposed exercise here
Follow the instructions given in the previous lesson (*Vecsigrafos for curating and interlinking knowledge graphs*) to find correlations between terms in old English extracted from the Shakespeare corpus and terms in modern English extracted from UMBC. You will need to generate a dictionary relating pairs of lemmas between the two vocabularies and use it to produce a pair of translation matrices to transform vectors from one vector space to the other. Then apply the k_neighbors method to identify the correlations.
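A minimal sketch of one possible approach is given below (an illustration only: the seed dictionary pairs and helper names are placeholders, and it assumes the cells above defining `vec_path` and `umbc_wn_vec_path` have been run). It learns a least-squares translation matrix from the Shakespeare space to the UMBC space and then looks up neighbors of the mapped vector:
```
import numpy as np

def load_tsv_embeddings(path):
    """Read a Swivel row_embedding.tsv file into a {word: vector} dict."""
    emb = {}
    with open(path, 'r', encoding='utf_8') as f:
        for line in f:
            parts = line.rstrip().split('\t')
            emb[parts[0]] = np.array([float(x) for x in parts[1:]], dtype=np.float32)
    return emb

shks = load_tsv_embeddings(vec_path + 'row_embedding.tsv')
umbc = load_tsv_embeddings(umbc_wn_vec_path + 'row_embedding.tsv')

# Placeholder seed dictionary of old -> modern lemma pairs; replace with ~100 curated pairs.
pairs = [('thou', 'lem_you'), ('dost', 'lem_do'), ('raiment', 'lem_clothing')]
valid = [(a, b) for a, b in pairs if a in shks and b in umbc]
X = np.stack([shks[a] for a, b in valid])
Y = np.stack([umbc[b] for a, b in valid])

# Least-squares translation matrix W such that X @ W ~ Y
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def neighbors_in_umbc(old_word, k=10):
    """Map an old-English word into the UMBC space and return its nearest neighbors."""
    v = shks[old_word] @ W
    sims = {w: float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-9))
            for w, u in umbc.items()}
    return sorted(sims.items(), key=lambda t: -t[1])[:k]

neighbors_in_umbc('thou')
```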
# Conclusion
This notebook proposes the use of Shakespeare's complete works and UMBC to provide the student with embeddings that can be exploited for different operations between the two vector spaces. Particularly, we propose to identify terms and their correlations over such spaces.
# Acknowledgements
In memory of Dr. Jack Brandabur, whose passion for Shakespeare and Cervantes inspired this notebook.
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 4: Training for Tabular Data**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 4 Material
* Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb)
* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb)
* **Part 4.3: Keras Regression for Deep Neural Networks with RMSE** [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb)
* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb)
* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 4.3: Keras Regression for Deep Neural Networks with RMSE
Regression results are evaluated differently than classification. Consider the following code that trains a neural network for regression on the data set **jh-simple-dataset.csv**.
```
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
# Create train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping
# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(10, activation='relu')) # Hidden 2
model.add(Dense(1)) # Output
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
patience=5, verbose=1, mode='auto', restore_best_weights=True)
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000)
```
### Mean Square Error
The mean square error is the sum of the squared differences between the prediction ($\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired.
$ \mbox{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $
```
from sklearn import metrics
# Predict
pred = model.predict(x_test)
# Measure MSE error.
score = metrics.mean_squared_error(pred,y_test)
print("Final score (MSE): {}".format(score))
```
### Root Mean Square Error
The root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired.
$ \mbox{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $
```
import numpy as np
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
```
### Lift Chart
To generate a lift chart, perform the following activities:
* Sort the data by expected output. Plot the blue line above.
* For every point on the x-axis plot the predicted value for that same data point. This is the green line above.
* The x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high.
* The y-axis is ranged according to the values predicted.
Reading a lift chart:
* The expected and predicted lines should be close. Notice where one is above the other.
* The chart below is most accurate for lower ages.
```
# Regression chart.
def chart_regression(pred, y, sort=True):
t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
if sort:
t.sort_values(by=['y'], inplace=True)
plt.plot(t['y'].tolist(), label='expected')
plt.plot(t['pred'].tolist(), label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Plot the chart
chart_regression(pred.flatten(),y_test)
```
```
# flake8: noqa
##########################################################
# Relative Imports
##########################################################
import sys
from os.path import isfile
from os.path import join
def find_pkg(name: str, depth: int):
if depth <= 0:
ret = None
else:
d = [".."] * depth
path_parts = d + [name, "__init__.py"]
if isfile(join(*path_parts)):
ret = d
else:
ret = find_pkg(name, depth - 1)
return ret
def find_and_ins_syspath(name: str, depth: int):
path_parts = find_pkg(name, depth)
if path_parts is None:
raise RuntimeError("Could not find {}. Try increasing depth.".format(name))
path = join(*path_parts)
if path not in sys.path:
sys.path.insert(0, path)
try:
import caldera
except ImportError:
find_and_ins_syspath("caldera", 3)
##########################################################
# Main
##########################################################
import copy
import hydra
from examples.traversals.training import TrainingModule
from examples.traversals.data import DataGenerator, DataConfig
from examples.traversals.configuration import Config
from examples.traversals.configuration.data import Uniform, DiscreteUniform
from typing import TypeVar
from pytorch_lightning import Trainer
from examples.traversals.loggers import logger
from omegaconf import DictConfig, OmegaConf
from rich.panel import Panel
from rich import print
from rich.syntax import Syntax
C = TypeVar("C")
def prime_the_model(model: TrainingModule, config: Config):
logger.info("Priming the model with data")
config_copy: DataConfig = copy.deepcopy(config.data)
config_copy.train.num_graphs = 10
config_copy.eval.num_graphs = 0
data_copy = DataGenerator(config_copy, progress_bar=False)
for a, b in data_copy.train_loader():
model.model.forward(a, 10)
break
def print_title():
print(Panel("Training Example: [red]Traversal", title="[red]caldera"))
def print_model(model: TrainingModule):
print(Panel("Network", expand=False))
print(model)
def print_yaml(cfg: Config):
print(Panel("Configuration", expand=False))
print(Syntax(OmegaConf.to_yaml(cfg), "yaml"))
# def config_override(cfg: DictConfig):
# # defaults
# cfg.hyperparameters.lr = 1e-3
# cfg.hyperparameters.train_core_processing_steps = 10
# cfg.hyperparameters.eval_core_processing_steps = 10
#
# cfg.data.train.num_graphs = 5000
# cfg.data.train.num_nodes = DiscreteUniform(10, 100)
# cfg.data.train.density = Uniform(0.01, 0.03)
# cfg.data.train.path_length = DiscreteUniform(5, 10)
# cfg.data.train.composition_density = Uniform(0.01, 0.02)
# cfg.data.train.batch_size = 512
# cfg.data.train.shuffle = False
#
# cfg.data.eval.num_graphs = 500
# cfg.data.eval.num_nodes = DiscreteUniform(10, 100)
# cfg.data.eval.density = Uniform(0.01, 0.03)
# cfg.data.eval.path_length = DiscreteUniform(5, 10)
# cfg.data.eval.composition_density = Uniform(0.01, 0.02)
# cfg.data.eval.batch_size = "${data.eval.num_graphs}"
# cfg.data.eval.shuffle = False
# @hydra.main(config_path="conf", config_name="config")
# def main(hydra_cfg: DictConfig):
# print_title()
# logger.setLevel(hydra_cfg.log_level)
# if hydra_cfg.log_level.upper() == 'DEBUG':
# verbose = True
# else:
# verbose = False
# # really unclear why hydra has so many unclear validation issues with structure configs using ConfigStore
# # this correctly assigns the correct structured config
# # and updates from the passed hydra config
# # annoying... but this resolves all these issues
# cfg = OmegaConf.structured(Config())
# cfg.update(hydra_cfg)
# # debug
# if verbose:
# print_yaml(cfg)
# from pytorch_lightning.loggers import WandbLogger
# wandb_logger = WandbLogger(project='pytorchlightning')
# # explicitly convert the DictConfig back to Config object
# # has the added benefit of performing validation upfront
# # before any expensive training or logging initiates
# config = Config.from_dict_config(cfg)
# # initialize the training module
# training_module = TrainingModule(config)
# logger.info("Priming the model with data")
# prime_the_model(training_module, config)
# logger.debug(Panel("Model", expand=False))
# if verbose:
# print_model(training_module)
# logger.info("Generating data...")
# data = DataGenerator(config.data)
# data.init()
# logger.info("Beginning training...")
# trainer = Trainer(gpus=config.gpus, logger=wandb_logger)
# trainer.fit(
# training_module,
# train_dataloader=data.train_loader(),
# val_dataloaders=data.eval_loader(),
# )
# if __name__ == "__main__":
# main()
from examples.traversals.configuration import get_config
config = get_config( as_config_class=True)
data = DataGenerator(config.data)
data.init()
training_module = TrainingModule(config)
logger.info("Priming the model with data")
prime_the_model(training_module, config)
dir(data)
from torch import optim
from tqdm.auto import tqdm
import torch
from caldera.data import GraphTuple
def mse_tuple(criterion, device, a, b):
loss = torch.tensor(0.0, dtype=torch.float32, device=device)
assert len(a) == len(b)
for i, (_a, _b) in enumerate(zip(a, b)):
assert _a.shape == _b.shape
l = criterion(_a, _b)
loss += l
return loss
def train(network, loader, cuda: bool = False):
device = 'cpu'
if cuda and torch.cuda.is_available():
device = 'cuda:' + str(torch.cuda.current_device())
network.eval()
network.to(device)
input_batch, target_batch = loader.first()
input_batch = input_batch.detach()
input_batch.to(device)
network(input_batch, 1)
optimizer = optim.AdamW(network.parameters(), lr=1e-2)
loss_func = torch.nn.MSELoss()
losses = []
for epoch in range(20):
print(epoch)
running_loss = 0.
network.train()
for input_batch, target_batch in loader:
optimizer.zero_grad()
out_batch = network(input_batch, 5)[-1]
out_tuple = GraphTuple(out_batch.e, out_batch.x, out_batch.g)
target_tuple = GraphTuple(target_batch.e, target_batch.x, target_batch.g)
loss = mse_tuple(loss_func, device, out_tuple, target_tuple)
loss.backward()
running_loss = running_loss + loss.item()
optimizer.step()
print(running_loss)
losses.append(running_loss)
return losses
# loader = DataLoaders.sigmoid_circuit(1000, 10)
train(training_module.model, data.train_loader())
inp, targ = data.eval_loader().first()
from caldera.transforms.networkx import NetworkxAttachNumpyBool
g = targ.to_networkx_list()[0]
to_bool = NetworkxAttachNumpyBool('node', 'features', 'x')
graphs = to_bool(targ.to_networkx_list())
graphs[0].nodes(data=True)
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
import networkx as nx
%matplotlib inline
def edge_colors(g, key, cmap):
edgecolors = list()
edgelist = list(g.edges)
edgefeat = list()
for e in edgelist:
edata = g.edges[e]
edgefeat.append(edata[key][0].item())
edgefeat = np.array(edgefeat)
edgecolors = cmap(edgefeat)
return edgecolors
def node_colors(g, key, cmap):
nodecolors = list()
nodelist = list(g.nodes)
nodefeat = list()
for n in nodelist:
ndata = g.nodes[n]
nodefeat.append(ndata[key][0].item())
nodefeat = np.array(nodefeat)
nodecolors = cmap(nodefeat)
return nodecolors
def plot_graph(g, ax, cmap, key='features', seed=1):
pos = nx.layout.spring_layout(g, seed=seed)
nx.draw_networkx_edges(g, ax=ax, pos=pos, edge_color=edge_colors(g, key, cmap), arrows=False);
nx.draw_networkx_nodes(g, ax=ax, pos=pos, node_size=10, node_color=node_colors(g, key, cmap))
def comparison_plot(out_g, expected_g):
fig, axes = plt.subplots(1, 2, figsize=(5, 2.5))
axes[0].axis('off')
axes[1].axis('off')
axes[0].set_title("out")
plot_graph(out_g, axes[0], cm.plasma)
axes[1].set_title("expected")
plot_graph(expected_g, axes[1], cm.plasma)
return fig, axes
def validate_compare_plot(trainer, plmodel):
eval_loader = trainer.val_dataloaders[0]
for x, y in eval_loader:
break
plmodel.eval()
y_hat = plmodel.model.forward(x, 10)[-1]
y_graphs = y.to_networkx_list()
y_hat_graphs = y_hat.to_networkx_list()
idx = 0
yg = y_graphs[idx]
yhg = y_hat_graphs[idx]
return comparison_plot(yhg, yg)
fig, axes = validate_compare_plot(trainer, training_module)
from pytorch_lightning.loggers import WandbLogger
wandb_logger = WandbLogger(project='pytorchlightning')
wandb_logger.experiment
wandb.Image?
import wandb
import io
from PIL import Image
import matplotlib.pyplot as plt
def fig_to_pil(fig):
buf = io.BytesIO()
fig.savefig(buf, format='png')
buf.seek(0)
im = Image.open(buf)
# buf.close()
return im
wandb_logger.experiment.log({'s': [wandb.Image(fig_to_pil(fig))]} )
wandb_logger.experiment.log
import io
from PIL import Image
import matplotlib.pyplot as plt
buf = io.BytesIO()
fig.savefig(buf, format='png')
buf.seek(0)
im = Image.open(buf)
im.show()
buf.close()
str(buf)
x.to_networkx_list()[0].nodes(data=True)
def comparison_plot(out_g, expected_g):
fig, axes = plt.subplots(1, 2, figsize=(5, 2.5))
axes[0].axis('off')
axes[1].axis('off')
axes[0].set_title("out")
plot_graph(out_g, axes[0], cm.plasma)
axes[1].set_title("expected")
plot_graph(expected_g, axes[1], cm.plasma)
x, y = data.eval_loader().first()
y_hat = training_module.model.forward(x, 10)[-1]
y_graphs = y.to_networkx_list()
y_hat_graphs = y_hat.to_networkx_list()
idx = 0
yg = y_graphs[idx]
yhg = y_hat_graphs[idx]
comparison_plot(yhg, yg)
g = random_graph((100, 150), d=(0.01, 0.03), e=None)
annotate_shortest_path(g)
# nx.draw(g)
pos = nx.layout.spring_layout(g)
nodelist = list(g.nodes)
node_color = []
for n in nodelist:
node_color.append(g.nodes[n]['target'][0])
edge_list = []
edge_color = []
for n1, n2, edata in g.edges(data=True):
edge_list.append((n1, n2))
edge_color.append(edata['target'][0])
print(node_color)
nx.draw_networkx_edges(g, pos=pos, width=0.5, edge_color=edge_color)
nx.draw_networkx_nodes(g, pos=pos, node_color=node_color, node_size=10)
NetworkxAttachNumpyBool?
g.nodes(data=True)
from caldera.transforms.networkx import NetworkxApplyToFeature
NetworkxApplyToFeature('features', edge_func= lambda x: list(x))(g)
import time
from rich.progress import Progress as RichProgress
from contextlib import contextmanager
from dataclasses import dataclass
@dataclass
class TaskEvent:
task_id: int
name: str
class TaskProgress(object):
DEFAULT_REFRESH_PER_SECOND = 4.
def __init__(self,
progress = None,
task_id: int = None,
refresh_rate_per_second: int = DEFAULT_REFRESH_PER_SECOND,
parent = None):
self.task_id = task_id
self.children = []
self.parent = parent
self.progress = progress or RichProgress()
self.last_updated = time.time()
self.refresh_rate_per_second = refresh_rate_per_second
def self_task(self, *args, **kwargs):
task_id = self.progress.add_task(*args, **kwargs)
self.task_id = task_id
def add_task(self, *args, **kwargs):
task_id = self.progress.add_task(*args, **kwargs)
new_task = self.__class__(self.progress, task_id, self.refresh_rate_per_second, parent=self)
self.children.append(new_task)
return new_task
@property
def _task(self):
return self.progress.tasks[self.task_id]
def listen(self, event: TaskEvent):
if event.name == 'refresh':
completed = sum(t._task.completed for t in self.children)
total = sum(t._task.total for t in self.children)
self.update(completed=completed/total, total=1., refresh=True)
elif event.name == 'finished':
self.finish()
def emit_up(self, event_name):
if self.parent:
self.parent.listen(TaskEvent(task_id=self.task_id, name=event_name))
def emit_down(self, event_name: TaskEvent):
for child in self.children:
print("sending to child")
child.listen(TaskEvent(task_id=self.task_id, name=event_name))
def update(self, *args, **kwargs):
now = time.time()
if 'refresh' not in kwargs:
if now - self.last_updated > 1. / self.refresh_rate_per_second:
kwargs['refresh'] = True
else:
kwargs['refresh'] = False
if kwargs['refresh']:
self.emit_up('refresh')
self.last_updated = now
self.progress.update(self.task_id, *args, **kwargs)
def is_complete(self):
return self._task.completed >= self._task.total  # compare against the underlying rich task
def finish(self):
self.progress.update(self.task_id, completed=self._task.total, refresh=True)
self.emit_down('finished')
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.progress.__exit__(exc_type, exc_val, exc_tb)
self.finish()
with TaskProgress() as progress:
progress.self_task('main', total=10)
bar1 = progress.add_task('bar1', total=10)
bar2 = progress.add_task('bar2', total=10)
for _ in range(10):
bar1.update(advance=1)
time.sleep(0.1)
for _ in range(10):
bar2.update(advance=1)
time.sleep(0.1)
bar.progress.tasks[0].completed
import torch
target = torch.ones([1, 64], dtype=torch.float32) # 64 classes, batch size = 10
output = torch.full([1, 64], 1.5) # A prediction (logit)
print(target)
print(output)
# pos_weight = torch.ones([64]) # All weights are equal to 1
criterion = torch.nn.BCEWithLogitsLoss()
criterion(output, target) # -log(sigmoid(1.5))
from caldera.data import GraphBatch
batch = GraphBatch.random_batch(2, 5, 4, 3)
graphs = batch.to_networkx_list()
import networkx as nx
nx.draw(graphs[0])
expected = torch.randn(batch.x.shape)
x = batch.x
x = torch.nn.Softmax()(x)
print(x.sum(axis=1))
x, expected
x = torch.nn.BCELoss()(x, expected)
x
import torch
x = torch.randn(10, 10)
torch.stack([x, x]).shape
```
| github_jupyter |
# About this Notebook
In this notebook, we provide a tensor factorization implementation using iterative Alternating Least Squares (ALS), which is a good starting point for understanding tensor factorization.
```
import numpy as np
from numpy.linalg import inv as inv
```
# Part 1: Matrix Computation Concepts
## 1) Kronecker product
- **Definition**:
Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, the **Kronecker product** between these two matrices is defined as
$$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$
where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).
- **Example**:
If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then, we have
$$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$
$$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$
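For a quick numerical check, NumPy's built-in `np.kron` reproduces the example above:

```
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])
print(np.kron(A, B))        # the 4 x 6 matrix shown above
print(np.kron(A, B).shape)  # (4, 6)
```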
## 2) Khatri-Rao product (`kr_prod`)
- **Definition**:
Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with the same number of columns, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows,
$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r},$$
where the symbol $\odot$ denotes Khatri-Rao product, and $\otimes$ denotes Kronecker product.
- **Example**:
If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then, we have
$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$
$$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$
$$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
```
def kr_prod(a, b):
return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
```
## 3) CP decomposition
### CP Combination (`cp_combination`)
- **Definition**:
The CP decomposition factorizes a tensor into a sum of outer products of vectors. For example, for a third-order tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, the CP decomposition can be written as
$$\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s},$$
or element-wise,
$$\hat{y}_{ijt}=\sum_{s=1}^{r}u_{is}v_{js}x_{ts},\forall (i,j,t),$$
where vectors $\boldsymbol{u}_{s}\in\mathbb{R}^{m},\boldsymbol{v}_{s}\in\mathbb{R}^{n},\boldsymbol{x}_{s}\in\mathbb{R}^{f}$ are columns of factor matrices $U\in\mathbb{R}^{m\times r},V\in\mathbb{R}^{n\times r},X\in\mathbb{R}^{f\times r}$, respectively. The symbol $\circ$ denotes vector outer product.
- **Example**:
Given matrices $U=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]\in\mathbb{R}^{2\times 2}$, $V=\left[ \begin{array}{cc} 1 & 3 \\ 2 & 4 \\ 5 & 6 \\ \end{array} \right]\in\mathbb{R}^{3\times 2}$ and $X=\left[ \begin{array}{cc} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \\ \end{array} \right]\in\mathbb{R}^{4\times 2}$, and $\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s}$, then we have
$$\hat{Y}_1=\hat{\mathcal{Y}}(:,:,1)=\left[ \begin{array}{ccc} 31 & 42 & 65 \\ 63 & 86 & 135 \\ \end{array} \right],$$
$$\hat{Y}_2=\hat{\mathcal{Y}}(:,:,2)=\left[ \begin{array}{ccc} 38 & 52 & 82 \\ 78 & 108 & 174 \\ \end{array} \right],$$
$$\hat{Y}_3=\hat{\mathcal{Y}}(:,:,3)=\left[ \begin{array}{ccc} 45 & 62 & 99 \\ 93 & 130 & 213 \\ \end{array} \right],$$
$$\hat{Y}_4=\hat{\mathcal{Y}}(:,:,4)=\left[ \begin{array}{ccc} 52 & 72 & 116 \\ 108 & 152 & 252 \\ \end{array} \right].$$
```
def cp_combine(U, V, X):
return np.einsum('is, js, ts -> ijt', U, V, X)
U = np.array([[1, 2], [3, 4]])
V = np.array([[1, 3], [2, 4], [5, 6]])
X = np.array([[1, 5], [2, 6], [3, 7], [4, 8]])
print(cp_combine(U, V, X))
print()
print('tensor size:')
print(cp_combine(U, V, X).shape)
```
## 4) Tensor Unfolding (`ten2mat`)
Using numpy reshape to perform 3rd rank tensor unfold operation. [[**link**](https://stackoverflow.com/questions/49970141/using-numpy-reshape-to-perform-3rd-rank-tensor-unfold-operation)]
```
def ten2mat(tensor, mode):
return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')
X = np.array([[[1, 2, 3, 4], [3, 4, 5, 6]],
[[5, 6, 7, 8], [7, 8, 9, 10]],
[[9, 10, 11, 12], [11, 12, 13, 14]]])
print('tensor size:')
print(X.shape)
print('original tensor:')
print(X)
print()
print('(1) mode-1 tensor unfolding:')
print(ten2mat(X, 0))
print()
print('(2) mode-2 tensor unfolding:')
print(ten2mat(X, 1))
print()
print('(3) mode-3 tensor unfolding:')
print(ten2mat(X, 2))
```
# Part 2: Tensor CP Factorization using ALS (TF-ALS)
Viewing CP factorization as a machine learning problem, we can learn the factor matrices by minimizing the loss function over them, that is,
$$\min _{U, V, X} \sum_{(i, j, t) \in \Omega}\left(y_{i j t}-\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}\right)^{2}.$$
Within this optimization problem, the multiplication among the three factor matrices (acting as parameters) makes the problem difficult to solve directly. Instead, we apply the ALS algorithm for CP factorization.
In particular, the optimization problem for each row $\boldsymbol{u}_{i}\in\mathbb{R}^{R},\forall i\in\left\{1,2,...,M\right\}$ of factor matrix $U\in\mathbb{R}^{M\times R}$ is given by
$$\min _{\boldsymbol{u}_{i}} \sum_{j,t:(i, j, t) \in \Omega}\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]^\top.$$
The least-squares solution to this optimization is
$$\boldsymbol{u}_{i}\Leftarrow\left(\sum_{j,t:(i,j,t)\in\Omega}\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)^\top\right)^{-1}\left(\sum_{j,t:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right),\forall i\in\left\{1,2,...,M\right\}.$$
The alternating least squares for $V\in\mathbb{R}^{N\times R}$ and $X\in\mathbb{R}^{T\times R}$ are
$$\boldsymbol{v}_{j}\Leftarrow\left(\sum_{i,t:(i,j,t)\in\Omega}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,t:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\right),\forall j\in\left\{1,2,...,N\right\},$$
$$\boldsymbol{x}_{t}\Leftarrow\left(\sum_{i,j:(i,j,t)\in\Omega}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,j:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\right),\forall t\in\left\{1,2,...,T\right\}.$$
```
def CP_ALS(sparse_tensor, rank, maxiter):
dim1, dim2, dim3 = sparse_tensor.shape
dim = np.array([dim1, dim2, dim3])
# randomly initialize the three factor matrices
U = 0.1 * np.random.rand(dim1, rank)
V = 0.1 * np.random.rand(dim2, rank)
X = 0.1 * np.random.rand(dim3, rank)
# binary_tensor marks the observed (non-zero) entries of the sparse tensor
pos = np.where(sparse_tensor != 0)
binary_tensor = np.zeros((dim1, dim2, dim3))
binary_tensor[pos] = 1
tensor_hat = np.zeros((dim1, dim2, dim3))
for iters in range(maxiter):
for order in range(dim.shape[0]):
if order == 0:
var1 = kr_prod(X, V).T
elif order == 1:
var1 = kr_prod(X, U).T
else:
var1 = kr_prod(V, U).T
var2 = kr_prod(var1, var1)
var3 = np.matmul(var2, ten2mat(binary_tensor, order).T).reshape([rank, rank, dim[order]])
var4 = np.matmul(var1, ten2mat(sparse_tensor, order).T)
for i in range(dim[order]):
var_Lambda = var3[ :, :, i]
inv_var_Lambda = inv((var_Lambda + var_Lambda.T)/2 + 10e-12 * np.eye(rank))
vec = np.matmul(inv_var_Lambda, var4[:, i])
if order == 0:
U[i, :] = vec.copy()
elif order == 1:
V[i, :] = vec.copy()
else:
X[i, :] = vec.copy()
tensor_hat = cp_combine(U, V, X)
mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0]
rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0])
if (iters + 1) % 100 == 0:
print('Iter: {}'.format(iters + 1))
print('Training MAPE: {:.6}'.format(mape))
print('Training RMSE: {:.6}'.format(rmse))
print()
return tensor_hat, U, V, X
```
# Part 3: Data Organization
## 1) Matrix Structure
We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatio-temporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),
$$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$
## 2) Tensor Structure
We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predefined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),
$$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$
therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$.
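As a minimal sketch of this reshaping (using random numbers in place of real data; the shapes follow the Seattle experiments later in this notebook), going from the matrix form to the tensor form is a one-liner:

```
import numpy as np

m, n, f = 4, 28, 288             # e.g., 4 locations, 28 days, 288 intervals per day
mat = np.random.rand(m, n * f)   # matrix form: one long row per location
tensor = mat.reshape([m, n, f])  # tensor form: location x day x time-of-day
print(tensor.shape)              # (4, 28, 288)
```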
**How to transform a data set into something we can use for time series imputation?**
# Part 4: Experiments on Guangzhou Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
# for i2 in range(dense_tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
```
**Question**: Given only the partially observed data $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, how can we impute the unknown missing values?
The main influential factors for such imputation model are:
- `rank`.
- `maxiter`.
```
import time
start = time.time()
rank = 80
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 80 | 1000 | **0.0833** | **3.5928**|
|**40%, RM**| 80 | 1000 | **0.0837** | **3.6190**|
|**20%, NM**| 10 | 1000 | **0.1027** | **4.2960**|
|**40%, NM**| 10 | 1000 | **0.1028** | **4.3274**|
# Part 5: Experiments on Birmingham Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
missing_rate = 0.3
# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
# for i2 in range(dense_tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 30
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|-----------:|
|**10%, RM**| 30 | 1000 | **0.0615** | **18.5005**|
|**30%, RM**| 30 | 1000 | **0.0583** | **18.9148**|
|**10%, NM**| 10 | 1000 | **0.1447** | **41.6710**|
|**30%, NM**| 10 | 1000 | **0.1765** | **63.8465**|
# Part 6: Experiments on Hangzhou Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
# for i2 in range(dense_tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 50 | 1000 | **0.1991** |**111.303**|
|**40%, RM**| 50 | 1000 | **0.2098** |**100.315**|
|**20%, NM**| 5 | 1000 | **0.2837** |**42.6136**|
|**40%, NM**| 5 | 1000 | **0.2811** |**38.4201**|
# Part 7: Experiments on New York Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.1
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
# binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[1]):
for i3 in range(61):
binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 30
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**10%, RM**| 30 | 1000 | **0.5262** | **6.2444**|
|**30%, RM**| 30 | 1000 | **0.5488** | **6.8968**|
|**10%, NM**| 30 | 1000 | **0.5170** | **5.9863**|
|**30%, NM**| 30 | 100 | **-** | **-**|
# Part 8: Experiments on Seattle Data Set
```
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(RM_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(RM_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 10
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 10
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 50 | 1000 | **0.0742** |**4.4929**|
|**40%, RM**| 50 | 1000 | **0.0758** |**4.5574**|
|**20%, NM**| 10 | 1000 | **0.0995** |**5.6331**|
|**40%, NM**| 10 | 1000 | **0.1004** |**5.7034**|
| github_jupyter |
Let's look at:
- Number of labels per image (histogram)
- Quality score per image for images with multiple labels (sigmoid?)
```
import csv
from itertools import islice
from collections import defaultdict
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torchvision
import numpy as np
CSV_PATH = 'wgangp_data.csv'
realness = {}
# real_votes = defaultdict(int)
# fake_votes = defaultdict(int)
total_votes = defaultdict(int)
correct_votes = defaultdict(int)
with open(CSV_PATH) as f:
dictreader = csv.DictReader(f)
for line in dictreader:
img_name = line['img']
assert(line['realness'] in ('True', 'False'))
assert(line['correctness'] in ('True', 'False'))
realness[img_name] = line['realness'] == 'True'
if line['correctness'] == 'True':
correct_votes[img_name] += 1
total_votes[img_name] += 1
pdx = pd.read_csv(CSV_PATH)
pdx
pdx[pdx.groupby('img').count() > 50]
pdx
#df.img
# print(df.columns)
# print(df['img'])
# How much of the time do people guess "fake"? Slightly more than half!
pdx[pdx.correctness != pdx.realness].count()/pdx.count()
# How much of the time do people guess right? 94.4%
pdx[pdx.correctness].count()/pdx.count()
#90.3% of the time, real images are correctly labeled as real
pdx[pdx.realness][pdx.correctness].count()/pdx[pdx.realness].count()
#98.5% of the time, fake images are correctly labeled as fake
pdx[~pdx.realness][pdx.correctness].count()/pdx[~pdx.realness].count()
len(total_votes.values())
img_dict = {img: [realness[img], correct_votes[img], total_votes[img], correct_votes[img]/total_votes[img]] for img in realness }
# print(img_dict.keys())
#img_dict['celeba500/005077_crop.jpg']
plt.hist([v[3] for k,v in img_dict.items() if 'celeb' in k])
def getVotesDict(img_dict):
votes_dict = defaultdict(int)
for img in total_votes:
votes_dict[img_dict[img][2]] += 1
return votes_dict
votes_dict = getVotesDict(img_dict)
for i in sorted(votes_dict.keys()):
print(i, votes_dict[i])
selected_img_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] > 10}
less_than_50_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] < 10}
imgs_over_50 = list(selected_img_dict.keys())
# print(len(selected_img_dict))
# print(imgs_over_50)
pdx_50 = pdx[pdx.img.apply(lambda x: x in imgs_over_50)]
len(pdx_50)
pdx_under_50 = pdx[pdx.img.apply(lambda x: x not in imgs_over_50)]
len(pdx_under_50)
len(pdx_under_50[pdx_under_50.img.apply(lambda x: 'wgan' not in x)])
correctness = sorted([value[3] for key, value in selected_img_dict.items()])
print(correctness)
plt.hist(correctness)
plt.show()
correctness = sorted([value[3] for key, value in less_than_50_dict.items()])
# print(correctness)
plt.hist(correctness)
plt.show()
ct = []
# selected_img = [img in total_votes.keys() if total_votes[img] > 1 ]
discriminator = torch.load('discriminator.pt', map_location='cpu')
# torch.load_state_dict('discriminator.pt')
discriminator(torch.zeros(64,64,3))
```
| github_jupyter |
# Naive Bayes Classifier
Predicting the positivity/negativity of movie reviews using the Naive Bayes algorithm
## 1. Import Dataset
Labels:
* 0 : Negative review
* 1 : Positive review
```
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
reviews = pd.read_csv('ratings_train.txt', delimiter='\t')
reviews.head(10)
# divide between negative and positive reviews that are at least 30 characters long
neg = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 0)].sample(3000, random_state=43)
pos = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 1)].sample(3000, random_state=43)
pos.head()
#NLP method
import re
import konlpy
from konlpy.tag import Twitter
okt = Twitter()
def parse(s):
s = re.sub(r'[?$.!,-_\'\"(){}~]+', '', s)
try:
return okt.nouns(s)
except:
return []
#okt.morphs is another option
neg['parsed_doc'] = neg.document.apply(parse)
pos['parsed_doc'] = pos.document.apply(parse)
neg.head()
pos.head()
# create 5800 training data / 200 test data
neg_train = neg[:2900]
pos_train = pos[:2900]
neg_test = neg[2900:]
pos_test = pos[2900:]
```
## 2. Create Corpus
```
neg_corpus = set(neg_train.parsed_doc.sum())
pos_corpus = set(pos_train.parsed_doc.sum())
corpus = list((neg_corpus).union(pos_corpus))
print('corpus length : ', len(corpus))
corpus[:10]
```
## 3. Create Bag of Words
```
from collections import OrderedDict
neg_bow_vecs = []
for _, doc in neg.parsed_doc.items():
bow_vecs = OrderedDict()
for w in corpus:
if w in doc:
bow_vecs[w] = 1
else:
bow_vecs[w] = 0
neg_bow_vecs.append(bow_vecs)
pos_bow_vecs = []
for _, doc in pos.parsed_doc.items():
bow_vecs = OrderedDict()
for w in corpus:
if w in doc:
bow_vecs[w] = 1
else:
bow_vecs[w] = 0
pos_bow_vecs.append(bow_vecs)
#bag of word vector example
#this length is equal to the length of the corpus
neg_bow_vecs[0].values()
```
## 4. Model Training
$n$ is the dimension of each document, in other words, the length of corpus <br>
$$\large p(pos|doc) = \Large \frac{p(doc|pos) \cdot p(pos)}{p(doc)}$$
<br>
$$\large p(neg|doc) = \Large \frac{p(doc|neg) \cdot p(neg)}{p(doc)}$$
<br><br>
**Likelihood functions:** <br><br>
$p(word_{i}|pos) = \large \frac{\text{the number of positive documents that contain the word}}{\text{the number of positive documents}}$
$p(word_{i}|neg) = \large \frac{\text{the number of negative documents that contain the word}}{\text{the number of negative documents}}$
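Since $p(doc)$ is identical in both posteriors, it cancels when comparing them, so the classifier only needs to compare

$$\log p(pos) + \log p(doc|pos) \quad \text{vs.} \quad \log p(neg) + \log p(doc|neg).$$

Working in log space avoids numerical underflow from multiplying many small probabilities; in the code below, $\log p(doc|\cdot)$ is accumulated as a sum over every corpus word, using a presence term when the word occurs in the document and an absence term otherwise.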
```
import numpy as np
corpus[:5]
list(neg_train.parsed_doc.items())[0]
#this counts how many times a word in corpus appeares in neg_train data
neg_words_likelihood_cnts = {}
for w in corpus:
cnt = 0
for _, doc in neg_train.parsed_doc.items():
if w in doc:
cnt += 1
neg_words_likelihood_cnts[w] = cnt
#this counts how many times a word in corpus appeares in pos_train data : p(neg)
pos_words_likelihood_cnts = {}
for w in corpus:
cnt = 0
for _, doc in pos_train.parsed_doc.items():
if w in doc:
cnt += 1
pos_words_likelihood_cnts[w] = cnt
import operator
sorted(neg_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]
sorted(pos_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]
```
## 5. Classifier
* We represent each document in terms of a bag of words. If the size of the corpus is $n$, this means that each document's bag-of-words vector is $n$-dimensional
* When a word has not appeared in a class's training documents, we use **Laplacian smoothing** (see the smoothed estimate written out after this list)
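Concretely, with $N_{pos}$ positive training documents and a corpus of size $|V|$, the smoothed likelihood used in the code below is

$$p(word_{i}|pos) \approx \frac{(\text{number of positive documents containing } word_{i}) + 1}{N_{pos} + |V|},$$

and analogously for the negative class, so that a word never seen in a class does not force the posterior to zero.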
```
test_data = pd.concat([neg_test, pos_test], axis=0)
test_data.head()
def predict(doc):
pos_prior, neg_prior = 1/2, 1/2 #because we have equal number of pos and neg training documents
# Posterior of pos
pos_prob = np.log(1)
for word in corpus:
if word in doc:
# the word is in the current document and has appeared in pos documents
if word in pos_words_likelihood_cnts:
pos_prob += np.log((pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))
else:
# the word is in the current document, but has never appeared in pos documents : Laplacian
pos_prob += np.log(1 / (len(pos_train) + len(corpus)))
else:
# the word is not in the current document, but has appeared in pos documents
# we can find the possibility that the word is not in pos
if word in pos_words_likelihood_cnts:
pos_prob += \
np.log((len(pos_train) - pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))
else:
# the word is not in the current document, and has never appeared in pos documents : Laplacian
pos_prob += np.log((len(pos_train) + 1) / (len(pos_train) + len(corpus)))
pos_prob += np.log(pos_prior)
# Posterior of neg
neg_prob = np.log(1)  # start from log(1) = 0, consistent with pos_prob above
for word in corpus:
if word in doc:
# the word is in the current document and has appeared in neg documents
if word in neg_words_likelihood_cnts:
neg_prob += np.log((neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))
else:
# the word is in the current document, but has never appeared in neg documents : Laplacian
neg_prob += np.log(1 / (len(neg_train) + len(corpus)))
else:
# the word is not in the current document, but has appeared in neg documents (we can find the probability that the word is absent from neg)
if word in neg_words_likelihood_cnts:
neg_prob += \
np.log((len(neg_train) - neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))
else:
# the word is not in the current document, and has never appeared in neg documents : Laplacian
neg_prob += np.log((len(neg_train) + 1) / (len(neg_train) + len(corpus)))
neg_prob += np.log(neg_prior)
if pos_prob >= neg_prob:
return 1
else:
return 0
test_data['pred'] = test_data.parsed_doc.apply(predict)
test_data.head()
test_data.shape
sum(test_data.label ^ test_data.pred)
```
There are a total of 200 test documents, and for only 46 of them does the predicted label differ from the true label
```
1 - sum(test_data.label ^ test_data.pred) / len(test_data)
```
We obtain an accuracy of about 77%, which is reasonably high for this simple model
| github_jupyter |
# Auditing a dataframe
In this notebook, we shall demonstrate how to use `privacypanda` to _audit_ the privacy of your data. `privacypanda` provides a simple function which prints the names of any columns which break privacy. Currently, these are:
- Addresses
- E.g. "10 Downing Street"; "221b Baker St"; "EC2R 8AH"
- Phonenumbers (UK mobile)
- E.g. "+447123456789"
- Email addresses
- Ending in ".com", ".co.uk", ".org", ".edu" (to be expanded soon)
```
%load_ext watermark
%watermark -n -p pandas,privacypanda -g
import pandas as pd
import privacypanda as pp
```
---
## Firstly, we need data
```
data = pd.DataFrame(
{
"user ID": [
1665,
1,
5287,
42,
],
"User email": [
"xxxxxxxxxxxxx",
"xxxxxxxx",
"I'm not giving you that",
"[email protected]",
],
"User address": [
"AB1 1AB",
"",
"XXX XXX",
"EC2R 8AH",
],
"Likes raclette": [
1,
0,
1,
1,
],
}
)
```
You will notice two things about this dataframe:
1. _Some_ of the data has already been anonymized, for example by replacing characters with "x"s. However, the person who collected this data has not been fastidious with its cleaning as there is still some raw, potentially problematic private information. As the dataset grows, it becomes easier to miss entries with private information
2. Not all columns expose private information: "Likes raclette" is pretty benign information (but be careful, lots of benign information can be combined to form a unique fingerprint identifying an individual - let's not worry about this at the moment, though), and "user ID" is already an anonymized labelling of an individual.
---
# Auditing the data's privacy
As a data scientist, we want a simple way to tell which columns, if any break privacy. More importantly, _how_ they break privacy determines how we deal with them. For example, emails will likely be superfluous information for analysis and can therefore be removed from the data, but age may be important and so we may wish instead to apply differential privacy to the dataset.
We can use `privacypanda`'s `report_privacy` function to see which data is problematic.
```
report = pp.report_privacy(data)
print(report)
```
`report_privacy` returns a `Report` object which stores the privacy issues of each column in the data.
As `privacypanda` is in active development,
this is currently only a simple dictionary of binary "breaks"/"doesn't break" privacy for each column.
We aim to make this information _cell-level_,
i.e. removing/replacing the information in individual cells in order to protect privacy with less information loss.
| github_jupyter |
```
import pandas as pd
import numpy as np
```
## read datafiles
- C-18 for language population
- C-13 for particular age-range population from a state
```
c18=pd.read_excel('datasets/C-18.xlsx',skiprows=6,header=None,engine='openpyxl')
c13=pd.read_excel('datasets/C-13.xls',skiprows=7,header=None)
```
### particular age groups are
- 5-9
- 10-14
- 15-19
- 20-24
- 25-29
- 30-49
- 50-69
- 70+
- Age not stated
## obtain useful data from C-13 and C-18 for age-groups
- first get the state names, used to identify specific states
- get the particular age groups from the C-18 file
- make a list of the relevant age-group rows/columns for each state
- then simply iterate through each state to get the relevant data and store it in a CSV file
- to get the total population of a particular age range, I have used the C-13 file
- to get the population of a state in a particular age range that speaks three or more languages, I have used the C-18 file
```
# STATE_NAMES=[list(np.unique(c18.iloc[:,2].values))]
STATE_NAMES=[]
for state in c18.iloc[:,2].values:
if not (state in STATE_NAMES):
STATE_NAMES.append(state)
AGE_GROUPS=list(c18.iloc[1:10,4].values)
# although it is a bit of manual work, it is worth the effort
AGE_GROUP_RANGES=[list(range(5,10)),list(range(10,15)),list(range(15,20)),list(range(20,25)),list(range(25,30)),list(range(30,50)),list(range(50,70)),list(range(70,100))+['100+'],['Age not stated']]
useful_data=[]
for i,state in enumerate(STATE_NAMES):
for j,age_grp in enumerate(AGE_GROUPS):
# this list is to get only the years in the particular age-group
true_false_list=[]
for single_year_age in c13.iloc[:,4].values:
if single_year_age in AGE_GROUP_RANGES[j]:
true_false_list.append(True)
else:
true_false_list.append(False)
# here i is the state code
male_pop=c13[(c13.loc[:,1]==i) & (true_false_list)].iloc[:,6].values.sum()
female_pop=c13[(c13.loc[:,1]==i) & (true_false_list)].iloc[:,7].values.sum()
# tri
tri_male=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,9]
tri_female=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,10]
#bi
bi_male=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,6] - tri_male
bi_female=c18[(c18.iloc[:,0]==i) & (c18.iloc[:,4]==age_grp) & (c18.iloc[:,3]=='Total')].iloc[0,7] - tri_female
#uni
uni_male=male_pop-bi_male-tri_male
uni_female=female_pop-bi_female-tri_female
item={
'state-code':i,
'state-name':state,
'age-group':age_grp,
'age-group-male-pop':male_pop,
'age-group-female-pop':female_pop,
'tri-male-ratio':tri_male/male_pop,
'tri-female-ratio':tri_female/female_pop,
'bi-male-ratio':bi_male/male_pop,
'bi-female-ratio':bi_female/female_pop,
'uni-male-ratio':uni_male/male_pop,
'uni-female-ratio':uni_female/female_pop
}
useful_data.append(item)
df=pd.DataFrame(useful_data)
```
## age-analysis
- get the highest-ratio age group for each state and store it in a CSV file
- the same process can be repeated for all parts of the question
```
tri_list=[]
bi_list=[]
uni_list=[]
for i in range(36):
male_values=df[df['state-code']==i].sort_values(by='tri-male-ratio',ascending=False).iloc[0,[2,5]].values
female_values=df[df['state-code']==i].sort_values(by='tri-male-ratio',ascending=False).iloc[0,[2,6]].values
tri_item={
'state/ut':i,
'age-group-males':male_values[0],
'ratio-males':male_values[1],
'age-group-females':female_values[0],
'ratio-females':female_values[1]
}
tri_list.append(tri_item)
male_values=df[df['state-code']==i].sort_values(by='bi-male-ratio',ascending=False).iloc[0,[2,7]].values
female_values=df[df['state-code']==i].sort_values(by='bi-male-ratio',ascending=False).iloc[0,[2,8]].values
bi_item={
'state/ut':i,
'age-group-males':male_values[0],
'ratio-males':male_values[1],
'age-group-females':female_values[0],
'ratio-females':female_values[1]
}
bi_list.append(bi_item)
male_values=df[df['state-code']==i].sort_values(by='uni-male-ratio',ascending=False).iloc[0,[2,9]].values
female_values=df[df['state-code']==i].sort_values(by='uni-male-ratio',ascending=False).iloc[0,[2,10]].values
uni_item={
'state/ut':i,
'age-group-males':male_values[0],
'ratio-males':male_values[1],
'age-group-females':female_values[0],
'ratio-females':female_values[1]
}
uni_list.append(uni_item)
```
- convert the results into pandas DataFrames and store them as CSVs
```
tri_df=pd.DataFrame(tri_list)
bi_df=pd.DataFrame(bi_list)
uni_df=pd.DataFrame(uni_list)
tri_df.to_csv('outputs/age-gender-a.csv',index=False)
bi_df.to_csv('outputs/age-gender-b.csv',index=False)
uni_df.to_csv('outputs/age-gender-c.csv',index=False)
```
## observations
- in almost all states (and in all cases) the highest-ratio male and female age groups are the same.
- interestingly, in the one-language case the '5-9' age group dominates for all states, which is also quite intuitive: at that early stage in life, children speak only their mother tongue
```
uni_df
```
| github_jupyter |
```
# Neo4J graph example
# author: Gressling, T
# license: MIT License # code: github.com/gressling/examples
# activity: single example # index: 25-2
# https://gist.github.com/korakot/328aaac51d78e589b4a176228e4bb06f
# download 3.5.8 or neo4j-enterprise-4.0.0-alpha09mr02-unix
!curl https://neo4j.com/artifact.php?name=neo4j-community-3.5.8-unix.tar.gz -v -o neo4j.tar.gz
# decompress and rename
!tar -xf neo4j.tar.gz # or --strip-components=1
!mv neo4j-community-3.5.8 nj
# disable password, and start server
!sed -i '/#dbms.security.auth_enabled/s/^#//g' nj/conf/neo4j.conf
!nj/bin/neo4j start
# from neo4j import GraphDatabase
# !pip install py2neo
from py2neo import Graph
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))
graph.delete_all()
# define the entities of the graph (nodes)
from py2neo import Node
laboratory = Node("Laboratory", name="Laboratory 1")
lab1 = Node("Person", name="Peter", employee_ID=2)
lab2 = Node("Person", name="Susan", employee_ID=4)
sample1 = Node("Sample", name="A-12213", weight=45.7)
sample2 = Node("Sample", name="B-33443", weight=48.0)
# shared sample between two experiments
sample3 = Node("Sample", name="AB-33443", weight=24.3)
experiment1 = Node("Experiment", name="Screening-45")
experiment2 = Node("Experiment", name="Screening/w/Sol")
graph.create(laboratory | lab1 | lab2 | sample1 | sample2 | experiment1 |
experiment2)
# Define the relationships of the graph (edges)
from py2neo import Relationship
graph.create(Relationship(lab1, "works in", laboratory))
graph.create(Relationship(lab2, "works in", laboratory))
graph.create(Relationship(lab1, "performs", sample1))
graph.create(Relationship(lab2, "performs", sample2))
graph.create(Relationship(lab2, "performs", sample3))
graph.create(Relationship(sample1, "partof", experiment1))
graph.create(Relationship(sample2, "partof", experiment2))
graph.create(Relationship(sample3, "partof", experiment2))
graph.create(Relationship(sample3, "partof", experiment1))
import neo4jupyter
neo4jupyter.init_notebook_mode()
neo4jupyter.draw(graph)
```
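Beyond drawing the graph, the stored relationships can also be queried with Cypher. A small follow-up example (a sketch assuming the nodes and relationships created above; the `person`/`laboratory` aliases are just illustrative):

```
# Which person works in which laboratory?
query = """
MATCH (p:Person)-[:`works in`]->(l:Laboratory)
RETURN p.name AS person, l.name AS laboratory
"""
print(graph.run(query).data())
```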
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# tf.data: Build TensorFlow input pipelines
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The `tf.data` API enables you to build complex input pipelines from simple,
reusable pieces. For example, the pipeline for an image model might aggregate
data from files in a distributed file system, apply random perturbations to each
image, and merge randomly selected images into a batch for training. The
pipeline for a text model might involve extracting symbols from raw text data,
converting them to embedding identifiers with a lookup table, and batching
together sequences of different lengths. The `tf.data` API makes it possible to
handle large amounts of data, read from different data formats, and perform
complex transformations.
The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a
sequence of elements, in which each element consists of one or more components.
For example, in an image pipeline, an element might be a single training
example, with a pair of tensor components representing the image and its label.
There are two distinct ways to create a dataset:
* A data **source** constructs a `Dataset` from data stored in memory or in
one or more files.
* A data **transformation** constructs a dataset from one or more
`tf.data.Dataset` objects.
```
import tensorflow as tf
import pathlib
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
np.set_printoptions(precision=4)
```
## Basic mechanics
<a id="basic-mechanics"/>
To create an input pipeline, you must start with a data *source*. For example,
to construct a `Dataset` from data in memory, you can use
`tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`.
Alternatively, if your input data is stored in a file in the recommended
TFRecord format, you can use `tf.data.TFRecordDataset()`.
Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by
chaining method calls on the `tf.data.Dataset` object. For example, you can
apply per-element transformations such as `Dataset.map()`, and multi-element
transformations such as `Dataset.batch()`. See the documentation for
`tf.data.Dataset` for a complete list of transformations.
The `Dataset` object is a Python iterable. This makes it possible to consume its
elements using a for loop:
```
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
dataset
for elem in dataset:
print(elem.numpy())
```
Or by explicitly creating a Python iterator using `iter` and consuming its
elements using `next`:
```
it = iter(dataset)
print(next(it).numpy())
```
Alternatively, dataset elements can be consumed using the `reduce`
transformation, which reduces all elements to produce a single result. The
following example illustrates how to use the `reduce` transformation to compute
the sum of a dataset of integers.
```
print(dataset.reduce(0, lambda state, value: state + value).numpy())
```
<!-- TODO(jsimsa): Talk about `tf.function` support. -->
<a id="dataset_structure"></a>
### Dataset structure
A dataset contains elements that each have the same (nested) structure and the
individual components of the structure can be of any type representable by
`tf.TypeSpec`, including `tf.Tensor`, `tf.sparse.SparseTensor`, `tf.RaggedTensor`,
`tf.TensorArray`, or `tf.data.Dataset`.
The `Dataset.element_spec` property allows you to inspect the type of each
element component. The property returns a *nested structure* of `tf.TypeSpec`
objects, matching the structure of the element, which may be a single component,
a tuple of components, or a nested tuple of components. For example:
```
dataset1 = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10]))
dataset1.element_spec
dataset2 = tf.data.Dataset.from_tensor_slices(
(tf.random.uniform([4]),
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))
dataset2.element_spec
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
dataset3.element_spec
# Dataset containing a sparse tensor.
dataset4 = tf.data.Dataset.from_tensors(tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]))
dataset4.element_spec
# Use value_type to see the type of value represented by the element spec
dataset4.element_spec.value_type
```
The `Dataset` transformations support datasets of any structure. When using the
`Dataset.map()`, and `Dataset.filter()` transformations,
which apply a function to each element, the element structure determines the
arguments of the function:
```
dataset1 = tf.data.Dataset.from_tensor_slices(
tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))
dataset1
for z in dataset1:
print(z.numpy())
dataset2 = tf.data.Dataset.from_tensor_slices(
(tf.random.uniform([4]),
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)))
dataset2
dataset3 = tf.data.Dataset.zip((dataset1, dataset2))
dataset3
for a, (b,c) in dataset3:
print('shapes: {a.shape}, {b.shape}, {c.shape}'.format(a=a, b=b, c=c))
```
## Reading input data
### Consuming NumPy arrays
See [Loading NumPy arrays](../tutorials/load_data/numpy.ipynb) for more examples.
If all of your input data fits in memory, the simplest way to create a `Dataset`
from them is to convert them to `tf.Tensor` objects and use
`Dataset.from_tensor_slices()`.
```
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset
```
Note: The above code snippet will embed the `features` and `labels` arrays
in your TensorFlow graph as `tf.constant()` operations. This works well for a
small dataset, but wastes memory---because the contents of the array will be
copied multiple times---and can run into the 2GB limit for the `tf.GraphDef`
protocol buffer.
### Consuming Python generators
Another common data source that can easily be ingested as a `tf.data.Dataset` is the python generator.
Caution: While this is a convenient approach, it has limited portability and scalability. It must run in the same Python process that created the generator, and is still subject to the Python [GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock).
```
def count(stop):
i = 0
while i<stop:
yield i
i += 1
for n in count(5):
print(n)
```
The `Dataset.from_generator` constructor converts the python generator to a fully functional `tf.data.Dataset`.
The constructor takes a callable as input, not an iterator. This allows it to restart the generator when it reaches the end. It takes an optional `args` argument, which is passed as the callable's arguments.
The `output_types` argument is required because `tf.data` builds a `tf.Graph` internally, and graph edges require a `tf.dtype`.
```
ds_counter = tf.data.Dataset.from_generator(count, args=[25], output_types=tf.int32, output_shapes = (), )
for count_batch in ds_counter.repeat().batch(10).take(10):
print(count_batch.numpy())
```
The `output_shapes` argument is not *required* but is highly recommended, as many TensorFlow operations do not support tensors with unknown rank. If the length of a particular axis is unknown or variable, set it as `None` in the `output_shapes`.
It's also important to note that the `output_shapes` and `output_types` follow the same nesting rules as other dataset methods.
Here is an example generator that demonstrates both aspects: it returns tuples of arrays, where the second array is a vector of unknown length.
```
def gen_series():
i = 0
while True:
size = np.random.randint(0, 10)
yield i, np.random.normal(size=(size,))
i += 1
for i, series in gen_series():
print(i, ":", str(series))
if i > 5:
break
```
The first output is an `int32`, the second is a `float32`.
The first item is a scalar, shape `()`, and the second is a vector of unknown length, shape `(None,)`.
```
ds_series = tf.data.Dataset.from_generator(
gen_series,
output_types=(tf.int32, tf.float32),
output_shapes=((), (None,)))
ds_series
```
Now it can be used like a regular `tf.data.Dataset`. Note that when batching a dataset with a variable shape, you need to use `Dataset.padded_batch`.
```
ds_series_batch = ds_series.shuffle(20).padded_batch(10)
ids, sequence_batch = next(iter(ds_series_batch))
print(ids.numpy())
print()
print(sequence_batch.numpy())
```
For a more realistic example, try wrapping `preprocessing.image.ImageDataGenerator` as a `tf.data.Dataset`.
First download the data:
```
flowers = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
```
Create the `image.ImageDataGenerator`
```
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, rotation_range=20)
images, labels = next(img_gen.flow_from_directory(flowers))
print(images.dtype, images.shape)
print(labels.dtype, labels.shape)
ds = tf.data.Dataset.from_generator(
lambda: img_gen.flow_from_directory(flowers),
output_types=(tf.float32, tf.float32),
output_shapes=([32,256,256,3], [32,5])
)
ds.element_spec
for images, label in ds.take(1):
print('images.shape: ', images.shape)
print('labels.shape: ', labels.shape)
```
### Consuming TFRecord data
See [Loading TFRecords](../tutorials/load_data/tf_records.ipynb) for an end-to-end example.
The `tf.data` API supports a variety of file formats so that you can process
large datasets that do not fit in memory. For example, the TFRecord file format
is a simple record-oriented binary format that many TensorFlow applications use
for training data. The `tf.data.TFRecordDataset` class enables you to
stream over the contents of one or more TFRecord files as part of an input
pipeline.
Here is an example using the test file from the French Street Name Signs (FSNS) dataset.
```
# Download a single TFRecord test file to build a dataset from.
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
```
The `filenames` argument to the `TFRecordDataset` initializer can either be a
string, a list of strings, or a `tf.Tensor` of strings. Therefore if you have
two sets of files for training and validation purposes, you can create a factory
method that produces the dataset, taking filenames as an input argument:
```
dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])
dataset
```
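As a minimal sketch of the factory-method pattern mentioned above (the separate train/validation file lists are hypothetical; both reuse the single test file here):
```
def make_fsns_dataset(filenames):
  # Build a fresh TFRecord pipeline for whichever file list is passed in.
  return tf.data.TFRecordDataset(filenames=filenames)

train_dataset = make_fsns_dataset([fsns_test_file])       # stand-in for training files
validation_dataset = make_fsns_dataset([fsns_test_file])  # stand-in for validation files
```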
Many TensorFlow projects use serialized `tf.train.Example` records in their TFRecord files. These need to be decoded before they can be inspected:
```
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())
parsed.features.feature['image/text']
```
### Consuming text data
See [Loading Text](../tutorials/load_data/text.ipynb) for an end to end example.
Many datasets are distributed as one or more text files. The
`tf.data.TextLineDataset` provides an easy way to extract lines from one or more
text files. Given one or more filenames, a `TextLineDataset` will produce one
string-valued element per line of those files.
```
directory_url = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
file_names = ['cowper.txt', 'derby.txt', 'butler.txt']
file_paths = [
tf.keras.utils.get_file(file_name, directory_url + file_name)
for file_name in file_names
]
dataset = tf.data.TextLineDataset(file_paths)
```
Here are the first few lines of the first file:
```
for line in dataset.take(5):
print(line.numpy())
```
To alternate lines between files use `Dataset.interleave`. This makes it easier to shuffle files together. Here are the first, second and third lines from each translation:
```
files_ds = tf.data.Dataset.from_tensor_slices(file_paths)
lines_ds = files_ds.interleave(tf.data.TextLineDataset, cycle_length=3)
for i, line in enumerate(lines_ds.take(9)):
if i % 3 == 0:
print()
print(line.numpy())
```
By default, a `TextLineDataset` yields *every* line of each file, which may
not be desirable, for example, if the file starts with a header line, or contains comments. These lines can be removed using the `Dataset.skip()` or
`Dataset.filter()` transformations. Here, you skip the first line, then filter to
find only survivors.
```
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)
for line in titanic_lines.take(10):
print(line.numpy())
def survived(line):
return tf.not_equal(tf.strings.substr(line, 0, 1), "0")
survivors = titanic_lines.skip(1).filter(survived)
for line in survivors.take(10):
print(line.numpy())
```
### Consuming CSV data
See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples.
The CSV file format is a popular format for storing tabular data in plain text.
For example:
```
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
df = pd.read_csv(titanic_file)
df.head()
```
If your data fits in memory the same `Dataset.from_tensor_slices` method works on dictionaries, allowing this data to be easily imported:
```
titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df))
for feature_batch in titanic_slices.take(1):
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
```
A more scalable approach is to load from disk as necessary.
The `tf.data` module provides methods to extract records from one or more CSV files that comply with [RFC 4180](https://tools.ietf.org/html/rfc4180).
The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple.
```
titanic_batches = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=4,
label_name="survived")
for feature_batch, label_batch in titanic_batches.take(1):
print("'survived': {}".format(label_batch))
print("features:")
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
```
You can use the `select_columns` argument if you only need a subset of columns.
```
titanic_batches = tf.data.experimental.make_csv_dataset(
titanic_file, batch_size=4,
label_name="survived", select_columns=['class', 'fare', 'survived'])
for feature_batch, label_batch in titanic_batches.take(1):
print("'survived': {}".format(label_batch))
for key, value in feature_batch.items():
print(" {!r:20s}: {}".format(key, value))
```
There is also a lower-level `experimental.CsvDataset` class which provides finer grained control. It does not support column type inference. Instead you must specify the type of each column.
```
titanic_types = [tf.int32, tf.string, tf.float32, tf.int32, tf.int32, tf.float32, tf.string, tf.string, tf.string, tf.string]
dataset = tf.data.experimental.CsvDataset(titanic_file, titanic_types , header=True)
for line in dataset.take(10):
print([item.numpy() for item in line])
```
If some columns are empty, this low-level interface allows you to provide default values instead of column types.
```
%%writefile missing.csv
1,2,3,4
,2,3,4
1,,3,4
1,2,,4
1,2,3,
,,,
# Create a dataset that reads all of the records from the CSV file written
# above; each of its four columns may have missing values.
record_defaults = [999,999,999,999]
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults)
dataset = dataset.map(lambda *items: tf.stack(items))
dataset
for line in dataset:
print(line.numpy())
```
By default, a `CsvDataset` yields *every* column of *every* line of the file,
which may not be desirable, for example if the file starts with a header line
that should be ignored, or if some columns are not required in the input.
These lines and fields can be removed with the `header` and `select_cols`
arguments respectively.
```
# Create a dataset that reads records from the CSV file written above,
# keeping only the second and fourth columns (select_cols is zero-indexed).
record_defaults = [999, 999] # Only provide defaults for the selected columns
dataset = tf.data.experimental.CsvDataset("missing.csv", record_defaults, select_cols=[1, 3])
dataset = dataset.map(lambda *items: tf.stack(items))
dataset
for line in dataset:
print(line.numpy())
```
### Consuming sets of files
There are many datasets distributed as a set of files, where each file is an example.
```
flowers_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
flowers_root = pathlib.Path(flowers_root)
```
Note: these images are licensed CC-BY, see LICENSE.txt for details.
The root directory contains a directory for each class:
```
for item in flowers_root.glob("*"):
print(item.name)
```
The files in each class directory are examples:
```
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
for f in list_ds.take(5):
print(f.numpy())
```
Read the data using the `tf.io.read_file` function and extract the label from the path, returning `(image, label)` pairs:
```
def process_path(file_path):
label = tf.strings.split(file_path, os.sep)[-2]
return tf.io.read_file(file_path), label
labeled_ds = list_ds.map(process_path)
for image_raw, label_text in labeled_ds.take(1):
print(repr(image_raw.numpy()[:100]))
print()
print(label_text.numpy())
```
<!--
TODO(mrry): Add this section.
### Handling text data with unusual sizes
-->
## Batching dataset elements
### Simple batching
The simplest form of batching stacks `n` consecutive elements of a dataset into
a single element. The `Dataset.batch()` transformation does exactly this, with
the same constraints as the `tf.stack()` operator, applied to each component
of the elements: i.e. for each component *i*, all elements must have a tensor
of the exact same shape.
```
inc_dataset = tf.data.Dataset.range(100)
dec_dataset = tf.data.Dataset.range(0, -100, -1)
dataset = tf.data.Dataset.zip((inc_dataset, dec_dataset))
batched_dataset = dataset.batch(4)
for batch in batched_dataset.take(4):
print([arr.numpy() for arr in batch])
```
While `tf.data` tries to propagate shape information, the default settings of `Dataset.batch` result in an unknown batch size because the last batch may not be full. Note the `None`s in the shape:
```
batched_dataset
```
Use the `drop_remainder` argument to ignore that last batch, and get full shape propagation:
```
batched_dataset = dataset.batch(7, drop_remainder=True)
batched_dataset
```
### Batching tensors with padding
The above recipe works for tensors that all have the same size. However, many
models (e.g. sequence models) work with input data that can have varying size
(e.g. sequences of different lengths). To handle this case, the
`Dataset.padded_batch` transformation enables you to batch tensors of
different shape by specifying one or more dimensions in which they may be
padded.
```
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: tf.fill([tf.cast(x, tf.int32)], x))
dataset = dataset.padded_batch(4, padded_shapes=(None,))
for batch in dataset.take(2):
print(batch.numpy())
print()
```
The `Dataset.padded_batch` transformation allows you to set different padding
for each dimension of each component, and it may be variable-length (signified
by `None` in the example above) or constant-length. It is also possible to
override the padding value, which defaults to 0.
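As a small sketch (not in the original text), here is the same pipeline padded to a constant length of 10 with a custom padding value of -1; `Dataset.range` yields `int64` values, so the padding value must match that dtype:
```
dataset = tf.data.Dataset.range(100)
dataset = dataset.map(lambda x: tf.fill([tf.cast(x % 5, tf.int32)], x))
# Pad every element to length 10, using -1 instead of the default 0.
dataset = dataset.padded_batch(
    4, padded_shapes=(10,), padding_values=tf.constant(-1, dtype=tf.int64))
for batch in dataset.take(1):
  print(batch.numpy())
```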
<!--
TODO(mrry): Add this section.
### Dense ragged -> tf.SparseTensor
-->
## Training workflows
### Processing multiple epochs
The `tf.data` API offers two main ways to process multiple epochs of the same
data.
The simplest way to iterate over a dataset in multiple epochs is to use the
`Dataset.repeat()` transformation. First, create a dataset of titanic data:
```
titanic_file = tf.keras.utils.get_file("train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_lines = tf.data.TextLineDataset(titanic_file)
def plot_batch_sizes(ds):
batch_sizes = [batch.shape[0] for batch in ds]
plt.bar(range(len(batch_sizes)), batch_sizes)
plt.xlabel('Batch number')
plt.ylabel('Batch size')
```
Applying the `Dataset.repeat()` transformation with no arguments will repeat
the input indefinitely.
The `Dataset.repeat` transformation concatenates its
arguments without signaling the end of one epoch and the beginning of the next
epoch. Because of this a `Dataset.batch` applied after `Dataset.repeat` will yield batches that straddle epoch boundaries:
```
titanic_batches = titanic_lines.repeat(3).batch(128)
plot_batch_sizes(titanic_batches)
```
If you need clear epoch separation, put `Dataset.batch` before the repeat:
```
titanic_batches = titanic_lines.batch(128).repeat(3)
plot_batch_sizes(titanic_batches)
```
If you would like to perform a custom computation (e.g. to collect statistics) at the end of each epoch then it's simplest to restart the dataset iteration on each epoch:
```
epochs = 3
dataset = titanic_lines.batch(128)
for epoch in range(epochs):
for batch in dataset:
print(batch.shape)
print("End of epoch: ", epoch)
```
### Randomly shuffling input data
The `Dataset.shuffle()` transformation maintains a fixed-size
buffer and chooses the next element uniformly at random from that buffer.
Note: While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill. Consider using `Dataset.interleave` across files if this becomes a problem.
Add an index to the dataset so you can see the effect:
```
lines = tf.data.TextLineDataset(titanic_file)
counter = tf.data.experimental.Counter()
dataset = tf.data.Dataset.zip((counter, lines))
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(20)
dataset
```
Since the `buffer_size` is 100, and the batch size is 20, the first batch contains no elements with an index over 120.
```
n,line_batch = next(iter(dataset))
print(n.numpy())
```
As with `Dataset.batch`, the order relative to `Dataset.repeat` matters.
`Dataset.shuffle` doesn't signal the end of an epoch until the shuffle buffer is empty. So a shuffle placed before a repeat will show every element of one epoch before moving to the next:
```
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.shuffle(buffer_size=100).batch(10).repeat(2)
print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(60).take(5):
print(n.numpy())
shuffle_repeat = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.ylabel("Mean item ID")
plt.legend()
```
But a repeat before a shuffle mixes the epoch boundaries together:
```
dataset = tf.data.Dataset.zip((counter, lines))
shuffled = dataset.repeat(2).shuffle(buffer_size=100).batch(10)
print("Here are the item ID's near the epoch boundary:\n")
for n, line_batch in shuffled.skip(55).take(15):
print(n.numpy())
repeat_shuffle = [n.numpy().mean() for n, line_batch in shuffled]
plt.plot(shuffle_repeat, label="shuffle().repeat()")
plt.plot(repeat_shuffle, label="repeat().shuffle()")
plt.ylabel("Mean item ID")
plt.legend()
```
## Preprocessing data
The `Dataset.map(f)` transformation produces a new dataset by applying a given
function `f` to each element of the input dataset. It is based on the
[`map()`](https://en.wikipedia.org/wiki/Map_\(higher-order_function\)) function
that is commonly applied to lists (and other structures) in functional
programming languages. The function `f` takes the `tf.Tensor` objects that
represent a single element in the input, and returns the `tf.Tensor` objects
that will represent a single element in the new dataset. Its implementation uses
standard TensorFlow operations to transform one element into another.
This section covers common examples of how to use `Dataset.map()`.
### Decoding image data and resizing it
<!-- TODO(markdaoust): link to image augmentation when it exists -->
When training a neural network on real-world image data, it is often necessary
to convert images of different sizes to a common size, so that they may be
batched into a fixed size.
Rebuild the flower filenames dataset:
```
list_ds = tf.data.Dataset.list_files(str(flowers_root/'*/*'))
```
Write a function that manipulates the dataset elements.
```
# Reads an image from a file, decodes it into a dense tensor, and resizes it
# to a fixed shape.
def parse_image(filename):
parts = tf.strings.split(filename, os.sep)
label = parts[-2]
image = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [128, 128])
return image, label
```
Test that it works.
```
file_path = next(iter(list_ds))
image, label = parse_image(file_path)
def show(image, label):
plt.figure()
plt.imshow(image)
plt.title(label.numpy().decode('utf-8'))
plt.axis('off')
show(image, label)
```
Map it over the dataset.
```
images_ds = list_ds.map(parse_image)
for image, label in images_ds.take(2):
show(image, label)
```
### Applying arbitrary Python logic
For performance reasons, use TensorFlow operations for
preprocessing your data whenever possible. However, it is sometimes useful to
call external Python libraries when parsing your input data. You can use the `tf.py_function()` operation in a `Dataset.map()` transformation.
For example, if you want to apply a random rotation, the `tf.image` module only has `tf.image.rot90`, which is not very useful for image augmentation.
Note: `tensorflow_addons` has a TensorFlow compatible `rotate` in `tensorflow_addons.image.rotate`.
To demonstrate `tf.py_function`, try using the `scipy.ndimage.rotate` function instead:
```
import scipy.ndimage as ndimage
def random_rotate_image(image):
image = ndimage.rotate(image, np.random.uniform(-30, 30), reshape=False)
return image
image, label = next(iter(images_ds))
image = random_rotate_image(image)
show(image, label)
```
To use this function with `Dataset.map` the same caveats apply as with `Dataset.from_generator`: you need to describe the return shapes and types when you apply the function:
```
def tf_random_rotate_image(image, label):
im_shape = image.shape
[image,] = tf.py_function(random_rotate_image, [image], [tf.float32])
image.set_shape(im_shape)
return image, label
rot_ds = images_ds.map(tf_random_rotate_image)
for image, label in rot_ds.take(2):
show(image, label)
```
### Parsing `tf.Example` protocol buffer messages
Many input pipelines extract `tf.train.Example` protocol buffer messages from a
TFRecord format. Each `tf.train.Example` record contains one or more "features",
and the input pipeline typically converts these features into tensors.
```
fsns_test_file = tf.keras.utils.get_file("fsns.tfrec", "https://storage.googleapis.com/download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001")
dataset = tf.data.TFRecordDataset(filenames = [fsns_test_file])
dataset
```
You can work with `tf.train.Example` protos outside of a `tf.data.Dataset` to understand the data:
```
raw_example = next(iter(dataset))
parsed = tf.train.Example.FromString(raw_example.numpy())
feature = parsed.features.feature
raw_img = feature['image/encoded'].bytes_list.value[0]
img = tf.image.decode_png(raw_img)
plt.imshow(img)
plt.axis('off')
_ = plt.title(feature["image/text"].bytes_list.value[0])
raw_example = next(iter(dataset))
def tf_parse(eg):
example = tf.io.parse_example(
eg[tf.newaxis], {
'image/encoded': tf.io.FixedLenFeature(shape=(), dtype=tf.string),
'image/text': tf.io.FixedLenFeature(shape=(), dtype=tf.string)
})
return example['image/encoded'][0], example['image/text'][0]
img, txt = tf_parse(raw_example)
print(txt.numpy())
print(repr(img.numpy()[:20]), "...")
decoded = dataset.map(tf_parse)
decoded
image_batch, text_batch = next(iter(decoded.batch(10)))
image_batch.shape
```
<a id="time_series_windowing"></a>
### Time series windowing
For an end to end time series example see: [Time series forecasting](../../tutorials/text/time_series.ipynb).
Time series data is often organized with the time axis intact.
Use a simple `Dataset.range` to demonstrate:
```
range_ds = tf.data.Dataset.range(100000)
```
Typically, models based on this sort of data will want a contiguous time slice.
The simplest approach would be to batch the data:
#### Using `batch`
```
batches = range_ds.batch(10, drop_remainder=True)
for batch in batches.take(5):
print(batch.numpy())
```
Or to make dense predictions one step into the future, you might shift the features and labels by one step relative to each other:
```
def dense_1_step(batch):
# Shift features and labels one step relative to each other.
return batch[:-1], batch[1:]
predict_dense_1_step = batches.map(dense_1_step)
for features, label in predict_dense_1_step.take(3):
print(features.numpy(), " => ", label.numpy())
```
To predict a whole window instead of a fixed offset you can split the batches into two parts:
```
batches = range_ds.batch(15, drop_remainder=True)
def label_next_5_steps(batch):
return (batch[:-5], # Take the first 5 steps
batch[-5:]) # take the remainder
predict_5_steps = batches.map(label_next_5_steps)
for features, label in predict_5_steps.take(3):
print(features.numpy(), " => ", label.numpy())
```
To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`:
```
feature_length = 10
label_length = 3
features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])
predicted_steps = tf.data.Dataset.zip((features, labels))
for features, label in predicted_steps.take(5):
print(features.numpy(), " => ", label.numpy())
```
#### Using `window`
While using `Dataset.batch` works, there are situations where you may need finer control. The `Dataset.window` method gives you complete control, but requires some care: it returns a `Dataset` of `Datasets`. See [Dataset structure](#dataset_structure) for details.
```
window_size = 5
windows = range_ds.window(window_size, shift=1)
for sub_ds in windows.take(5):
print(sub_ds)
```
The `Dataset.flat_map` method can take a dataset of datasets and flatten it into a single dataset:
```
for x in windows.flat_map(lambda x: x).take(30):
print(x.numpy(), end=' ')
```
In nearly all cases, you will want to `.batch` the dataset first:
```
def sub_to_batch(sub):
return sub.batch(window_size, drop_remainder=True)
for example in windows.flat_map(sub_to_batch).take(5):
print(example.numpy())
```
Now, you can see that the `shift` argument controls how much each window moves over.
Putting this together you might write this function:
```
def make_window_dataset(ds, window_size=5, shift=1, stride=1):
windows = ds.window(window_size, shift=shift, stride=stride)
def sub_to_batch(sub):
return sub.batch(window_size, drop_remainder=True)
windows = windows.flat_map(sub_to_batch)
return windows
ds = make_window_dataset(range_ds, window_size=10, shift = 5, stride=3)
for example in ds.take(10):
print(example.numpy())
```
Then it's easy to extract labels, as before:
```
dense_labels_ds = ds.map(dense_1_step)
for inputs,labels in dense_labels_ds.take(3):
print(inputs.numpy(), "=>", labels.numpy())
```
### Resampling
When working with a dataset that is very class-imbalanced, you may want to resample the dataset. `tf.data` provides two methods to do this. The credit card fraud dataset is a good example of this sort of problem.
Note: See [Imbalanced Data](../tutorials/keras/imbalanced_data.ipynb) for a full tutorial.
```
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/download.tensorflow.org/data/creditcard.zip',
fname='creditcard.zip',
extract=True)
csv_path = zip_path.replace('.zip', '.csv')
creditcard_ds = tf.data.experimental.make_csv_dataset(
csv_path, batch_size=1024, label_name="Class",
# Set the column types: 30 floats and an int.
column_defaults=[float()]*30+[int()])
```
Now, check the distribution of classes; it is highly skewed:
```
def count(counts, batch):
features, labels = batch
class_1 = labels == 1
class_1 = tf.cast(class_1, tf.int32)
class_0 = labels == 0
class_0 = tf.cast(class_0, tf.int32)
counts['class_0'] += tf.reduce_sum(class_0)
counts['class_1'] += tf.reduce_sum(class_1)
return counts
counts = creditcard_ds.take(10).reduce(
initial_state={'class_0': 0, 'class_1': 0},
reduce_func = count)
counts = np.array([counts['class_0'].numpy(),
counts['class_1'].numpy()]).astype(np.float32)
fractions = counts/counts.sum()
print(fractions)
```
A common approach to training with an imbalanced dataset is to balance it. `tf.data` includes a few methods which enable this workflow:
#### Datasets sampling
One approach to resampling a dataset is to use `sample_from_datasets`. This is more applicable when you have a separate `data.Dataset` for each class.
Here, just use filter to generate them from the credit card fraud data:
```
negative_ds = (
creditcard_ds
.unbatch()
.filter(lambda features, label: label==0)
.repeat())
positive_ds = (
creditcard_ds
.unbatch()
.filter(lambda features, label: label==1)
.repeat())
for features, label in positive_ds.batch(10).take(1):
print(label.numpy())
```
To use `tf.data.experimental.sample_from_datasets` pass the datasets, and the weight for each:
```
balanced_ds = tf.data.experimental.sample_from_datasets(
[negative_ds, positive_ds], [0.5, 0.5]).batch(10)
```
Now the dataset produces examples of each class with 50/50 probability:
```
for features, labels in balanced_ds.take(10):
print(labels.numpy())
```
#### Rejection resampling
One problem with the above `experimental.sample_from_datasets` approach is that
it needs a separate `tf.data.Dataset` per class. Using `Dataset.filter`
works, but results in all the data being loaded twice.
The `data.experimental.rejection_resample` function can be applied to a dataset to rebalance it, while only loading it once. Elements will be dropped from the dataset to achieve balance.
`data.experimental.rejection_resample` takes a `class_func` argument. This `class_func` is applied to each dataset element, and is used to determine which class an example belongs to for the purposes of balancing.
The elements of `creditcard_ds` are already `(features, label)` pairs. So the `class_func` just needs to return those labels:
```
def class_func(features, label):
return label
```
The resampler also needs a target distribution, and optionally an initial distribution estimate:
```
resampler = tf.data.experimental.rejection_resample(
class_func, target_dist=[0.5, 0.5], initial_dist=fractions)
```
The resampler deals with individual examples, so you must `unbatch` the dataset before applying the resampler:
```
resample_ds = creditcard_ds.unbatch().apply(resampler).batch(10)
```
The resampler creates `(class, example)` pairs from the output of the `class_func`. In this case, the `example` was already a `(feature, label)` pair, so use `map` to drop the extra copy of the labels:
```
balanced_ds = resample_ds.map(lambda extra_label, features_and_label: features_and_label)
```
Now the dataset produces examples of each class with 50/50 probability:
```
for features, labels in balanced_ds.take(10):
print(labels.numpy())
```
## Iterator Checkpointing
TensorFlow supports [taking checkpoints](https://www.tensorflow.org/guide/checkpoint) so that when your training process restarts it can restore the latest checkpoint to recover most of its progress. In addition to checkpointing the model variables, you can also checkpoint the progress of the dataset iterator. This could be useful if you have a large dataset and don't want to start the dataset from the beginning on each restart. Note, however, that iterator checkpoints may be large, since transformations such as `shuffle` and `prefetch` require buffering elements within the iterator.
To include your iterator in a checkpoint, pass the iterator to the `tf.train.Checkpoint` constructor.
```
range_ds = tf.data.Dataset.range(20)
iterator = iter(range_ds)
ckpt = tf.train.Checkpoint(step=tf.Variable(0), iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, '/tmp/my_ckpt', max_to_keep=3)
print([next(iterator).numpy() for _ in range(5)])
save_path = manager.save()
print([next(iterator).numpy() for _ in range(5)])
ckpt.restore(manager.latest_checkpoint)
print([next(iterator).numpy() for _ in range(5)])
```
Note: It is not possible to checkpoint an iterator which relies on external state such as a `tf.py_function`. Attempting to do so will raise an exception complaining about the external state.
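A minimal sketch of that failure mode (the exact exception type depends on the TensorFlow version, so this catches broadly):
```
external_ds = tf.data.Dataset.range(5).map(
    lambda x: tf.py_function(lambda v: v + 1, [x], tf.int64))
try:
  # Iterators over pipelines with external state cannot be serialized.
  ckpt = tf.train.Checkpoint(iterator=iter(external_ds))
  ckpt.save('/tmp/external_state_ckpt')
except Exception as error:
  print(type(error).__name__, error)
```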
## Using tf.data with tf.keras
The `tf.keras` API simplifies many aspects of creating and executing machine
learning models. Its `.fit()`, `.evaluate()`, and `.predict()` APIs accept datasets as inputs. Here is a quick dataset and model setup:
```
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
Passing a dataset of `(feature, label)` pairs is all that's needed for `Model.fit` and `Model.evaluate`:
```
model.fit(fmnist_train_ds, epochs=2)
```
If you pass an infinite dataset, for example by calling `Dataset.repeat()`, you just need to also pass the `steps_per_epoch` argument:
```
model.fit(fmnist_train_ds.repeat(), epochs=2, steps_per_epoch=20)
```
For evaluation you can pass the number of evaluation steps:
```
loss, accuracy = model.evaluate(fmnist_train_ds)
print("Loss :", loss)
print("Accuracy :", accuracy)
```
For long datasets, set the number of steps to evaluate:
```
loss, accuracy = model.evaluate(fmnist_train_ds.repeat(), steps=10)
print("Loss :", loss)
print("Accuracy :", accuracy)
```
The labels are not required when calling `Model.predict`.
```
predict_ds = tf.data.Dataset.from_tensor_slices(images).batch(32)
result = model.predict(predict_ds, steps = 10)
print(result.shape)
```
But the labels are ignored if you do pass a dataset containing them:
```
result = model.predict(fmnist_train_ds, steps = 10)
print(result.shape)
```
# Communication in Crisis
## Acquire
Data: [Los Angeles Parking Citations](https://www.kaggle.com/cityofLA/los-angeles-parking-citations)<br>
Load the dataset and filter for:
- Citations issued from 2017-01-01 to 2021-04-12.
- Street Sweeping violations - `Violation Description` == __"NO PARK/STREET CLEAN"__
Let's acquire the parking citations data from our file.
1. Import libraries.
1. Load the dataset.
1. Display the shape and first/last 2 rows.
1. Display general information about the dataset, with the number of unique values in each column.
1. Display the number of missing values in each column.
1. Descriptive statistics for all numeric features.
```
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import sys
import time
import folium.plugins as plugins
from IPython.display import HTML
import json
import datetime
import calplot
import folium
import math
sns.set()
from tqdm.notebook import tqdm
import src
# Filter warnings
from warnings import filterwarnings
filterwarnings('ignore')
# Load the data
df = src.get_sweep_data(prepared=False)
# Display the shape and dtypes of each column
print(df.shape)
df.info()
# Display the first two citations
df.head(2)
# Display the last two citations
df.tail(2)
# Display descriptive statistics of numeric columns
df.describe()
df.hist(figsize=(16, 8), bins=15)
plt.tight_layout();
```
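The checklist above also calls for the number of missing values in each column; that check is not in the cell, so here is a one-line sketch:
```
# Count missing values per column (the missing-value check listed above)
df.isnull().sum()
```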
__Initial findings__
- `Issue time` and `Marked Time` are quasi-normally distributed. Note: this resembles a Poisson distribution.
- It's interesting to see that the distribution of our activity on earth follows a roughly normal distribution.
- Agencies 50+ write the most parking citations.
- Most fine amounts are less than $100.00
- There are a few null or invalid license plates.
# Prepare
- Remove spaces + capitalization from each column name.
- Cast `Plate Expiry Date` to datetime data type.
- Cast `Issue Date` and `Issue Time` to datetime data types.
- Drop columns missing >=74.42\% of their values.
- Drop missing values.
- Transform the Latitude and Longitude columns from the NAD 1983 State Plane California V FIPS 0405 (feet) projection to EPSG:4326, the World Geodetic System 1984 standard used by GPS (a sketch of this step follows the list).
- Filter data for street sweeping citations only.
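The actual cleaning lives in `src`/`prepare.py`; the snippet below is only a rough sketch of the coordinate-transform step, assuming the source projection is EPSG:2229 (NAD83 / California zone 5, US feet) and that the raw columns are named `Latitude` and `Longitude` (both are assumptions, not details taken from `src`):
```
from pyproj import Transformer

# Hypothetical sketch: project State Plane (feet) coordinates to WGS 84 lat/lon.
transformer = Transformer.from_crs("EPSG:2229", "EPSG:4326", always_xy=True)

def to_wgs84(df):
    lon, lat = transformer.transform(df['Longitude'].values, df['Latitude'].values)
    df['longitude'], df['latitude'] = lon, lat
    return df
```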
```
# Prepare the data using a function stored in prepare.py
df_citations = src.get_sweep_data(prepared=True)
# Display the first two rows
df_citations.head(2)
# Check the column data types and non-null counts.
df_citations.info()
```
# Exploration
## How much daily revenue is generated from street sweeper citations?
### Daily Revenue from Street Sweeper Citations
Daily street sweeper citations increased in 2020.
```
# Daily street sweeping citation revenue
daily_revenue = df_citations.groupby('issue_date').fine_amount.sum()
daily_revenue.index = pd.to_datetime(daily_revenue.index)
df_sweep = src.street_sweep(data=df_citations)
df_d = src.resample_period(data=df_sweep)
df_m = src.resample_period(data=df_sweep, period='M')
df_d.head()
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
```
> __Anomaly__: Between March 2020 and October 2020 a Local Emergency was Declared by the Mayor of Los Angeles in response to COVID-19. Street Sweeping was halted to help Angelenos Shelter in Place. _Street Sweeping resumed on 10/15/2020_.
### Anomaly: Declaration of Local Emergency
```
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axvspan('2020-03-16', '2020-10-14', color='grey', alpha=.25)
plt.text('2020-03-29', 890_000, 'Declaration of\nLocal Emergency', fontsize=11)
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')
plt.axvline(datetime.datetime(2020, 10, 15), color='red', linestyle="--", label='October 15, 2020')
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200K', '$400K', '$600K', '$800K',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
```
## Hypothesis Test
### General Inquiry
Is the daily citation revenue after 10/15/2020 significantly greater than average?
### Z-Score
$H_0$: The daily citation revenue after 10/15/2020 is less than or equal to the average daily revenue.
$H_a$: The daily citation revenue after 10/15/2020 is significantly greater than average.
```
confidence_interval = .997
# Directional Test
alpha = (1 - confidence_interval)/2
# Data to calculate z-scores using precovid values to calculate the mean and std
daily_revenue_precovid = df_d.loc[df_d.index < '2020-03-16']['revenue']
mean_precovid, std_precovid = daily_revenue_precovid.agg(['mean', 'std']).values
mean, std = df_d.agg(['mean', 'std']).values
# Calculating Z-Scores using precovid mean and std
z_scores_precovid = (df_d.revenue - mean_precovid)/std_precovid
z_scores_precovid.index = pd.to_datetime(z_scores_precovid.index)
sig_zscores_pre_covid = z_scores_precovid[z_scores_precovid>3]
# Calculating Z-Scores using entire data
z_scores = (df_d.revenue - mean)/std
z_scores.index = pd.to_datetime(z_scores.index)
sig_zscores = z_scores[z_scores>3]
sns.set_context('talk')
plt.figure(figsize=(12, 6))
sns.histplot(data=z_scores_precovid,
bins=50,
label='preCOVID z-scores')
sns.histplot(data=z_scores,
bins=50,
color='orange',
label='z-scores')
plt.title('Daily citation revenue after 10/15/2020 is significantly greater than average', fontsize=16)
plt.xlabel('Standard Deviations')
plt.ylabel('# of Days')
plt.axvline(3, color='Black', linestyle="--", label='3 Standard Deviations')
plt.xticks(np.linspace(-1, 9, 11))
plt.legend(fontsize=13);
a = stats.zscore(daily_revenue)
fig, ax = plt.subplots(figsize=(8, 8))
stats.probplot(a, plot=ax)
plt.xlabel("Quantile of Normal Distribution")
plt.ylabel("z-score");
```
### p-values
```
p_values_precovid = z_scores_precovid.apply(stats.norm.cdf)
p_values = z_scores.apply(stats.norm.cdf)
significant_dates_precovid = p_values_precovid[(1-p_values_precovid) < alpha]
significant_dates = p_values[(1-p_values) < alpha]
# The chance of an outcome occurring by random chance
print(f'{alpha:0.3%}')
```
### Cohen's D
```
fractions = [.1, .2, .5, .7, .9]
cohen_d = []
for percentage in fractions:
cohen_d_trial = []
for i in range(10000):
sim = daily_revenue.sample(frac=percentage)
sim_mean = sim.mean()
d = (sim_mean - mean) / (std/math.sqrt(int(len(daily_revenue)*percentage)))
cohen_d_trial.append(d)
cohen_d.append(np.mean(cohen_d_trial))
cohen_d
fractions = [.1, .2, .5, .7, .9]
cohen_d_precovid = []
for percentage in fractions:
cohen_d_trial = []
for i in range(10000):
sim = daily_revenue_precovid.sample(frac=percentage)
sim_mean = sim.mean()
d = (sim_mean - mean_precovid) / (std_precovid/math.sqrt(int(len(daily_revenue_precovid)*percentage)))
cohen_d_trial.append(d)
cohen_d_precovid.append(np.mean(cohen_d_trial))
cohen_d_precovid
```
### Significant Dates with less than a 0.15% chance of occurring
- All dates that are considered significant occur after 10/15/2020
- In the two weeks following 10/15/2020, significant events occurred on __Tuesdays and Wednesdays__.
```
dates_precovid = set(list(sig_zscores_pre_covid.index))
dates = set(list(sig_zscores.index))
common_dates = list(dates.intersection(dates_precovid))
common_dates = pd.to_datetime(common_dates).sort_values()
sig_zscores
pd.Series(common_dates.day_name(),
common_dates)
np.random.seed(sum(map(ord, 'calplot')))
all_days = pd.date_range('1/1/2020', '12/22/2020', freq='D')
significant_events = pd.Series(np.ones(len(common_dates)), index=common_dates)
calplot.calplot(significant_events, figsize=(18, 12), cmap='coolwarm_r');
```
## Which parts of the city were impacted the most?
```
df_outliers = df_citations.loc[df_citations.issue_date.isin(list(common_dates.astype('str')))]
df_outliers.reset_index(drop=True, inplace=True)
print(df_outliers.shape)
df_outliers.head()
m = folium.Map(location=[34.0522, -118.2437],
min_zoom=8,
max_bounds=True)
mc = plugins.MarkerCluster()
for index, row in df_outliers.iterrows():
mc.add_child(
folium.Marker(location=[str(row['latitude']), str(row['longitude'])],
popup='Cited {} {} at {}'.format(row['day_of_week'],
row['issue_date'],
row['issue_time'][:-3]),
control_scale=True,
clustered_marker=True
)
)
m.add_child(mc)
```
Transferring the map to Tableau
# Conclusions
# Appendix
## What time(s) are Street Sweeping citations issued?
Most citations are issued during the hours of 8am, 10am, and 12pm.
### Citation Times
```
# Filter street sweeping data for citations issued between
# 8 am and 2 pm, 8 and 14 respectively.
df_citation_times = df_citations.loc[(df_citations.issue_hour >= 8)&(df_citations.issue_hour < 14)]
sns.set_context('talk')
# Issue Hour Plot
df_citation_times.issue_hour.value_counts().sort_index().plot.bar(figsize=(8, 6))
# Axis labels
plt.title('Most Street Sweeper Citations are Issued at 8am')
plt.xlabel('Issue Hour (24HR)')
plt.ylabel('# of Citations (in thousands)')
# Chart Formatting
plt.xticks(rotation=0)
plt.yticks(range(100_000, 400_001,100_000), ['100', '200', '300', '400'])
plt.show()
sns.set_context('talk')
# Issue Minute Plot
df_citation_times.issue_minute.value_counts().sort_index().plot.bar(figsize=(20, 9))
# Axis labels
plt.title('Most Street Sweeper Citations are Issued in the First 30 Minutes')
plt.xlabel('Issue Minute')
plt.ylabel('# of Citations (in thousands)')
# plt.axvspan(0, 30, facecolor='grey', alpha=0.1)
# Chart Formatting
plt.xticks(rotation=0)
plt.yticks(range(5_000, 40_001, 5_000), ['5', '10', '15', '20', '25', '30', '35', '40'])
plt.tight_layout()
plt.show()
```
## Which state has the most Street Sweeping violators?
### License Plate
Over 90% of all street sweeping citations are issued to California Residents.
```
sns.set_context('talk')
fig = df_citations.rp_state_plate.value_counts(normalize=True).nlargest(3).plot.bar(figsize=(12, 6))
# Chart labels
plt.title('California residents receive the most street sweeping citations', fontsize=16)
plt.xlabel('State')
plt.ylabel('% of all Citations')
# Tick Formatting
plt.xticks(rotation=0)
plt.yticks(np.linspace(0, 1, 11), labels=[f'{i:0.0%}' for i in np.linspace(0, 1, 11)])
plt.grid(axis='x', alpha=.5)
plt.tight_layout();
```
## Which street has the most Street Sweeping citations?
The characteristics of the top 3 streets:
1. Vehicles are parked bumper to bumper leaving few parking spaces available
2. Parking spaces have a set time limit
```
df_citations['street_name'] = df_citations.location.str.replace('^[\d+]{2,}', '').str.strip()
sns.set_context('talk')
# Removing the street number and white space from the address
df_citations.street_name.value_counts().nlargest(3).plot.barh(figsize=(16, 6))
# Chart formatting
plt.title('Streets with the Most Street Sweeping Citations', fontsize=24)
plt.xlabel('# of Citations');
```
### __Abbot Kinney Blvd: "Small Boutiques, No Parking"__
> [Abbot Kinney Blvd on Google Maps](https://www.google.com/maps/@33.9923689,-118.4731719,3a,75y,112.99h,91.67t/data=!3m6!1e1!3m4!1sKD3cG40eGmdWxhwqLD1BvA!2e0!7i16384!8i8192)
<img src="./visuals/abbot.png" alt="Abbot" style="width: 450px;" align="left"/>
- Near Venice Beach
- Small businesses and name brand stores line both sides of the street
- Little to no parking in this area
- Residential area inland
- Multiplex style dwellings with available parking spaces
- Weekly Street Sweeping on Monday from 7:30 am - 9:30 am
### __Clinton Street: "Packed Street"__
> [Clinton Street on Google Maps](https://www.google.com/maps/@34.0816611,-118.3306842,3a,75y,70.72h,57.92t/data=!3m9!1e1!3m7!1sdozFgC7Ms3EvaOF4-CeNAg!2e0!7i16384!8i8192!9m2!1b1!2i37)
<img src="./visuals/clinton.png" alt="Clinton" style="width: 600px;" align="Left"/>
- All parking spaces on the street are filled
- Residential Area
- Weekly Street Sweeping on Friday from 8:00 am - 11:00 am
### __Kelton Ave: "2 Hour Time Limit"__
> [Kelton Ave on Google Maps](https://www.google.com/maps/place/Kelton+Ave,+Los+Angeles,+CA/@34.0475262,-118.437594,3a,49.9y,183.92h,85.26t/data=!3m9!1e1!3m7!1s5VICHNYMVEk9utaV5egFYg!2e0!7i16384!8i8192!9m2!1b1!2i25!4m5!3m4!1s0x80c2bb7efb3a05eb:0xe155071f3fe49df3!8m2!3d34.0542999!4d-118.4434919)
<img src="./visuals/kelton.png" width="600" height="600" align="left"/>
- Most parking spaces on this street are available. This is due to the strict 2 hour time limit for parked vehicles without the proper exception permit.
- Multiplex, Residential Area
- Weekly Street Sweeping on Thursday from 10:00 am - 1:00 pm
- Weekly Street Sweeping on Friday from 8:00 am - 10:00 am
## Which street has the most Street Sweeping citations, given the day of the week?
- __Abbot Kinney Blvd__ is the most cited street on __Monday and Tuesday__
- __4th Street East__ is the most cited street on __Saturday and Sunday__
```
# Group by the day of the week and street name
df_day_street = df_citations.groupby(by=['day_of_week', 'street_name'])\
.size()\
.sort_values()\
.groupby(level=0)\
.tail(1)\
.reset_index()\
.rename(columns={0:'count'})
# Create a new column to sort the values by the day of the
# week starting with Monday
df_day_street['order'] = [5, 6, 4, 3, 0, 2, 1]
# Display the street with the most street sweeping citations
# given the day of the week.
df_day_street.sort_values('order').set_index('order')
```
## Which Agencies issue the most street sweeping citations?
The Department of Transportation's __Western, Hollywood, and Valley__ subdivisions issue the most street sweeping citations.
```
sns.set_context('talk')
df_citations.agency.value_counts().nlargest(5).plot.barh(figsize=(12, 6));
# plt.axhspan(2.5, 5, facecolor='0.5', alpha=.8)
plt.title('Agencies With the Most Street Sweeper Citations')
plt.xlabel('# of Citations (in thousands)')
plt.xticks(np.arange(0, 400_001, 100_000), list(np.arange(0, 401, 100)))
plt.yticks([0, 1, 2, 3, 4], labels=['DOT-WESTERN',
'DOT-HOLLYWOOD',
'DOT-VALLEY',
'DOT-SOUTHERN',
'DOT-CENTRAL']);
```
When taking routes into consideration, __"Western"__ Subdivision, route 00500, has issued the most street sweeping citations.
- Is route 00500 larger than other street sweeping routes?
```
top_3_routes = df_citations.groupby(['agency', 'route'])\
.size()\
.nlargest(3)\
.sort_index()\
.rename('num_citations')\
.reset_index()\
.sort_values(by='num_citations', ascending=False)
top_3_routes.agency = ["DOT-WESTERN", "DOT-SOUTHERN", "DOT-CENTRAL"]
data = top_3_routes.set_index(['agency', 'route'])
data.plot(kind='barh', stacked=True, figsize=(12, 6), legend=None)
plt.title("Agency-Route ID's with the most Street Sweeping Citations")
plt.ylabel('')
plt.xlabel('# of Citations (in thousands)')
plt.xticks(np.arange(0, 70_001, 10_000), [str(i) for i in np.arange(0, 71, 10)]);
df_citations['issue_time_num'] = df_citations.issue_time.str.replace(":00", '')
df_citations['issue_time_num'] = df_citations.issue_time_num.str.replace(':', '').astype(int)
```
## What is the weekly distribution of citation times?
```
sns.set_context('talk')
plt.figure(figsize=(13, 12))
sns.boxplot(data=df_citations,
x="day_of_week",
y="issue_time_num",
order=["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"],
whis=3);
plt.title("Distribution Citation Issue Times Throughout the Week")
plt.xlabel('')
plt.ylabel('Issue Time (24HR)')
plt.yticks(np.arange(0, 2401, 200), [str(i) + ":00" for i in range(0, 25, 2)]);
```
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Plotly's python package is updated frequently. Run `pip install plotly --upgrade` to use the latest version.
```
import plotly
plotly.__version__
```
### Custom Discretized Heatmap Colorscale
```
import plotly.plotly as py
py.iplot([{
'z': [
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
],
'type': 'heatmap',
'colorscale': [
# Let first 10% (0.1) of the values have color rgb(0, 0, 0)
[0, 'rgb(0, 0, 0)'],
[0.1, 'rgb(0, 0, 0)'],
# Let values between 10-20% of the min and max of z
# have color rgb(20, 20, 20)
[0.1, 'rgb(20, 20, 20)'],
[0.2, 'rgb(20, 20, 20)'],
# Values between 20-30% of the min and max of z
# have color rgb(40, 40, 40)
[0.2, 'rgb(40, 40, 40)'],
[0.3, 'rgb(40, 40, 40)'],
[0.3, 'rgb(60, 60, 60)'],
[0.4, 'rgb(60, 60, 60)'],
[0.4, 'rgb(80, 80, 80)'],
[0.5, 'rgb(80, 80, 80)'],
[0.5, 'rgb(100, 100, 100)'],
[0.6, 'rgb(100, 100, 100)'],
[0.6, 'rgb(120, 120, 120)'],
[0.7, 'rgb(120, 120, 120)'],
[0.7, 'rgb(140, 140, 140)'],
[0.8, 'rgb(140, 140, 140)'],
[0.8, 'rgb(160, 160, 160)'],
[0.9, 'rgb(160, 160, 160)'],
[0.9, 'rgb(180, 180, 180)'],
[1.0, 'rgb(180, 180, 180)']
],
'colorbar': {
'tick0': 0,
'dtick': 1
}
}], filename='heatmap-discrete-colorscale')
```
### Colorscale for Scatter Plots
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Scatter(
y=[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
marker=dict(
size=16,
cmax=39,
cmin=0,
color=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
colorbar=dict(
title='Colorbar'
),
colorscale='Viridis'
),
mode='markers')
]
fig = go.Figure(data=data)
py.iplot(fig)
```
### Colorscale for Contour Plot
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Contour(
z=[[10, 10.625, 12.5, 15.625, 20],
[5.625, 6.25, 8.125, 11.25, 15.625],
[2.5, 3.125, 5., 8.125, 12.5],
[0.625, 1.25, 3.125, 6.25, 10.625],
[0, 0.625, 2.5, 5.625, 10]],
colorscale='Jet',
)
]
py.iplot(data, filename='simple-colorscales-colorscale')
```
### Custom Heatmap Colorscale
```
import plotly.plotly as py
import plotly.graph_objs as go
import six.moves.urllib
import json
response = six.moves.urllib.request.urlopen('https://raw.githubusercontent.com/plotly/datasets/master/custom_heatmap_colorscale.json')
dataset = json.load(response)
data = [
go.Heatmap(
z=dataset['z'],
colorscale=[[0.0, 'rgb(165,0,38)'], [0.1111111111111111, 'rgb(215,48,39)'], [0.2222222222222222, 'rgb(244,109,67)'], [0.3333333333333333, 'rgb(253,174,97)'], [0.4444444444444444, 'rgb(254,224,144)'], [0.5555555555555556, 'rgb(224,243,248)'], [0.6666666666666666, 'rgb(171,217,233)'], [0.7777777777777778, 'rgb(116,173,209)'], [0.8888888888888888, 'rgb(69,117,180)'], [1.0, 'rgb(49,54,149)']]
)
]
py.iplot(data, filename='custom-colorscale')
```
### Custom Contour Plot Colorscale
```
import plotly.plotly as py
import plotly.graph_objs as go
data = [
go.Contour(
z=[[10, 10.625, 12.5, 15.625, 20],
[5.625, 6.25, 8.125, 11.25, 15.625],
[2.5, 3.125, 5., 8.125, 12.5],
[0.625, 1.25, 3.125, 6.25, 10.625],
[0, 0.625, 2.5, 5.625, 10]],
colorscale=[[0, 'rgb(166,206,227)'], [0.25, 'rgb(31,120,180)'], [0.45, 'rgb(178,223,138)'], [0.65, 'rgb(51,160,44)'], [0.85, 'rgb(251,154,153)'], [1, 'rgb(227,26,28)']],
)
]
py.iplot(data, filename='colorscales-custom-colorscale')
```
### Custom Colorbar
```
import plotly.plotly as py
import plotly.graph_objs as go
import six.moves.urllib
import json
response = six.moves.urllib.request.urlopen('https://raw.githubusercontent.com/plotly/datasets/master/custom_heatmap_colorscale.json')
dataset = json.load(response)
data = [
go.Heatmap(
z=dataset['z'],
colorscale=[[0.0, 'rgb(165,0,38)'], [0.1111111111111111, 'rgb(215,48,39)'], [0.2222222222222222, 'rgb(244,109,67)'],
[0.3333333333333333, 'rgb(253,174,97)'], [0.4444444444444444, 'rgb(254,224,144)'], [0.5555555555555556, 'rgb(224,243,248)'],
[0.6666666666666666, 'rgb(171,217,233)'],[0.7777777777777778, 'rgb(116,173,209)'], [0.8888888888888888, 'rgb(69,117,180)'],
[1.0, 'rgb(49,54,149)']],
colorbar = dict(
title = 'Surface Heat',
titleside = 'top',
tickmode = 'array',
tickvals = [2,50,100],
ticktext = ['Hot','Mild','Cool'],
ticks = 'outside'
)
)
]
py.iplot(data, filename='custom-colorscale-colorbar')
```
### Dash Example
```
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-colorscaleplot/" ,width="100%" ,height="650px", frameBorder="0")
```
Find the dash app source code [here](https://github.com/plotly/simple-example-chart-apps/tree/master/colorscale)
### Reference
See https://plot.ly/python/reference/ for more information and chart attribute options!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'colorscales.ipynb', 'python/colorscales/', 'Colorscales',
'How to set colorscales and heatmap colorscales in Python and Plotly. Divergent, sequential, and qualitative colorscales.',
title = 'Colorscales in Python | Plotly',
has_thumbnail='true', thumbnail='thumbnail/heatmap_colorscale.jpg',
language='python',
page_type='example_index',
display_as='style_opt',
order=11,
ipynb= '~notebook_demo/187')
```
<h1><center>Clustering Chicago Public Libraries by Top 10 Nearby Venues</center></h1>
<h4><center>Author: Kunyu He</center></h4>
<h5><center>University of Chicago CAPP'20<h5><center>
### Executive Summary
In this notebook, I clustered 80 public libraries in the city of Chicago into 7 clusters, based on the categories of their top ten nearby venues. The result is meant as a guide for those who like to spend their days in these libraries and explore the surroundings, but have grown tired of staying in only one or a few of them over time.
The rest of this notebook is organized as follows:
The [Data](#Data) section briefly introduces the data sources, and the [Methodology](#Methodology) section briefly introduces the unsupervised learning algorithms used. In the [Imports and Format Parameters](#Imports-and-Format-Parameters) section, I install and import the Python libraries used and set the global constants for future use. The [Getting and Cleaning Data](#Getting-and-Cleaning-Data) section contains code for downloading and cleaning the public library and nearby-venue data from external sources. I perform dimension reduction, clustering, and labelling mainly in the [Data Analysis](#Data-Analysis) section. Finally, the resulting folium map is presented in the [Results](#Results) section, and the [Discussions](#Discussions) section covers caveats and potential improvements.
### Data
Information on the public libraries is provided by the [Chicago Public Library](https://www.chipublib.org/). You can access the data [here](https://data.cityofchicago.org/Education/Libraries-Locations-Hours-and-Contact-Information/x8fc-8rcq).
Information on the top venues near the public libraries (within a radius of 1000 meters) is acquired from the [FourSquare API](https://developer.foursquare.com/). You can explore the surroundings of any geographical coordinates of interest with a developer account.
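For example, a single call to the `venues/explore` endpoint (the same endpoint used by the `get_venues` function defined later in this notebook) could look like the sketch below; `CLIENT_ID` and `CLIENT_SECRET` are placeholders for your own developer credentials.
```
import requests

CLIENT_ID, CLIENT_SECRET = 'YOUR_CLIENT_ID', 'YOUR_CLIENT_SECRET'  # placeholders, not real credentials
url = ('https://api.foursquare.com/v2/venues/explore'
       '?&client_id={}&client_secret={}&v=20181206&ll={},{}&radius=1000&limit=10'
       .format(CLIENT_ID, CLIENT_SECRET, 41.881832, -87.623177))
# the explore response nests the venues under response -> groups -> items
items = requests.get(url).json()['response']['groups'][0]['items']
print([item['venue']['name'] for item in items])
```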
### Methodology
The clustering algorithms used include:
* [Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) with [Truncated SVD](http://infolab.stanford.edu/pub/cstr/reports/na/m/86/36/NA-M-86-36.pdf);
* [KMeans Clustering](https://en.wikipedia.org/wiki/K-means_clustering);
* [Hierarchical Clustering](https://en.wikipedia.org/wiki/Hierarchical_clustering) with [Ward's Method](https://en.wikipedia.org/wiki/Ward%27s_method).
PCA with TSVD is used to reduce the dimension of our feature matrix, which is a [sparse matrix](https://en.wikipedia.org/wiki/Sparse_matrix). KMeans and hierarchical clustering are applied to cluster the libraries in terms of their top ten nearby venue categories, and the final labels are derived from hierarchical clustering with Ward distance.
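As a minimal end-to-end sketch of this pipeline on dummy data (the real feature matrix is built later in this notebook), assuming scikit-learn and SciPy:
```
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.RandomState(0)
toy = rng.binomial(1, 0.05, size=(20, 100)).astype(float)  # sparse-ish dummy feature matrix

reduced = TruncatedSVD(n_components=5, random_state=0).fit_transform(toy)  # dimension reduction
kmeans_labels = KMeans(n_clusters=3, random_state=0).fit_predict(reduced)  # KMeans labels
ward_labels = fcluster(linkage(reduced, 'ward'), t=3, criterion='maxclust')  # Ward labels
print(kmeans_labels, ward_labels)
```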
### Imports and Format Parameters
```
import pandas as pd
import numpy as np
import re
import requests
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
from pandas.io.json import json_normalize
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
```
For visualization, install [folium](https://github.com/python-visualization/folium) and make an additional import.
```
!conda install --quiet -c conda-forge folium --yes
import folium
%matplotlib inline
title = FontProperties()
title.set_family('serif')
title.set_size(16)
title.set_weight('bold')
axis = FontProperties()
axis.set_family('serif')
axis.set_size(12)
plt.rcParams['figure.figsize'] = [12, 8]
```
Hard-code the geographical coordinates of the City of Chicago based on [this](https://www.latlong.net/place/chicago-il-usa-1855.html) page. Also prepare formatting parameters for folium map markers.
```
LATITUDE, LOGITUDE = 41.881832, -87.623177
ICON_COLORS = ['red', 'blue', 'green', 'purple', 'orange', 'beige', 'darkred']
HTML = """
<center><h4><b>Library {}</b></h4></center>
<h5><b>Cluster:</b> {};</h5>
<h5><b>Hours of operation:</b><br>
{}</h5>
<h5><b>Top five venues:</b><br>
<center>{}<br>
{}<br>
{}<br>
{}<br>
{}</center></h5>
"""
```
### Getting and Cleaning Data
#### Public Library Data
```
!wget --quiet https://data.cityofchicago.org/api/views/x8fc-8rcq/rows.csv?accessType=DOWNLOAD -O libraries.csv
lib = pd.read_csv('libraries.csv', usecols=['NAME ', 'HOURS OF OPERATION', 'LOCATION'])
lib.columns = ['library', 'hours', 'location']
lib.info()
```
Notice that locations are stored as strings of tuples. Applying the following function to `lib`, we can convert `location` into two separate columns of latitudes and longitudes of the libraries.
```
def sep_location(row):
"""
Purpose: separate the string of location in a given row, convert it into a tuple
of floats, representing latitude and longitude of the library respectively
Inputs:
row (PandasSeries): a row from the `lib` dataframe
Outputs:
(tuple): of floats representing latitude and longitude of the library
"""
return tuple(float(re.compile('[()]').sub("", coordinate)) for \
coordinate in row.location.split(', '))
lib[['latitude', 'longitude']] = lib.apply(sep_location, axis=1).apply(pd.Series)
lib.drop('location', axis=1, inplace=True)
lib.head()
```
Now data on the public libraries is ready for analysis.
#### Venue Data
Use sensitive code cell below to enter FourSquare credentials.
```
# The code was removed by Watson Studio for sharing.
```
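If you are running this notebook outside Watson Studio, the hidden cell above only needs to define the two credential variables used later; for example (placeholder values, not real credentials):
```
CLIENT_ID = 'YOUR_FOURSQUARE_CLIENT_ID'          # placeholder
CLIENT_SECRET = 'YOUR_FOURSQUARE_CLIENT_SECRET'  # placeholder
```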
Get the top ten venues near each library and store the data in the `venues` dataframe, with the search radius set to 1000 meters by default. You can update the `VERSION` parameter to get up-to-date venue information.
```
VERSION = '20181206'
FEATURES = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
def get_venues(libraries, latitudes, longitudes, limit=10, radius=1000.0):
"""
Purpose: download nearby venues information through FourSquare API in a dataframe
Inputs:
libraries (PandasSeries): names of the public libraries
latitudes (PandasSeries): latitudes of the public libraries
longitudes (PandasSeries): longitudes of the public libraries
limit (int): number of top venues to explore, default to 10
radius (float): range of the circle coverage to define 'nearby', default to 1000.0
Outputs: (DataFrame)
"""
venues_lst = []
for library, lat, lng in zip(libraries, latitudes, longitudes):
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( \
CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, limit)
items = requests.get(url).json()["response"]['groups'][0]['items']
venues_lst.append([(library, lat, lng, \
item['venue']['name'], \
item['venue']['location']['lat'], item['venue']['location']['lng'], \
item['venue']['categories'][0]['name']) for item in items])
    venues = pd.DataFrame([item for lib_venues in venues_lst for item in lib_venues])
venues.columns = ['Library', 'Library Latitude', 'Library Longitude', \
'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category']
return venues
venues = get_venues(lib.library, lib.latitude, lib.longitude)
venues.head()
```
Count unique libraries, venues and venue categories in our `venues` dataframe.
```
print('There are {} unique libraries, {} unique venues and {} unique categories.'.format( \
len(venues.Library.unique()), \
len(venues.Venue.unique()), \
len(venues['Venue Category'].unique())))
```
Now our `venues` data is also ready for further analysis.
### Data Analysis
#### Data Preprocessing
Apply one-hot encoding to get our feature matrix, group the venues by library, and compute the frequency of each venue category around each library by taking the mean.
```
features = pd.get_dummies(venues['Venue Category'], prefix="", prefix_sep="")
features.insert(0, 'Library Name', venues.Library)
X = features.groupby('Library Name').mean()  # 'Library Name' becomes the index, so all venue-category columns are kept
X.head()
```
There are too many venue categories in our features dataframe, so we reduce the dimensionality of the data with PCA. Since most entries in the feature matrix are zero (i.e., the matrix is sparse), we perform the dimension reduction with truncated SVD.
First, attempt to find the least number of dimensions to keep 85% of the variance and transform the feature matrix.
```
tsvd = TruncatedSVD(n_components=X.shape[1]-1, random_state=0).fit(X)
least_n = np.argmax(tsvd.explained_variance_ratio_.cumsum() > 0.85)
print("In order to keep 85% of total variance, we need to keep at least {} dimensions.".format(least_n))
X_t = pd.DataFrame(TruncatedSVD(n_components=least_n, random_state=0).fit_transform(X))
```
Use KMeans on the transformed data and find the best number of k below.
```
ks = np.arange(1, 51)
inertias = []
for k in ks:
model = KMeans(n_clusters=k, random_state=0).fit(X_t)
inertias.append(model.inertia_)
plt.plot(ks, inertias, linewidth=2)
plt.title("Figure 1 KMeans: Finding Best k", fontproperties=title)
plt.xlabel('Number of Clusters (k)', fontproperties=axis)
plt.ylabel('Within-cluster Sum-of-squares', fontproperties=axis)
plt.xticks(np.arange(1, 51, 2))
plt.show()
```
It is hard to pick k from the elbow plot, as the within-cluster sum-of-squares keeps decreasing all the way to k = 50. As an alternative, try hierarchical clustering with Ward's method.
```
merging = linkage(X_t, 'ward')
plt.figure(figsize=[20, 10])
dendrogram(merging,
leaf_rotation=90,
leaf_font_size=10,
distance_sort='descending',
show_leaf_counts=True)
plt.axhline(y=0.65, dashes=[6, 2], c='r')
plt.xlabel('Library Names', fontproperties=axis)
plt.title("Figure 2 Hierachical Clustering with Ward Distance: Cutting at 0.65", fontproperties=title)
plt.show()
```
The result is much easier to interpret than the KMeans elbow plot. We cut the dendrogram at 0.65 (the red dashed line in Figure 2) to obtain the cluster labels. Label the clustered libraries below, then join the labelled library names with `lib` to attach the geographical coordinates and hours of operation of the public libraries.
```
labels = fcluster(merging, t=0.65, criterion='distance')
df = pd.DataFrame(list(zip(X.index.values, labels)))
df.columns = ['library', 'cluster']
merged = pd.merge(lib, df, how='inner', on='library')
merged.head()
```
### Results
Create a `folium.Map` instance `chicago` with initial zoom level of 11.
```
chicago = folium.Map(location=[LATITUDE, LOGITUDE], zoom_start=11)
```
Check the clustered map! Click on the icons to see the name, hours of operation and top five nearby venues of each public library in the city of Chicago!
```
for index, row in merged.iterrows():
venues_name = venues[venues.Library == row.library].Venue.values
label = folium.Popup(HTML.format(row.library, row.cluster, row.hours, venues_name[0], venues_name[1], venues_name[2], venues_name[3], venues_name[4]), parse_html=False)
folium.Marker([row.latitude, row.longitude], popup=label, icon=folium.Icon(color=ICON_COLORS[row.cluster-1], icon='book')).add_to(chicago)
chicago
```
### Discussions
There might be several caveats in my analysis:
* Libraries are clustered merely according to the categories of their surrounding venues; other characteristics are left out of consideration;
* The resulting venues are not always unique, i.e. not every public library has ten distinct venues. This might result from venues sharing the same name in some cases, or from the nearby areas of these libraries overlapping.
Future improvements might include:
* Include hyperlinks to venue photos and tips to make it easier for users to check them out in advance;
* Use better algorithms to cluster the libraries.
```
from keras.models import load_model
import pandas as pd
import keras.backend as K
from keras.callbacks import LearningRateScheduler
from keras.callbacks import Callback
import math
import numpy as np
def coeff_r2(y_true, y_pred):
from keras import backend as K
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
model = load_model('./FPV_ANN_tabulated_Standard_4Res_500n.H5')
# model = load_model('../tmp/large_next.h5',custom_objects={'coeff_r2':coeff_r2})
# model = load_model('../tmp/calc_100_3_3_cbrt.h5', custom_objects={'coeff_r2':coeff_r2})
model.summary()
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler, StandardScaler
class data_scaler(object):
def __init__(self):
self.norm = None
self.norm_1 = None
self.std = None
self.case = None
self.scale = 1
self.bias = 1e-20
# self.bias = 1
self.switcher = {
'min_std': 'min_std',
'std2': 'std2',
'std_min':'std_min',
'min': 'min',
'no':'no',
'log': 'log',
'log_min':'log_min',
'log_std':'log_std',
'log2': 'log2',
'sqrt_std': 'sqrt_std',
'cbrt_std': 'cbrt_std',
'nrt_std':'nrt_std',
'tan': 'tan'
}
def fit_transform(self, input_data, case):
self.case = case
if self.switcher.get(self.case) == 'min_std':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.norm.fit_transform(input_data)
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'std2':
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'min':
self.norm = MinMaxScaler()
out = self.norm.fit_transform(input_data)
if self.switcher.get(self.case) == 'no':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = input_data
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.norm = MinMaxScaler()
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'log_std':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'log2':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.norm.fit_transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'sqrt_std':
out = np.sqrt(np.asarray(input_data / self.scale))
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'cbrt_std':
out = np.cbrt(np.asarray(input_data / self.scale))
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'nrt_std':
out = np.power(np.asarray(input_data / self.scale),1/4)
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'tan':
self.norm = MaxAbsScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.norm.transform(input_data)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.std.transform(input_data)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'log_std':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'log2':
out = self.norm.transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'sqrt_std':
out = np.sqrt(np.asarray(input_data / self.scale))
out = self.std.transform(out)
if self.switcher.get(self.case) == 'cbrt_std':
out = np.cbrt(np.asarray(input_data / self.scale))
out = self.std.transform(out)
if self.switcher.get(self.case) == 'nrt_std':
out = np.power(np.asarray(input_data / self.scale),1/4)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'tan':
out = self.std.transform(input_data)
out = self.norm.transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def inverse_transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.std.inverse_transform(input_data)
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.inverse_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.norm.inverse_transform(input_data)
out = self.std.inverse_transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.inverse_transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log_min':
out = self.norm.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log_std':
out = self.std.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log2':
out = self.std.inverse_transform(input_data)
out = np.exp(out) - self.bias
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'sqrt_std':
out = self.std.inverse_transform(input_data)
out = np.power(out,2) * self.scale
if self.switcher.get(self.case) == 'cbrt_std':
out = self.std.inverse_transform(input_data)
out = np.power(out,3) * self.scale
if self.switcher.get(self.case) == 'nrt_std':
out = self.std.inverse_transform(input_data)
out = np.power(out,4) * self.scale
if self.switcher.get(self.case) == 'tan':
out = (2 * np.pi + self.bias) * np.arctan(input_data)
out = self.norm.inverse_transform(out)
out = self.std.inverse_transform(out)
return out
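# Illustrative usage of data_scaler (not executed here): fit a scaler on the raw data once,
# then reuse the *same* instance so that transform/inverse_transform stay consistent, e.g.
#   sc = data_scaler()
#   y_scaled = sc.fit_transform(y_raw, 'cbrt_std')   # cube-root then standardize
#   y_back   = sc.inverse_transform(y_scaled)        # recover the original units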
def read_h5_data(fileName, input_features, labels):
df = pd.read_hdf(fileName)
# df = df[df['f']<0.45]
# for i in range(5):
# pv_101=df[df['pv']==1]
# pv_101['pv']=pv_101['pv']+0.002*(i+1)
# df = pd.concat([df,pv_101])
input_df=df[input_features]
in_scaler = data_scaler()
input_np = in_scaler.fit_transform(input_df.values,'std2')
label_df=df[labels].clip(0)
# if 'PVs' in labels:
# label_df['PVs']=np.log(label_df['PVs']+1)
out_scaler = data_scaler()
label_np = out_scaler.fit_transform(label_df.values,'cbrt_std')
return input_np, label_np, df, in_scaler, out_scaler
# labels = ['CH4','O2','H2O','CO','CO2','T','PVs','psi','mu','alpha']
# labels = ['T','PVs']
# labels = ['T','CH4','O2','CO2','CO','H2O','H2','OH','psi']
# labels = ['CH2OH','HNCO','CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN']
# labels = np.random.choice(col_labels,20,replace=False).tolist()
# labels.append('PVs')
# labels = col_labels
# labels= ['CH4', 'CH2O', 'CH3O', 'H', 'O2', 'H2', 'O', 'OH', 'H2O', 'HO2', 'H2O2',
# 'C', 'CH', 'CH2', 'CH2(S)', 'CH3', 'CO', 'CO2', 'HCO', 'CH2OH', 'CH3OH',
# 'C2H', 'C2H2', 'C2H3', 'C2H4', 'C2H5', 'C2H6', 'HCCO', 'CH2CO', 'HCCOH',
# 'N', 'NH', 'NH2', 'NH3', 'NNH', 'NO', 'NO2', 'N2O', 'HNO', 'CN', 'HCN',
# 'H2CN', 'HCNN', 'HCNO', 'HNCO', 'NCO', 'N2', 'AR', 'C3H7', 'C3H8', 'CH2CHO', 'CH3CHO', 'T', 'PVs']
# labels.remove('AR')
# labels.remove('N2')
labels = ['H2', 'H', 'O', 'O2', 'OH', 'H2O', 'HO2', 'CH3', 'CH4', 'CO', 'CO2', 'CH2O', 'N2', 'T', 'PVs']
print(labels)
input_features=['f','zeta','pv']
# read in the data
x_input, y_label, df, in_scaler, out_scaler = read_h5_data('../data/tables_of_fgm_psi.h5',input_features=input_features, labels = labels)
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)
predict_val = model.predict(x_test,batch_size=1024*8)
# predict_val = model.predict(x_test,batch_size=1024*8)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data=pd.concat([x_test_df,y_test_df],axis=1)
pred_data=pd.concat([x_test_df,predict_df],axis=1)
!rm sim_check.h5
test_data.to_hdf('sim_check.h5',key='test')
pred_data.to_hdf('sim_check.h5',key='pred')
df_test=pd.read_hdf('sim_check.h5',key='test')
df_pred=pd.read_hdf('sim_check.h5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
res_sum=pd.DataFrame()
r2s=[]
r2s_i=[]
names=[]
maxs_0=[]
maxs_9=[]
for r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):
names.append(name)
r2s.append(r2)
maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())
maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())
for i in zeta_level:
r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],
df_test[df_test['zeta']==i][name]))
res_sum['name']=names
# res_sum['max_0']=maxs_0
# res_sum['max_9']=maxs_9
res_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]
# res_sum['r2']=r2s
tmp=np.asarray(r2s_i).reshape(-1,10)
for idx,z in enumerate(zeta_level):
res_sum['r2s_'+str(z)]=tmp[:,idx]
res_sum[3:]
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)
predict_val = student_model.predict(x_test,batch_size=1024*8)
# predict_val = model.predict(x_test,batch_size=1024*8)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data=pd.concat([x_test_df,y_test_df],axis=1)
pred_data=pd.concat([x_test_df,predict_df],axis=1)
!rm sim_check.h5
test_data.to_hdf('sim_check.h5',key='test')
pred_data.to_hdf('sim_check.h5',key='pred')
df_test=pd.read_hdf('sim_check.h5',key='test')
df_pred=pd.read_hdf('sim_check.h5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
res_sum=pd.DataFrame()
r2s=[]
r2s_i=[]
names=[]
maxs_0=[]
maxs_9=[]
for r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):
names.append(name)
r2s.append(r2)
maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())
maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())
for i in zeta_level:
r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],
df_test[df_test['zeta']==i][name]))
res_sum['name']=names
# res_sum['max_0']=maxs_0
# res_sum['max_9']=maxs_9
res_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]
# res_sum['r2']=r2s
tmp=np.asarray(r2s_i).reshape(-1,10)
for idx,z in enumerate(zeta_level):
res_sum['r2s_'+str(z)]=tmp[:,idx]
res_sum[3:]
#@title import plotly
import plotly.plotly as py
import numpy as np
from plotly.offline import init_notebook_mode, iplot
# from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
import plotly.graph_objs as go
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',
},
});
</script>
'''))
#@title Default title text
# species = np.random.choice(labels)
species = 'HNO' #@param {type:"string"}
z_level = 0 #@param {type:"integer"}
# configure_plotly_browser_state()
# init_notebook_mode(connected=False)
from sklearn.metrics import r2_score
df_t=df_test[df_test['zeta']==zeta_level[z_level]].sample(frac=1)
# df_p=df_pred.loc[df_pred['zeta']==zeta_level[1]].sample(frac=0.1)
df_p=df_pred.loc[df_t.index]
# error=(df_p[species]-df_t[species])
error=(df_p[species]-df_t[species])/(df_p[species]+df_t[species])
r2=round(r2_score(df_p[species],df_t[species]),4)
print(species,'r2:',r2,'max:',df_t[species].max())
fig_db = {
'data': [
{'name':'test data from table',
'x': df_t['f'],
'y': df_t['pv'],
'z': df_t[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
}
},
{'name':'prediction from neural networks',
'x': df_p['f'],
'y': df_p['pv'],
'z': df_p[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
},
{'name':'error in difference',
'x': df_p['f'],
'y': df_p['pv'],
'z': error,
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
}
],
'layout': {
'scene':{
'xaxis': {'title':'mixture fraction'},
'yaxis': {'title':'progress variable'},
'zaxis': {'title': species+'_r2:'+str(r2)}
}
}
}
# iplot(fig_db, filename='multiple-scatter')
iplot(fig_db)
%matplotlib inline
import matplotlib.pyplot as plt
z=0.22
sp='HNO'
plt.plot(df[(df.pv==1)&(df.zeta==z)]['f'],df[(df.pv==0.9)&(df.zeta==z)][sp],'rd')
from keras.models import Model
from keras.layers import Dense, Input, Dropout
n_neuron = 100
# %%
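# The "student" MLP defined below is a smaller network that will be trained to reproduce
# the outputs of the loaded (teacher) model on a dense input grid -- a knowledge-distillation
# setup; see the grid generation and the fitting step further down.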
print('set up student network')
# ANN parameters
dim_input = x_train.shape[1]
dim_label = y_train.shape[1]
batch_norm = False
# This returns a tensor
inputs = Input(shape=(dim_input,),name='input_1')
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(n_neuron, activation='relu')(inputs)
x = Dense(n_neuron, activation='relu')(x)
x = Dense(n_neuron, activation='relu')(x)
# x = Dropout(0.1)(x)
predictions = Dense(dim_label, activation='linear', name='output_1')(x)
student_model = Model(inputs=inputs, outputs=predictions)
student_model.summary()
import keras.backend as K
from keras.callbacks import LearningRateScheduler
import math
def cubic_loss(y_true, y_pred):
return K.mean(K.square(y_true - y_pred)*K.abs(y_true - y_pred), axis=-1)
def coeff_r2(y_true, y_pred):
from keras import backend as K
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
def step_decay(epoch):
initial_lrate = 0.002
drop = 0.5
epochs_drop = 1000.0
lrate = initial_lrate * math.pow(drop,math.floor((1+epoch)/epochs_drop))
return lrate
lrate = LearningRateScheduler(step_decay)
class SGDRScheduler(Callback):
'''Cosine annealing learning rate scheduler with periodic restarts.
# Usage
```python
schedule = SGDRScheduler(min_lr=1e-5,
max_lr=1e-2,
steps_per_epoch=np.ceil(epoch_size/batch_size),
lr_decay=0.9,
cycle_length=5,
mult_factor=1.5)
model.fit(X_train, Y_train, epochs=100, callbacks=[schedule])
```
# Arguments
min_lr: The lower bound of the learning rate range for the experiment.
max_lr: The upper bound of the learning rate range for the experiment.
steps_per_epoch: Number of mini-batches in the dataset. Calculated as `np.ceil(epoch_size/batch_size)`.
lr_decay: Reduce the max_lr after the completion of each cycle.
Ex. To reduce the max_lr by 20% after each cycle, set this value to 0.8.
cycle_length: Initial number of epochs in a cycle.
mult_factor: Scale epochs_to_restart after each full cycle completion.
# References
Blog post: jeremyjordan.me/nn-learning-rate
Original paper: http://arxiv.org/abs/1608.03983
'''
def __init__(self,
min_lr,
max_lr,
steps_per_epoch,
lr_decay=1,
cycle_length=10,
mult_factor=2):
self.min_lr = min_lr
self.max_lr = max_lr
self.lr_decay = lr_decay
self.batch_since_restart = 0
self.next_restart = cycle_length
self.steps_per_epoch = steps_per_epoch
self.cycle_length = cycle_length
self.mult_factor = mult_factor
self.history = {}
def clr(self):
'''Calculate the learning rate.'''
fraction_to_restart = self.batch_since_restart / (self.steps_per_epoch * self.cycle_length)
lr = self.min_lr + 0.5 * (self.max_lr - self.min_lr) * (1 + np.cos(fraction_to_restart * np.pi))
return lr
def on_train_begin(self, logs={}):
'''Initialize the learning rate to the minimum value at the start of training.'''
logs = logs or {}
K.set_value(self.model.optimizer.lr, self.max_lr)
def on_batch_end(self, batch, logs={}):
'''Record previous batch statistics and update the learning rate.'''
logs = logs or {}
self.history.setdefault('lr', []).append(K.get_value(self.model.optimizer.lr))
for k, v in logs.items():
self.history.setdefault(k, []).append(v)
self.batch_since_restart += 1
K.set_value(self.model.optimizer.lr, self.clr())
def on_epoch_end(self, epoch, logs={}):
'''Check for end of current cycle, apply restarts when necessary.'''
if epoch + 1 == self.next_restart:
self.batch_since_restart = 0
self.cycle_length = np.ceil(self.cycle_length * self.mult_factor)
self.next_restart += self.cycle_length
self.max_lr *= self.lr_decay
self.best_weights = self.model.get_weights()
def on_train_end(self, logs={}):
'''Set weights to the values from the end of the most recent cycle for best performance.'''
self.model.set_weights(self.best_weights)
student_model = load_model('student.h5',custom_objects={'coeff_r2':coeff_r2})
model.summary()
gx,gy,gz=np.mgrid[0:1:600j,0:1:10j,0:1:600j]
gx=gx.reshape(-1,1)
gy=gy.reshape(-1,1)
gz=gz.reshape(-1,1)
gm=np.hstack([gx,gy,gz])
gm.shape
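# gm is a dense regular grid over the three inputs (f, zeta, pv); the teacher model labels
# this grid below (y_train_teacher) and the student network is fit on those labels.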
from keras.callbacks import ModelCheckpoint
from keras import optimizers
batch_size = 1024*16
epochs = 2000
vsplit = 0.1
loss_type='mse'
adam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=False)
student_model.compile(loss=loss_type,
# optimizer=adam_op,
optimizer='adam',
metrics=[coeff_r2])
# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])
# checkpoint (save the best model based validate loss)
!mkdir ./tmp
filepath = "./tmp/student_weights.best.cntk.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=20)
epoch_size=x_train.shape[0]
a=0
base=2
clc=2
for i in range(5):
a+=base*clc**(i)
print(a)
epochs,c_len = a,base
schedule = SGDRScheduler(min_lr=1e-5,max_lr=1e-4,
steps_per_epoch=np.ceil(epoch_size/batch_size),
cycle_length=c_len,lr_decay=0.8,mult_factor=2)
callbacks_list = [checkpoint]
# callbacks_list = [checkpoint, schedule]
x_train_teacher = in_scaler.transform(gm)
y_train_teacher = model.predict(x_train_teacher, batch_size=1024*8)
x_train, x_test, y_train, y_test = train_test_split(x_train_teacher,y_train_teacher, test_size=0.01)
# fit the model
history = student_model.fit(
x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit,
verbose=2,
callbacks=callbacks_list,
shuffle=True)
student_model.save('student_100_3.h5')
n_res = 501
pv_level = 0.996
f_1 = np.linspace(0,1,n_res)
z_1 = np.zeros(n_res)
pv_1 = np.ones(n_res)*pv_level
case_1 = np.vstack((f_1,z_1,pv_1))
# case_1 = np.vstack((pv_1,z_1,f_1))
case_1 = case_1.T
case_1.shape
out=out_scaler.inverse_transform(model.predict(case_1))
out=pd.DataFrame(out,columns=labels)
sp='PVs'
out.head()
table_val=df[(df.pv==pv_level) & (df.zeta==0)][sp]
table_val.shape
import matplotlib.pyplot as plt
plt.plot(f_1,table_val)
plt.show()
plt.plot(f_1,out[sp])
plt.show()
df.head()
pv_101=df[(df['pv']==1) & (df['zeta']==0)]
pv_101['pv']=pv_101['pv']+0.01
a=pd.concat([pv_101,pv_101])
pv_101.shape
a.shape
a
```
```
import pandas as pd
from sklearn.decomposition import IncrementalPCA, PCA
from lung_cancer.connection_settings import get_connection_string, TABLE_LABELS, TABLE_FEATURES, TABLE_PCA_FEATURES, IMAGES_FOLDER
from lung_cancer.connection_settings import TABLE_PATIENTS, TABLE_TRAIN_ID, MICROSOFTML_MODEL_NAME, TABLE_PREDICTIONS, FASTTREE_MODEL_NAME, TABLE_CLASSIFIERS
from lung_cancer.lung_cancer_utils import compute_features, train_test_split, average_pool, gather_image_paths, insert_model, create_formula, roc
from revoscalepy import rx_import, RxSqlServerData, rx_data_step, RxInSqlServer, RxLocalSeq, rx_set_compute_context
from microsoftml import rx_fast_trees
from microsoftml import rx_predict as ml_predict
connection_string = get_connection_string()
sql = RxInSqlServer(connection_string=connection_string)
local = RxLocalSeq()
rx_set_compute_context(local)
print("Gathering patients and labels")
query = "SELECT patient_id, label FROM {}".format(TABLE_LABELS)
data_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)
data = rx_import(data_sql)
data["label"] = data["label"].astype(bool)
n_patients = 200 # How many patients do we featurize images for?
data = data.head(n_patients)
print(data.head())
data_to_featurize = gather_image_paths(data, IMAGES_FOLDER)
print(data_to_featurize.head())
featurized_data = compute_features(data_to_featurize, MICROSOFTML_MODEL_NAME, compute_context=sql)
print(featurized_data.head())
pooled_data = average_pool(data, featurized_data)
print(pooled_data)
features_sql = RxSqlServerData(table=TABLE_FEATURES, connection_string=connection_string)
rx_data_step(input_data=pooled_data, output_file=features_sql, overwrite=True)
resample = False
if resample:
print("Performing Train Test Split")
p = 80
train_test_split(TABLE_TRAIN_ID, TABLE_PATIENTS, p, connection_string=connection_string)
n = min(485, n_patients) # 485 features is the most that can be handled right now
#pca = IncrementalPCA(n_components=n, whiten=True, batch_size=100)
pca = PCA(n_components=n, whiten=True)
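# apply_pca below is used as a revoscalepy transform function: the PCA is fit on the
# training patients only (further down) and then applied row-wise to the full features table.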
def apply_pca(dataset, context):
dataset = pd.DataFrame(dataset)
feats = dataset.drop(["label", "patient_id"], axis=1)
feats = pca.transform(feats)
feats = pd.DataFrame(data=feats, index=dataset.index.values, columns=["pc" + str(i) for i in range(feats.shape[1])])
dataset = pd.concat([dataset[["label", "patient_id"]], feats], axis=1)
return dataset
query = "SELECT * FROM {} WHERE patient_id IN (SELECT patient_id FROM {})".format(TABLE_FEATURES, TABLE_TRAIN_ID)
train_data_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)
train_data = rx_import(input_data=train_data_sql)
train_data = train_data.drop(["label", "patient_id"], axis=1)
pca.fit(train_data)
rx_set_compute_context(local)
pca_features_sql = RxSqlServerData(table=TABLE_PCA_FEATURES, connection_string=connection_string)
rx_data_step(input_data=features_sql, output_file=pca_features_sql, overwrite=True, transform_function=apply_pca)
# Point to the SQL table with the training data
column_info = {'label': {'type': 'integer'}}
query = "SELECT * FROM {} WHERE patient_id IN (SELECT patient_id FROM {})".format(TABLE_PCA_FEATURES, TABLE_TRAIN_ID)
print(query)
#train_sql = RxSqlServerData(sql_query=query, connection_string=connection_string, column_info=column_info)
train_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)
formula = create_formula(train_sql)
print("Formula:", formula)
# Fit a classification model
classifier = rx_fast_trees(formula=formula,
data=train_sql,
num_trees=500,
method="binary",
random_seed=5,
compute_context=sql)
print(classifier)
# Serialize the trained fast-trees classifier and insert it into the classifiers table
insert_model(TABLE_CLASSIFIERS, connection_string, classifier, FASTTREE_MODEL_NAME) # TODO: Do table insertions in sql
# Point to the SQL table with the testing data
query = "SELECT * FROM {} WHERE patient_id NOT IN (SELECT patient_id FROM {})".format(TABLE_PCA_FEATURES, TABLE_TRAIN_ID)
print(query)
test_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)#, column_info=column_info
# Make predictions on the test data
predictions = ml_predict(classifier, data=test_sql, extra_vars_to_write=["label", "patient_id"])
print(predictions.head())
predictions_sql = RxSqlServerData(table=TABLE_PREDICTIONS, connection_string=connection_string)
rx_data_step(predictions, predictions_sql, overwrite=True)
# Evaluate model using ROC
roc(predictions["label"], predictions["Probability"])
# Specify patient to make prediction for
PatientIndex = 9
# Select patient data
query = "SELECT TOP(1) * FROM {} AS t1 INNER JOIN {} AS t2 ON t1.patient_id = t2.patient_id WHERE t2.idx = {}".format(TABLE_PCA_FEATURES, TABLE_PATIENTS, PatientIndex)
print(query)
patient_sql = RxSqlServerData(sql_query=query, connection_string=connection_string)
# Make Prediction on a single patient
predictions = ml_predict(classifier, data=patient_sql, extra_vars_to_write=["label", "patient_id"])
print("The probability of cancer for patient {} with patient_id {} is {}%".format(PatientIndex, predictions["patient_id"].iloc[0], predictions["Probability"].iloc[0]*100))
if predictions["label"].iloc[0] == 0:
print("Ground Truth: This patient does not have cancer")
else:
print("Ground Truth: This patient does have cancer")
```
```
import numpy as np
import pandas as pd
import sklearn
import spacy
import re
from nltk.corpus import gutenberg
import nltk
import warnings
warnings.filterwarnings("ignore")
nltk.download('gutenberg')
!python -m spacy download en
```
## 1. Converting words or sentences into numeric vectors is fundamental when working with text data. To make sure you are solid on how these vectors work, please generate the tf-idf vectors for the last three sentences of the example we gave at the beginning of this checkpoint. If you are feeling uncertain, have your mentor walk you through it.
* 4: 1.585, 1, 0, 1, 1.585, 0,0,0,0
* 5: 0,0,0,0,0, .585, 1, 1.585, 1
* 6: 0,0,0,0,0,0, 1, 0, 2
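As a rough sanity check on these numbers, here is a minimal sketch of how a single entry could be computed. It assumes the checkpoint's convention of raw term counts weighted by a base-2 idf (log2 of the number of sentences divided by the number of sentences containing the term); the exact weighting in the original example may differ.
```
import math

def tfidf(tf, n_sentences, n_sentences_with_term):
    # raw term frequency times a base-2 inverse document frequency
    return tf * math.log2(n_sentences / n_sentences_with_term)

# e.g. a term appearing once in a sentence and in 2 of the 6 example sentences:
print(round(tfidf(1, 6, 2), 3))  # 1.585
```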
```
# utility function for standard text cleaning
def text_cleaner(text):
# visual inspection identifies a form of punctuation spaCy does not
# recognize: the double dash '--'. Better get rid of it now!
text = re.sub(r'--',' ',text)
text = re.sub("[\[].*?[\]]", "", text)
text = re.sub(r"(\b|\s+\-?|^\-?)(\d+|\d*\.\d+)\b", " ", text)
text = ' '.join(text.split())
return text
# load and clean the data.
persuasion = gutenberg.raw('austen-persuasion.txt')
alice = gutenberg.raw('carroll-alice.txt')
# the chapter indicator is idiosyncratic
persuasion = re.sub(r'Chapter \d+', '', persuasion)
alice = re.sub(r'CHAPTER .*', '', alice)
alice = text_cleaner(alice)
persuasion = text_cleaner(persuasion)
# parse the cleaned novels. this can take a bit
nlp = spacy.load('en_core_web_sm')
alice_doc = nlp(alice)
persuasion_doc = nlp(persuasion)
# group into sentences
alice_sents = [[sent, "Carroll"] for sent in alice_doc.sents]
persuasion_sents = [[sent, "Austen"] for sent in persuasion_doc.sents]
# combine the sentences from the two novels into one data frame
sentences = pd.DataFrame(alice_sents + persuasion_sents, columns = ["text", "author"])
sentences.head()
# get rid off stop words and punctuation
# and lemmatize the tokens
for i, sentence in enumerate(sentences["text"]):
sentences.loc[i, "text"] = " ".join(
[token.lemma_ for token in sentence if not token.is_punct and not token.is_stop])
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(
max_df=0.5, min_df=2, use_idf=True, norm=u'l2', smooth_idf=True)
# applying the vectorizer
X = vectorizer.fit_transform(sentences["text"])
tfidf_df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())
sentences = pd.concat([tfidf_df, sentences[["text", "author"]]], axis=1)
# keep in mind that the log base 2 of 1 is 0,
# so a tf-idf score of 0 indicates that the word was present once in that sentence.
sentences.head()
sentences.loc[4]
```
## 2. In the 2-grams example above, we only used 2-grams as our features. This time, use both 1-grams and 2-grams together as your feature set. Run the same models in the example and compare the results.
```
# utility function for standard text cleaning
def text_cleaner(text):
# visual inspection identifies a form of punctuation spaCy does not
# recognize: the double dash '--'. Better get rid of it now!
text = re.sub(r'--',' ',text)
text = re.sub("[\[].*?[\]]", "", text)
text = re.sub(r"(\b|\s+\-?|^\-?)(\d+|\d*\.\d+)\b", " ", text)
text = ' '.join(text.split())
return text
# load and clean the data.
persuasion = gutenberg.raw('austen-persuasion.txt')
alice = gutenberg.raw('carroll-alice.txt')
# the chapter indicator is idiosyncratic
persuasion = re.sub(r'Chapter \d+', '', persuasion)
alice = re.sub(r'CHAPTER .*', '', alice)
alice = text_cleaner(alice)
persuasion = text_cleaner(persuasion)
# parse the cleaned novels. this can take a bit
nlp = spacy.load('en_core_web_sm')
alice_doc = nlp(alice)
persuasion_doc = nlp(persuasion)
# group into sentences
alice_sents = [[sent, "Carroll"] for sent in alice_doc.sents]
persuasion_sents = [[sent, "Austen"] for sent in persuasion_doc.sents]
# combine the sentences from the two novels into one data frame
sentences = pd.DataFrame(alice_sents + persuasion_sents, columns = ["text", "author"])
sentences.head()
# get rid off stop words and punctuation
# and lemmatize the tokens
for i, sentence in enumerate(sentences["text"]):
sentences.loc[i, "text"] = " ".join(
[token.lemma_ for token in sentence if not token.is_punct and not token.is_stop])
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(
max_df=0.5, min_df=2, use_idf=True, norm=u'l2', smooth_idf=True, ngram_range=(1,2))
# applying the vectorizer
X = vectorizer.fit_transform(sentences["text"])
tfidf_df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names())
sentences = pd.concat([tfidf_df, sentences[["text", "author"]]], axis=1)
# keep in mind that the log base 2 of 1 is 0,
# so a tf-idf score of 0 indicates that the word was present once in that sentence.
sentences.head()
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
Y = sentences['author']
X = np.array(sentences.drop(['text','author'], 1))
# We split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.4, random_state=123)
# Models
lr = LogisticRegression()
rfc = RandomForestClassifier()
gbc = GradientBoostingClassifier()
lr.fit(X_train, y_train)
rfc.fit(X_train, y_train)
gbc.fit(X_train, y_train)
print("----------------------Logistic Regression Scores----------------------")
print('Training set score:', lr.score(X_train, y_train))
print('\nTest set score:', lr.score(X_test, y_test))
print("----------------------Random Forest Scores----------------------")
print('Training set score:', rfc.score(X_train, y_train))
print('\nTest set score:', rfc.score(X_test, y_test))
print("----------------------Gradient Boosting Scores----------------------")
print('Training set score:', gbc.score(X_train, y_train))
print('\nTest set score:', gbc.score(X_test, y_test))
```
As can be seen above, using 1-grams together with 2-grams improved the performance of all three models.
# Training Collaborative Experts on MSR-VTT
This notebook shows how to download code that trains a Collaborative Experts model with GPT-1 + NetVLAD on the MSR-VTT Dataset.
## Setup
* Download Code and Dependencies
* Import Modules
* Download Language Model Weights
* Download Datasets
* Generate Encodings for Dataset Captions
### Code Downloading and Dependency Downloading
* Specify tensorflow version
* Clone repository from Github
* `cd` into the correct directory
* Install the requirements
```
%tensorflow_version 2.x
!git clone https://github.com/googleinterns/via-content-understanding.git
%cd via-content-understanding/videoretrieval/
!pip install -r requirements.txt
!pip install --upgrade tensorflow_addons
```
### Importing Modules
```
import tensorflow as tf
import languagemodels
import train.encoder_datasets
import train.language_model
import experts
import datasets
import datasets.msrvtt.constants
import os
import models.components
import models.encoder
import helper.precomputed_features
from tensorflow_addons.activations import mish
import tensorflow_addons as tfa
import metrics.loss
```
### Language Model Downloading
* Download GPT-1
```
gpt_model = languagemodels.OpenAIGPTModel()
```
### Dataset downloading
* Download Datasets
* Download Precomputed Features
```
datasets.msrvtt_dataset.download_dataset()
```
Note: the system `curl` is more memory-efficient than the download function in our codebase, so we use `curl` here instead.
```
url = datasets.msrvtt.constants.features_tar_url
path = datasets.msrvtt.constants.features_tar_path
os.system(f"curl {url} > {path}")
helper.precomputed_features.cache_features(
datasets.msrvtt_dataset,
datasets.msrvtt.constants.expert_to_features,
datasets.msrvtt.constants.features_tar_path,)
```
### Embeddings Generation
* Generate Embeddings for MSR-VTT
* **Note: this will take 20-30 minutes on a colab, depending on the GPU**
```
train.language_model.generate_and_cache_contextual_embeddings(
gpt_model, datasets.msrvtt_dataset)
```
## Training
* Build Train Datasets
* Initialize Models
* Compile Encoders
* Fit Model
* Test Model
### Datasets Generation
```
experts_used = [
experts.i3d,
experts.r2p1d,
experts.resnext,
experts.senet,
experts.speech_expert,
experts.ocr_expert,
experts.audio_expert,
experts.densenet,
experts.face_expert]
train_ds, valid_ds, test_ds = (
train.encoder_datasets.generate_encoder_datasets(
gpt_model, datasets.msrvtt_dataset, experts_used))
```
### Model Initialization
```
class MishLayer(tf.keras.layers.Layer):
def call(self, inputs):
return mish(inputs)
mish(tf.Variable([1.0]))
text_encoder = models.components.TextEncoder(
len(experts_used),
num_netvlad_clusters=28,
ghost_clusters=1,
language_model_dimensionality=768,
encoded_expert_dimensionality=512,
residual_cls_token=False,
)
video_encoder = models.components.VideoEncoder(
num_experts=len(experts_used),
experts_use_netvlad=[False, False, False, False, True, True, True, False, False],
experts_netvlad_shape=[None, None, None, None, 19, 43, 8, None, None],
expert_aggregated_size=512,
encoded_expert_dimensionality=512,
g_mlp_layers=3,
h_mlp_layers=0,
make_activation_layer=MishLayer)
encoder = models.encoder.EncoderForFrozenLanguageModel(
video_encoder,
text_encoder,
0.0938,
[1, 5, 10, 50],
20)
```
### Encoder Compilation
```
def build_optimizer(lr=0.001):
learning_rate_scheduler = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=lr,
decay_steps=101,
decay_rate=0.95,
staircase=True)
return tf.keras.optimizers.Adam(learning_rate_scheduler)
encoder.compile(build_optimizer(0.1), metrics.loss.bidirectional_max_margin_ranking_loss)
train_ds_prepared = (train_ds
.shuffle(1000)
.batch(64, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE))
encoder.video_encoder.trainable = True
encoder.text_encoder.trainable = True
```
### Model fitting
```
encoder.fit(
train_ds_prepared,
epochs=100,
)
```
### Tests
```
captions_per_video = 20
num_videos_upper_bound = 100000
ranks = []
for caption_index in range(captions_per_video):
batch = next(iter(test_ds.shard(captions_per_video, caption_index).batch(
num_videos_upper_bound)))
video_embeddings, text_embeddings, mixture_weights = encoder.forward_pass(
batch, training=False)
similarity_matrix = metrics.loss.build_similarity_matrix(
video_embeddings,
text_embeddings,
mixture_weights,
batch[-1])
rankings = metrics.rankings.compute_ranks(similarity_matrix)
ranks += list(rankings.numpy())
def recall_at_k(ranks, k):
return len(list(filter(lambda i: i <= k, ranks))) / len(ranks)
median_rank = sorted(ranks)[len(ranks)//2]
mean_rank = sum(ranks)/len(ranks)
print(f"Median Rank: {median_rank}")
print(f"Mean Rank: {mean_rank}")
for k in [1, 5, 10, 50]:
recall = recall_at_k(ranks, k)
print(f"R@{k}: {recall}")
```
# k-Nearest Neighbor (kNN) exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
The kNN classifier consists of two stages:
- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
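Before working through the cs231n classifier class, here is a minimal, self-contained NumPy sketch of the two stages (memorize the training set, then vote among the k nearest neighbors) on toy data; it is illustrative only and not the assignment implementation.
```
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # L2 distances between every test example and every training example
    dists = np.sqrt(((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2))
    nearest = np.argsort(dists, axis=1)[:, :k]   # indices of the k nearest neighbors
    votes = y_train[nearest]                     # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

X_tr = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y_tr = np.array([0, 0, 1, 1])
print(knn_predict(X_tr, y_tr, np.array([[0.05, 0.05], [1.0, 0.9]]), k=3))  # [0 1]
```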
```
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
```
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
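(For later reference, when you reach the fully vectorized version: one common approach expands the squared distance as ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, so the whole matrix comes from a single matrix multiplication plus broadcasting. A sketch, assuming this standard identity rather than any particular intended solution:)
```
import numpy as np

def compute_distances_vectorized(X_test, X_train):
    # (x - y)^2 = x^2 - 2xy + y^2, broadcast across the Nte x Ntr grid
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # (Nte, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                # (Ntr,)
    cross = X_test.dot(X_train.T)                          # (Nte, Ntr)
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0.0))
```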
```
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
```
**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?
**Your Answer**:
* The distinctly bright rows indicate that they are all far away from all the training set (outlier)
* The distinctly bright columns indicate that they are all far away from all the test set
```
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
```
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should expect to see a slightly better performance than with `k = 1`.
```
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
```
### Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
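The splitting step below hints at `numpy.array_split`, which divides an array into nearly equal chunks; a quick illustration:
```
import numpy as np

chunks = np.array_split(np.arange(10), 3)
print([c.tolist() for c in chunks])   # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```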
```
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
#pass
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
#pass
for k in k_choices:
    inner_accuracies = np.zeros(num_folds)
    for i in range(num_folds):
        # Use every fold except fold i for training and fold i for validation
        X_sub_train = np.concatenate(np.delete(X_train_folds, i, axis=0))
        y_sub_train = np.concatenate(np.delete(y_train_folds, i, axis=0))
        X_sub_test = X_train_folds[i]
        y_sub_test = y_train_folds[i]
        classifier = KNearestNeighbor()
        classifier.train(X_sub_train, y_sub_train)
        dists = classifier.compute_distances_no_loops(X_sub_test)
        pred_y = classifier.predict_labels(dists, k)
        num_correct = np.sum(y_sub_test == pred_y)
        # Accuracy is measured on the held-out fold, not on the test set
        inner_accuracies[i] = float(num_correct) / X_sub_test.shape[0]
    # Store the per-fold accuracies so they can be printed and plotted below
    k_to_accuracies[k] = inner_accuracies
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
```
# default_exp callback.core
#export
from fastai2.data.all import *
from fastai2.optimizer import *
from nbdev.showdoc import *
#export
_all_ = ['CancelFitException', 'CancelEpochException', 'CancelTrainException', 'CancelValidException', 'CancelBatchException']
```
# Callback
> Basic callbacks for Learner
## Callback -
```
#export
_inner_loop = "begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch".split()
#export
class Callback(GetAttr):
"Basic class handling tweaks of the training loop by changing a `Learner` in various events"
_default,learn,run,run_train,run_valid = 'learn',None,True,True,True
def __repr__(self): return type(self).__name__
def __call__(self, event_name):
"Call `self.{event_name}` if it's defined"
_run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
(self.run_valid and not getattr(self, 'training', False)))
if self.run and _run: getattr(self, event_name, noop)()
if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
def __setattr__(self, name, value):
if hasattr(self.learn,name):
warn(f"You are setting an attribute ({name}) that also exists in the learner. Please be advised that you're not setting it in the learner but in the callback. Use `self.learn.{name}` if you would like to change it in the learner.")
super().__setattr__(name, value)
@property
def name(self):
"Name of the `Callback`, camel-cased and with '*Callback*' removed"
return class2attr(self, 'Callback')
```
The training loop is defined in `Learner` a bit below and consists of a minimal set of instructions; looping through the data, we:
- compute the output of the model from the input
- calculate a loss between this output and the desired target
- compute the gradients of this loss with respect to all the model parameters
- update the parameters accordingly
- zero all the gradients
Any tweak of this training loop is defined in a `Callback` to avoid over-complicating the code of the training loop, and to make it easy to mix and match different techniques (since they'll be defined in different callbacks). A callback can implement actions on the following events:
- `begin_fit`: called before doing anything, ideal for initial setup.
- `begin_epoch`: called at the beginning of each epoch, useful for any behavior you need to reset at each epoch.
- `begin_train`: called at the beginning of the training part of an epoch.
- `begin_batch`: called at the beginning of each batch, just after drawing said batch. It can be used to do any setup necessary for the batch (like hyper-parameter scheduling) or to change the input/target before it goes in the model (change of the input with techniques like mixup for instance).
- `after_pred`: called after computing the output of the model on the batch. It can be used to change that output before it's fed to the loss.
- `after_loss`: called after the loss has been computed, but before the backward pass. It can be used to add any penalty to the loss (AR or TAR in RNN training for instance).
- `after_backward`: called after the backward pass, but before the update of the parameters. It can be used to do any change to the gradients before said update (gradient clipping for instance).
- `after_step`: called after the step and before the gradients are zeroed.
- `after_batch`: called at the end of a batch, for any clean-up before the next one.
- `after_train`: called at the end of the training phase of an epoch.
- `begin_validate`: called at the beginning of the validation phase of an epoch, useful for any setup needed specifically for validation.
- `after_validate`: called at the end of the validation part of an epoch.
- `after_epoch`: called at the end of an epoch, for any clean-up before the next one.
- `after_fit`: called at the end of training, for final clean-up.
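As a minimal illustration, a callback only needs to define methods named after the events it cares about. The `PrintEpochCallback` below is a hypothetical example, not part of fastai2; it assumes it has been added to a `Learner`, so `self.epoch` is fetched from the learner via `GetAttr`.
```
class PrintEpochCallback(Callback):
    "Print a short message at the start of every epoch (illustrative only)"
    def begin_epoch(self): print(f"Starting epoch {self.epoch}")
```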
```
show_doc(Callback.__call__)
tst_cb = Callback()
tst_cb.call_me = lambda: print("maybe")
test_stdout(lambda: tst_cb("call_me"), "maybe")
show_doc(Callback.__getattr__)
```
This is a shortcut to avoid having to write `self.learn.bla` for any `bla` attribute we seek, and just write `self.bla`.
```
mk_class('TstLearner', 'a')
class TstCallback(Callback):
def batch_begin(self): print(self.a)
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
test_stdout(lambda: cb('batch_begin'), "1")
```
Note that this shortcut only works for reading the value of an attribute; if you want to change it, you have to access it explicitly with `self.learn.bla`. In the example below, `self.a += 1` creates an `a` attribute of 2 in the callback instead of setting the `a` of the learner to 2. It also issues a warning that something is probably wrong:
```
learn.a
class TstCallback(Callback):
def batch_begin(self): self.a += 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.a, 2)
test_eq(cb.learn.a, 1)
```
A proper version needs to write `self.learn.a = self.a + 1`:
```
class TstCallback(Callback):
def batch_begin(self): self.learn.a = self.a + 1
learn,cb = TstLearner(1),TstCallback()
cb.learn = learn
cb('batch_begin')
test_eq(cb.learn.a, 2)
show_doc(Callback.name, name='Callback.name')
test_eq(TstCallback().name, 'tst')
class ComplicatedNameCallback(Callback): pass
test_eq(ComplicatedNameCallback().name, 'complicated_name')
```
### TrainEvalCallback -
```
#export
class TrainEvalCallback(Callback):
"`Callback` that tracks the number of iterations done and properly sets training/eval mode"
run_valid = False
def begin_fit(self):
"Set the iter and epoch counters to 0, put the model and the right device"
self.learn.train_iter,self.learn.pct_train = 0,0.
self.model.to(self.dls.device)
def after_batch(self):
"Update the iter counter (in training mode)"
self.learn.pct_train += 1./(self.n_iter*self.n_epoch)
self.learn.train_iter += 1
def begin_train(self):
"Set the model in training mode"
self.learn.pct_train=self.epoch/self.n_epoch
self.model.train()
self.learn.training=True
def begin_validate(self):
"Set the model in validation mode"
self.model.eval()
self.learn.training=False
show_doc(TrainEvalCallback, title_level=3)
```
This `Callback` is automatically added in every `Learner` at initialization.
```
#hide
#test of the TrainEvalCallback below in Learner.fit
show_doc(TrainEvalCallback.begin_fit)
show_doc(TrainEvalCallback.after_batch)
show_doc(TrainEvalCallback.begin_train)
show_doc(TrainEvalCallback.begin_validate)
# export
defaults.callbacks = [TrainEvalCallback]
```
### GatherPredsCallback -
```
#export
#TODO: save_targs and save_preds only handle preds/targets that have one tensor, not tuples of tensors.
class GatherPredsCallback(Callback):
"`Callback` that saves the predictions and targets, optionally `with_loss`"
def __init__(self, with_input=False, with_loss=False, save_preds=None, save_targs=None, concat_dim=0):
store_attr(self, "with_input,with_loss,save_preds,save_targs,concat_dim")
def begin_batch(self):
if self.with_input: self.inputs.append((to_detach(self.xb)))
def begin_validate(self):
"Initialize containers"
self.preds,self.targets = [],[]
if self.with_input: self.inputs = []
if self.with_loss: self.losses = []
def after_batch(self):
"Save predictions, targets and potentially losses"
preds,targs = to_detach(self.pred),to_detach(self.yb)
if self.save_preds is None: self.preds.append(preds)
else: (self.save_preds/str(self.iter)).save_array(preds)
if self.save_targs is None: self.targets.append(targs)
else: (self.save_targs/str(self.iter)).save_array(targs[0])
if self.with_loss:
bs = find_bs(self.yb)
loss = self.loss if self.loss.numel() == bs else self.loss.view(bs,-1).mean(1)
self.losses.append(to_detach(loss))
def after_validate(self):
"Concatenate all recorded tensors"
if self.with_input: self.inputs = detuplify(to_concat(self.inputs, dim=self.concat_dim))
if not self.save_preds: self.preds = detuplify(to_concat(self.preds, dim=self.concat_dim))
if not self.save_targs: self.targets = detuplify(to_concat(self.targets, dim=self.concat_dim))
if self.with_loss: self.losses = to_concat(self.losses)
def all_tensors(self):
res = [None if self.save_preds else self.preds, None if self.save_targs else self.targets]
if self.with_input: res = [self.inputs] + res
if self.with_loss: res.append(self.losses)
return res
show_doc(GatherPredsCallback, title_level=3)
show_doc(GatherPredsCallback.begin_validate)
show_doc(GatherPredsCallback.after_batch)
show_doc(GatherPredsCallback.after_validate)
```
## Callbacks control flow
Sometimes we may want to skip some of the steps of the training loop: with gradient accumulation, for instance, we don't always want to do the step/zeroing of the grads. During an LR finder test, we don't want to do the validation phase of an epoch. Or if we're training with an early-stopping strategy, we want to be able to interrupt the training loop completely.
This is made possible by raising specific exceptions the training loop will look for (and properly catch).
```
#export
_ex_docs = dict(
    CancelFitException="Interrupts training and go to `after_fit`",
    CancelEpochException="Skip the rest of this epoch and go to `after_epoch`",
    CancelTrainException="Skip the rest of the training part of the epoch and go to `after_train`",
    CancelValidException="Skip the rest of the validation part of the epoch and go to `after_validate`",
    CancelBatchException="Skip the rest of this batch and go to `after_batch`")
for c,d in _ex_docs.items(): mk_class(c,sup=Exception,doc=d)
show_doc(CancelBatchException, title_level=3)
show_doc(CancelTrainException, title_level=3)
show_doc(CancelValidException, title_level=3)
show_doc(CancelEpochException, title_level=3)
show_doc(CancelFitException, title_level=3)
```
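For example, a callback can interrupt training simply by raising one of these exceptions from any event. The `StopAfterNBatches` class below is only a hypothetical sketch (its `n_batches` argument is not a fastai2 parameter); it relies on `train_iter`, which is maintained on the learner by `TrainEvalCallback` defined earlier in this notebook.
```
class StopAfterNBatches(Callback):
    "Interrupt the whole fit after a fixed number of training batches (illustrative)"
    def __init__(self, n_batches=10): self.n_batches = n_batches
    def after_batch(self):
        # `train_iter` counts batches across the whole fit
        if self.train_iter >= self.n_batches: raise CancelFitException()
```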
You can detect that one of those exceptions has occurred and add code that executes right after it with the following events:
- `after_cancel_batch`: reached immediately after a `CancelBatchException` before proceeding to `after_batch`
- `after_cancel_train`: reached immediately after a `CancelTrainException` before proceeding to `after_epoch`
- `after_cancel_valid`: reached immediately after a `CancelValidException` before proceeding to `after_epoch`
- `after_cancel_epoch`: reached immediately after a `CancelEpochException` before proceeding to `after_epoch`
- `after_cancel_fit`: reached immediately after a `CancelFitException` before proceeding to `after_fit`
```
# export
_events = L.split('begin_fit begin_epoch begin_train begin_batch after_pred after_loss \
after_backward after_step after_cancel_batch after_batch after_cancel_train \
after_train begin_validate after_cancel_validate after_validate after_cancel_epoch \
after_epoch after_cancel_fit after_fit')
mk_class('event', **_events.map_dict(),
doc="All possible events as attributes to get tab-completion and typo-proofing")
# export
_all_ = ['event']
show_doc(event, name='event', title_level=3)
test_eq(event.after_backward, 'after_backward')
```
Here's the full list: *begin_fit begin_epoch begin_train begin_batch after_pred after_loss after_backward after_step after_cancel_batch after_batch after_cancel_train after_train begin_validate after_cancel_validate after_validate after_cancel_epoch after_epoch after_cancel_fit after_fit*.
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
```
# Update sklearn to prevent version mismatches
#!pip install sklearn --upgrade
# install joblib. This will be used to save your model.
# Restart your kernel after installing
#!pip install joblib
import pandas as pd
```
# Read the CSV and Perform Basic Data Cleaning
```
df = pd.read_csv("resources/exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
```
# Select your features (columns)
```
# Set features. This will also be used as your x values.
#selected_features = df[['names', 'of', 'selected', 'features', 'here']]
feature_list = df.columns.to_list()
feature_list.remove("koi_disposition")
removal_list = []
for x in feature_list:
if "err" in x:
removal_list.append(x)
print(removal_list)
selected_features = df[feature_list].drop(columns=removal_list)
selected_features.head()
```
# Create a Train Test Split
Use `koi_disposition` for the y values
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(selected_features, df["koi_disposition"], random_state=13)
X_train.head()
```
# Pre-processing
Scale the data using the MinMaxScaler and perform some feature selection
```
# Scale your data
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
#y_scaler = MinMaxScaler().fit(y_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
#y_train_scaled = y_scaler.transform(y_train)
#y_test_scaled = y_scaler.transform(y_train)
```
# Train the Model
```
from sklearn import tree
decision_tree_model = tree.DecisionTreeClassifier()
decision_tree_model = decision_tree_model.fit(X_train_scaled, y_train)
print(f"Training Data Score: {decision_tree_model.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {decision_tree_model.score(X_test_scaled, y_test)}")
```
# Hyperparameter Tuning
Use `GridSearchCV` to tune the model's parameters
```
decision_tree_model.get_params()
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
# Decision trees have no C or gamma; tune tree-specific hyperparameters instead
param_grid = {'max_depth': [3, 5, 10, None],
              'min_samples_split': [2, 5, 10]}
grid = GridSearchCV(decision_tree_model, param_grid, verbose=3)
# Train the model with GridSearch
grid.fit(X_train_scaled, y_train)
print(grid.best_params_)
print(grid.best_score_)
```
# Save the Model
```
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'your_name.sav'
joblib.dump(your_model, filename)
```
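As a quick sanity check (assuming the `joblib.dump` call above succeeded once the model and filename placeholders were filled in), the saved model can be reloaded and re-scored:
```
# Reload the persisted model and confirm it still scores the held-out data
loaded_model = joblib.load(filename)
print(loaded_model.score(X_test_scaled, y_test))
```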
# Negative Binomial Regression (Students absence example)
## Negative binomial distribution review
I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I've first learned, and the one I like the most, says as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th "success". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required.
$$
Y \sim \text{NB}(k, p)
$$
where $0 \le p \le 1$ is the probability of success in each Bernoulli trial, $k > 0$ is usually an integer, and $y \in \{k, k + 1, \cdots\}$.
The probability mass function (pmf) is
$$
p(y | k, p) = \binom{y - 1}{y - k} (1 - p)^{y - k} p^k
$$
If you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall that we aim to have $k$ successes. And a success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can confidently say that $y \ge k$.
But this is not the only way of defining the negative binomial distribution, there are plenty of options! One of the most interesting, and the one you see in [PyMC3](https://docs.pymc.io/api/distributions/discrete.html#pymc3.distributions.discrete.NegativeBinomial), the library we use in Bambi for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is also a random variable (not a fixed constant!) following a gamma distribution. Or in other words, conditional on a gamma-distributed variable $\mu$, the variable $Y$ has a Poisson distribution with mean $\mu$.
Under this alternative definition, the pmf is
$$
\displaystyle p(y | \mu, \alpha) = \binom{y + \alpha - 1}{y} \left(\frac{\alpha}{\mu + \alpha}\right)^\alpha\left(\frac{\mu}{\mu + \alpha}\right)^y
$$
where $\mu$ is the parameter of the Poisson distribution (the mean, and variance too!) and $\alpha$ is the rate parameter of the gamma.
```
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import nbinom
az.style.use("arviz-darkgrid")
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
```
In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes.
```
y = np.arange(0, 30)
k = 3
p1 = 0.5
p2 = 0.3
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
ax[0].bar(y, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(0, 30, num=11))
ax[0].set_title(f"k = {k}, p = {p1}")
ax[1].bar(y, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(0, 30, num=11))
ax[1].set_title(f"k = {k}, p = {p2}")
fig.suptitle("Y = Number of failures until k successes", fontsize=16);
```
For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.
```
print(nbinom.pmf(y, k, p1)[0])
print(nbinom.pmf(y, k, p1)[3])
```
Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values.
```
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
ax[0].bar(y + k, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(3, 30, num=10))
ax[0].set_title(f"k = {k}, p = {p1}")
ax[1].bar(y + k, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(3, 30, num=10))
ax[1].set_title(f"k = {k}, p = {p2}")
fig.suptitle("Y = Number of trials until k successes", fontsize=16);
```
## Negative binomial in GLM
The negative binomial distribution belongs to the exponential family, and the canonical link function is
$$
g(\mu_i) = \log\left(\frac{\mu_i}{k + \mu_i}\right) = -\log\left(\frac{k}{\mu_i} + 1\right)
$$
but it is difficult to interpret. The log link is usually preferred because of the analogy with Poisson model, and it also tends to give better results.
## Load and explore Students data
This example is based on this [UCLA example](https://stats.idre.ucla.edu/r/dae/negative-binomial-regression/).
School administrators study the attendance behavior of high school juniors at two schools. Predictors of the **number of days of absence** include the **type of program** in which the student is enrolled and a **standardized test in math**. We have attendance data on 314 high school juniors.
The variables of interest in the dataset are
* daysabs: The number of days of absence. It is our response variable.
* prog: The type of program. Can be one of 'General', 'Academic', or 'Vocational'.
* math: Score in a standardized math test.
```
data = pd.read_stata("https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta")
data.head()
```
We assign categories to the values 1, 2, and 3 of our `"prog"` variable.
```
data["prog"] = data["prog"].map({1: "General", 2: "Academic", 3: "Vocational"})
data.head()
```
The Academic program is the most popular program (167/314) and General is the least popular one (40/314)
```
data["prog"].value_counts()
```
Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values.
```
fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex="col")
programs = list(data["prog"].unique())
programs.sort()
for idx, program in enumerate(programs):
# Histogram
ax[idx, 0].hist(data[data["prog"] == program]["math"], edgecolor='black', alpha=0.9)
ax[idx, 0].axvline(data[data["prog"] == program]["math"].mean(), color="C1")
# Barplot
days = data[data["prog"] == program]["daysabs"]
days_mean = days.mean()
days_counts = days.value_counts()
values = list(days_counts.index)
count = days_counts.values
ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9)
ax[idx, 1].axvline(days_mean, color="C1")
# Titles
ax[idx, 0].set_title(program)
ax[idx, 1].set_title(program)
plt.setp(ax[-1, 0], xlabel="Math score")
plt.setp(ax[-1, 1], xlabel="Days of absence");
```
The first impression is that the distribution of math scores is not the same across the programs. It looks right-skewed for students in the Academic program, left-skewed for students in the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly, those in the Vocational program have the highest mean math score.
On the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program have the highest mean number of absences, while the Vocational group misses the fewest classes on average.
## Models
We are interested in measuring the association between the type of the program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program.
In order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. The second model also includes the interaction between these two variables. The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling `math` and comparing how long it took to fit.
We are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think about why we use this likelihood. Earlier, we said that the negative binomial distribution arises when our variable represents the number of trials until we get $k$ successes. However, the number of trials here is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we can think of two alternative views of this problem:
* Each of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something alike. A problem here is that we have the sum of $y$ for a student, but not the $n$.
* The whole school year represents the space where events occur and we count how many absences we see in that space for each student. This gives us a Poisson regression setting (count of an event in a given space or time).
We also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\lambda = n * p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results.
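As a quick numerical check of that approximation (the values of `n` and `p` below are chosen only for illustration):
```
# Binomial(n, p) vs. Poisson(n * p) for large n and small p
from scipy.stats import binom, poisson
n, p = 180, 0.03
print(binom.pmf(5, n, p), poisson.pmf(5, n * p))
```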
But then, why negative binomial? Can't we just use a Poisson likelihood?
Yes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit. Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean, there's more flexibility to handle a given dataset, and consequently the fit tends to be better.
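We can see this in the data itself: a quick check on the `data` frame loaded above compares the sample mean and variance of `daysabs`, and the variance comes out much larger than the mean, i.e. the counts are overdispersed relative to a Poisson.
```
# Mean vs. variance of the response: equality would be expected under a Poisson model
print(data["daysabs"].mean(), data["daysabs"].var())
```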
### Model 1
$$
\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i
$$
### Model 2
$$
\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i
+ \beta_5 \text{General}_i \cdot \text{Math_std}_i + \beta_6 \text{Vocational}_i \cdot \text{Math_std}_i
$$
In both cases we have the following dummy variables
$$\text{Academic}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under Academic program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
$$\text{General}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under General program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
$$\text{Vocational}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under Vocational program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
and $Y$ represents the days of absence.
So, for example, the first model for a student under the Vocational program reduces to
$$
\log{Y_i} = \beta_3 + \beta_4 \text{Math_std}_i
$$
One last thing to note is that we've decided not to include an intercept term, which is why you don't see any $\beta_0$ above. This choice allows us to represent the effect of each program directly with $\beta_1$, $\beta_2$, and $\beta_3$.
## Model fit
It's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The `0` on the right-hand side of `~` simply means we don't want the intercept term that is added by default. `scale(math)` tells Bambi we want to standardize `math` before including it in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here.
### Model 1
```
model_additive = bmb.Model("daysabs ~ 0 + prog + scale(math)", data, family="negativebinomial")
idata_additive = model_additive.fit()
```
### Model 2
For this second model we just add `prog:scale(math)` to indicate the interaction. A shorthand would be to use `y ~ 0 + prog*scale(math)`, which uses the **full interaction** operator. In other words, it just means we want to include the interaction between `prog` and `scale(math)` as well as their main effects.
```
model_interaction = bmb.Model("daysabs ~ 0 + prog + scale(math) + prog:scale(math)", data, family="negativebinomial")
idata_interaction = model_interaction.fit()
```
## Explore models
The first thing we do is call `az.summary()`, passing the `InferenceData` object that `.fit()` returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics.
```
az.summary(idata_additive)
az.summary(idata_interaction)
```
The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with `plot_forest()`. There we simply pass a list containing the `InferenceData` objects of the models we want to compare.
```
az.plot_forest(
[idata_additive, idata_interaction],
model_names=["Additive", "Interaction"],
var_names=["prog", "scale(math)"],
combined=True,
figsize=(8, 4)
);
```
One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of `scale(math)` is slightly lower in the model that considers the interaction, but the difference is not significant.
We can also make conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes.
In addition, the marginal posterior for `math` shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs).
```
az.plot_forest(idata_interaction, var_names=["prog:scale(math)"], combined=True, figsize=(8, 4))
plt.axvline(0);
```
## Plot predicted mean response
We finish this example showing how we can get predictions for new data and plot the mean response for each program together with confidence intervals.
```
math_score = np.arange(1, 100)
# This function takes a model and an InferenceData object.
# It returns a list of length 3 with predictions for each type of program.
def predict(model, idata):
predictions = []
for program in programs:
new_data = pd.DataFrame({"math": math_score, "prog": [program] * len(math_score)})
new_idata = model.predict(
idata,
data=new_data,
inplace=False
)
prediction = new_idata.posterior.stack(sample=["chain", "draw"])["daysabs_mean"].values
predictions.append(prediction)
return predictions
prediction_additive = predict(model_additive, idata_additive)
prediction_interaction = predict(model_interaction, idata_interaction)
mu_additive = [prediction.mean(1) for prediction in prediction_additive]
mu_interaction = [prediction.mean(1) for prediction in prediction_interaction]
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4))
for idx, program in enumerate(programs):
ax[0].plot(math_score, mu_additive[idx], label=f"{program}", color=f"C{idx}", lw=2)
az.plot_hdi(math_score, prediction_additive[idx].T, color=f"C{idx}", ax=ax[0])
ax[1].plot(math_score, mu_interaction[idx], label=f"{program}", color=f"C{idx}", lw=2)
az.plot_hdi(math_score, prediction_interaction[idx].T, color=f"C{idx}", ax=ax[1])
ax[0].set_title("Additive");
ax[1].set_title("Interaction");
ax[0].set_xlabel("Math score")
ax[1].set_xlabel("Math score")
ax[0].set_ylim(0, 25)
ax[0].legend(loc="upper right");
```
As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0.
If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use `az.compare()` to compare the fit of the two models. What do you expect before seeing the plot? Why? Is there anything else you could do to improve the fit of the model?
Also, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace `family="negativebinomial"` with `family="poisson"` and then you're ready to compare results!
```
%load_ext watermark
%watermark -n -u -v -iv -w
```
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for regions selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
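For instance, `cv2.inRange()` can be used for a crude white-color selection; the thresholds and variable names below are purely illustrative.
```
# Keep only near-white pixels (illustrative thresholds for an RGB image)
white_mask = cv2.inRange(image, np.array([200, 200, 200]), np.array([255, 255, 255]))
white_only = cv2.bitwise_and(image, image, mask=white_mask)
plt.imshow(white_only)
```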
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
list_img = os.listdir("test_images/")
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
def lane_finding(image):
# 1. convert image to grayscale
gray = grayscale(image)
cv2.imwrite('test_images_output/gray.jpg',gray)
# 2. Gaussian smoothing of gray image
kernel_size = 5
gray_blur = gaussian_blur(gray,kernel_size)
cv2.imwrite('test_images_output/gray_blur.jpg',gray_blur)
# 3. canny edge detection
low_threshold = 50
high_threshold = 110
edges = canny(gray_blur, low_threshold,high_threshold)
cv2.imwrite('test_images_output/edges.jpg',edges)
# 4. region selection (masking)
imshape = image.shape
lb = [0,imshape[0]]
rb = [imshape[1],imshape[0]]
lu = [400, 350]
ru = [600, 350]
#vertices = np.array([[(0,imshape[0]),(400, 350), (600, 350), (imshape[1],imshape[0])]], dtype=np.int32)
vertices = np.array([[lb,lu, ru, rb]], dtype=np.int32)
plt.imshow(image)
x = [lb[0], rb[0], ru[0], lu[0],lb[0]]
y = [lb[1], rb[1], ru[1], lu[1],lb[1]]
plt.plot(x, y, 'b--', lw=2)
plt.savefig('test_images_output/region_interest.jpg')
masked_edges = region_of_interest(edges, vertices)
# 5. Hough transform for lane lines
rho = 1
theta = np.pi/180
threshold = 10
min_line_len = 50
max_line_gap = 100
line_image = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
# 6. show lanes in original image
lane_image = weighted_img(line_image, image, α=0.8, β=1., γ=0.)
plt.imshow(lane_image)
return lane_image
#lane_image = lane_finding(image)
#plt.imshow(lane_image)
# output_dir = "test_images_output/"
# for img in list_img:
# image = mpimg.imread('test_images/'+img)
# lane_image = lane_finding(image)
# img_name = output_dir + img
# status = cv2.imwrite(img_name, cv2.cvtColor(lane_image, cv2.COLOR_RGB2BGR))
# caution:
# 1. destination folder must exist, or image cannot be saved!
# 2. cv2.imwrite changes RGB channels, which need to be converted, or the saved image has different colors
# print("Image written to file-system : ",status)
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = lane_finding(image)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
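One possible approach is sketched below. It is not the project solution; the hypothetical `draw_lane_lines` would be called in place of `draw_lines` inside `hough_lines`. It separates segments by slope sign, fits one line per side, and extrapolates between the bottom of the image and the top of the region of interest used above.
```
def draw_lane_lines(img, lines, color=[255, 0, 0], thickness=10):
    """Average/extrapolate Hough segments into a single left and right lane line."""
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x1 == x2:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            (left if slope < 0 else right).append((x1, y1, x2, y2))
    y_bottom, y_top = img.shape[0], 350  # 350 matches the region of interest above
    for segments in (left, right):
        if not segments:
            continue
        xs = [x for x1, y1, x2, y2 in segments for x in (x1, x2)]
        ys = [y for x1, y1, x2, y2 in segments for y in (y1, y2)]
        fit = np.polyfit(ys, xs, 1)  # fit x = f(y) so we can extrapolate to fixed y values
        x_bottom, x_top = int(np.polyval(fit, y_bottom)), int(np.polyval(fit, y_top))
        cv2.line(img, (x_bottom, y_bottom), (x_top, y_top), color, thickness)
```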
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
# In-Class Coding Lab: Data Analysis with Pandas
In this lab, we will perform a data analysis on the **RMS Titanic** passenger list. The RMS Titanic is one of the most famous ocean liners in history. On April 15, 1912 it sank after colliding with an iceberg in the North Atlantic Ocean. To learn more, read here: https://en.wikipedia.org/wiki/RMS_Titanic
Our goal today is to perform a data analysis on a subset of the passenger list. We're looking for insights as to which types of passengers did and didn't survive. Women? Children? 1st Class Passengers? 3rd class? Etc.
I'm sure you've heard the expression often said during emergencies: "Women and Children first" Let's explore this data set and find out if that's true!
Before we begin you should read up on what each of the columns mean in the data dictionary. You can find this information on this page: https://www.kaggle.com/c/titanic/data
## Loading the data set
First we load the dataset into a Pandas `DataFrame` variable. The `sample(10)` method takes a random sample of 10 passengers from the data set.
```
import pandas as pd
import numpy as np
# this turns off warning messages
import warnings
warnings.filterwarnings('ignore')
passengers = pd.read_csv('CCL-titanic.csv')
passengers.sample(10)
```
## How many survived?
One of the first things we should do is figure out how many of the passengers in this data set survived. Let's start with isolating just the `'Survived'` column into a series:
```
passengers['Survived'].sample(10)
```
There's too many to display so we just display a random sample of 10 passengers.
- 1 means the passenger survived
- 0 means the passenger died
What we really want is to count the number of survivors and deaths. We do this by querying the `value_counts()` of the `['Survived']` column, which returns a `Series` of counts, like this:
```
passengers['Survived'].value_counts()
```
Only 342 passengers survived, and 549 perished. Let's observe this same data as percentages of the whole. We do this by adding the `normalize=True` named argument to the `value_counts()` method.
```
passengers['Survived'].value_counts(normalize=True)
```
**Just 38% of passengers in this dataset survived.**
### Now you Try it!
**FIRST** Write a Pandas expression to display counts of males and female passengers using the `Sex` variable:
```
passengers['Sex'].value_counts()
```
**NEXT** Write a Pandas expression to display male /female passenger counts as a percentage of the whole number of passengers in the data set.
```
passengers['Sex'].value_counts(normalize=True)
```
If you got things working, you now know that **35% of passengers were female**.
## Who survives? Men or Women?
We now know that 35% of the passengers were female, and 65% were male.
**The next thing to think about is how survival rates affect these numbers.**
If the ratio is about the same for survivors only, then we can conclude that your **Sex** did not play a role in your survival on the RMS Titanic.
Let's find out.
```
survivors = passengers[passengers['Survived'] ==1]
survivors['PassengerId'].count()
```
Still **342** like we discovered originally. Now let's check the **Sex** split among survivors only:
```
survivors['Sex'].value_counts()
```
WOW! That is a huge difference! But you probably can't see it easily. Let's represent it in a `DataFrame`, so that it's easier to visualize:
```
sex_all_series = passengers['Sex'].value_counts()
sex_survivor_series = survivors['Sex'].value_counts()
sex_comparision_df = pd.DataFrame({ 'AllPassengers' : sex_all_series, 'Survivors' : sex_survivor_series })
sex_comparision_df['SexSurvivalRate'] = sex_comparision_df['Survivors'] / sex_comparision_df['AllPassengers']
sex_comparision_df
```
**So, females had a 74% survival rate. Much better than the overall rate of 38%**
We should probably briefly explain the code above.
- The first two lines get a series count of all passengers by Sex (male / female) and count of survivors by sex
- The third line creates a DataFrame. Recall a pandas DataFrame is just a dict of Series. We have two keys: 'AllPassengers' and 'Survivors'
- The fourth line creates a new column in the dataframe which is just the survivors / all passengers to get the rate of survival for that Sex.
## Feature Engineering: Adults and Children
Sometimes the variable we want to analyze is not readily available, but can be created from existing data. This is commonly referred to as **feature engineering**. The name comes from machine learning where we use data called *features* to predict an outcome.
Let's create a new feature called `'AgeCat'` as follows:
- When **Age** <=18 then 'Child'
- When **Age** >18 then 'Adult'
This is easy to do in pandas. First we create the column and set all values to `np.nan` which means 'Not a number'. This is Pandas way of saying no value. Then we set the values based on the rules we set for the feature.
```
passengers['AgeCat'] = np.nan # Not a number
passengers['AgeCat'][ passengers['Age'] <=18 ] = 'Child'
passengers['AgeCat'][ passengers['Age'] > 18 ] = 'Adult'
passengers.sample(5)
```
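As an aside, the chained indexing above (`passengers['AgeCat'][...] = ...`) is what pandas usually flags with a `SettingWithCopyWarning` (warnings are silenced at the top of this notebook). An equivalent, more robust version of the same assignments uses `.loc`:
```
# Same feature, written with .loc to avoid chained assignment
passengers.loc[passengers['Age'] <= 18, 'AgeCat'] = 'Child'
passengers.loc[passengers['Age'] > 18, 'AgeCat'] = 'Adult'
```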
Let's get the count and distributions of Adults and Children on the passenger list.
```
passengers['AgeCat'].value_counts()
```
And here's the percentage as a whole:
```
passengers['AgeCat'].value_counts(normalize=True)
```
So close to **80%** of the passengers were adults. Once again let's look at the ratio of `AgeCat` for survivors only. If your age has no bearing on survival, then the rates should be the same.
Here's the counts of Adult / Children among the survivors only:
```
survivors = passengers[passengers['Survived'] ==1]
survivors['AgeCat'].value_counts()
```
### Now You Try it!
Calculate the `AgeCat` survival rate, similar to how we did for the `SexSurvivalRate`.
```
agecat_all_series = passengers['AgeCat'].value_counts()
agecat_survivor_series = survivors['AgeCat'].value_counts()
# todo make a data frame, add AgeCatSurvivialRate column, display dataframe
agecat_comparision_df = pd.DataFrame({ 'AllPassengers' : agecat_all_series, 'Survivors' : agecat_survivor_series })
agecat_comparision_df['AgeCatSurvivalRate'] = agecat_comparision_df['Survivors'] / agecat_comparision_df['AllPassengers']
agecat_comparision_df
```
**So, children had a 50% survival rate, better than the overall rate of 38%**
## So, women and children first?
It looks like the RMS Titanic really did follow the motto: "Women and Children First."
Here's our insights. We know:
- If you were a passenger, you had a 38% chance of survival.
- If you were a female passenger, you had a 74% chance of survival.
- If you were a child passenger, you had a 50% chance of survival.
### Now you try it for Passenger Class
Repeat this process for `Pclass`, the passenger class variable. Display the survival rates for each passenger class. What does this information tell you about passenger class and survival rates?
I'll give you a hint... "Money Talks"
```
# todo: repeat the analysis in the previous cell for Pclass
pclass_all_series = passengers['Pclass'].value_counts()
pclass_survivor_series = survivors['Pclass'].value_counts()
pclass_comparision_df = pd.DataFrame({ 'AllPassengers' : pclass_all_series, 'Survivors' : pclass_survivor_series })
pclass_comparision_df['PclassSurvivialRate'] = pclass_comparision_df['Survivors'] / pclass_comparision_df['AllPassengers']
pclass_comparision_df
```
```
import numpy as np
from scipy.spatial import Delaunay
import networkx as nx
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pandas
import os
import graphsonchip.graphmaker
from graphsonchip.graphmaker import make_spheroids
from graphsonchip.graphmaker import graph_generation_func
from graphsonchip.graphplotter import graph_plot
```
## Generate small plot
```
cells = make_spheroids.generate_artificial_spheroid(10)['cells']
spheroid = {}
spheroid['cells'] = cells
G = graph_generation_func.generate_voronoi_graph(spheroid, dCells = 0.6)
for ind in G.nodes():
if ind % 2 == 0:
G.add_node(ind, color = 'r')
else:
G.add_node(ind, color = 'b')
graph_plot.network_plot_3D(G)
#plt.savefig('example_code.pdf')
path = r'/Users/gustaveronteix/Documents/Projets/Projets Code/3D-Segmentation-Sebastien/data'
spheroid_data = pandas.read_csv(os.path.join(path, 'spheroid_table_3.csv'))
mapper = {"centroid-0": "z", "centroid-1": "x", "centroid-2": "y"}
spheroid_data = spheroid_data.rename(columns = mapper)
# NB: `pr` is not imported above; it is assumed to be the preprocessing module that
# provides single_spheroid_process.
spheroid = pr.single_spheroid_process(spheroid_data)
G = graph_generation_func.generate_voronoi_graph(spheroid, zRatio = 1, dCells = 20)
for ind in G.nodes():
G.add_node(ind, color ='g')
pos =nx.get_node_attributes(G,'pos')
graph_plot.network_plot_3D(G, 5)
#plt.savefig('Example_image.pdf')
path = r'/Volumes/Multicell/Sebastien/Gustave_Jeremie/spheroid_sample_Francoise.csv'
spheroid_data = pandas.read_csv(path)
spheroid = pr.single_spheroid_process(spheroid_data)
G = graph_generation_func.generate_voronoi_graph(spheroid, zRatio = 2, dCells = 35)
for ind in G.nodes():
G.add_node(ind, color = 'r')
pos =nx.get_node_attributes(G,'pos')
graph_plot.network_plot_3D(G, 20)
plt.savefig('/Volumes/Multicell/Sebastien/Gustave_Jeremie/spheroid_sample_Francoise.pdf', transparent=True)
```
## Batch analyze the data
```
spheroid_path = './utility/spheroid_sample_1.csv'
spheroid_data = pandas.read_csv(spheroid_path)
spheroid = pr.single_spheroid_process(spheroid_data[spheroid_data['area'] > 200])
G = graph_generation_func.generate_voronoi_graph(spheroid, zRatio = 2, dCells = 35)
import glob
import collections
from collections import defaultdict
degree_frame_Vor = pandas.DataFrame()
i = 0
degree_frame_Geo = pandas.DataFrame()
j = 0
deg_Vor = []
deg_Geo = []
for fname in glob.glob('./utility/*.csv'):
spheroid_data = pandas.read_csv(fname)
spheroid_data['x'] *= 1.25
spheroid_data['y'] *= 1.25
spheroid_data['z'] *= 1.25
spheroid_data = spheroid_data[spheroid_data['area']>200]
    spheroid = pr.single_spheroid_process(spheroid_data)
    G = graph_generation_func.generate_voronoi_graph(spheroid, zRatio = 1, dCells = 55)
degree_sequence = sorted([d for n, d in G.degree()], reverse=True)
degreeCount = collections.Counter(degree_sequence)
for key in degreeCount.keys():
N_tot = 0
for k in degreeCount.keys():
N_tot += degreeCount[k]
degree_frame_Vor.loc[i, 'degree'] = key
degree_frame_Vor.loc[i, 'p'] = degreeCount[key]/N_tot
degree_frame_Vor.loc[i, 'fname'] = fname
i += 1
deg_Vor += list(degree_sequence)
    G = graph_generation_func.generate_geometric_graph(spheroid, zRatio = 1, dCells = 26)
degree_sequence = sorted([d for n, d in G.degree()], reverse=True)
degreeCount = collections.Counter(degree_sequence)
for key in degreeCount.keys():
N_tot = 0
for k in degreeCount.keys():
N_tot += degreeCount[k]
degree_frame_Geo.loc[j, 'degree'] = key
degree_frame_Geo.loc[j, 'p'] = degreeCount[key]/N_tot
degree_frame_Geo.loc[j, 'fname'] = fname
j += 1
deg_Geo.append(degreeCount[key])
indx = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).index
mean = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).values
std = degree_frame_Vor.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).std(axis = 1).values
indx_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).index
mean_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).mean(axis = 1).values
std_geo = degree_frame_Geo.pivot(index = 'degree', columns = 'fname', values = 'p').fillna(0).std(axis = 1).values
import seaborn as sns
sns.set_style('white')
plt.errorbar(indx+0.3, mean, yerr=std,
marker = 's', linestyle = ' ', color = 'b',
label = 'Voronoi')
plt.errorbar(indx_geo-0.3, mean_geo, yerr=std_geo,
marker = 'o', linestyle = ' ', color = 'r',
label = 'Geometric')
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.special import factorial
from scipy.stats import poisson
# the bins should be of integer width, because poisson is an integer distribution
bins = np.arange(25)-0.5
entries, bin_edges, patches = plt.hist(deg_Vor, bins=bins, density=True, label='Data')
# calculate bin centres
bin_middles = 0.5 * (bin_edges[1:] + bin_edges[:-1])
def fit_function(k, lamb):
'''poisson function, parameter lamb is the fit parameter'''
return poisson.pmf(k, lamb)
# fit with curve_fit
parameters, cov_matrix = curve_fit(fit_function, bin_middles, entries)
# plot poisson-deviation with fitted parameter
x_plot = np.arange(0, 25)
plt.plot(
x_plot,
fit_function(x_plot, *parameters),
marker='o', linestyle='',
label='Fit result',
)
plt.legend()
plt.show()
parameters
```
# Model Explainer Example

In this example we will:
* [Describe the project structure](#Project-Structure)
* [Train some models](#Train-Models)
* [Create Tempo artifacts](#Create-Tempo-Artifacts)
* [Run unit tests](#Unit-Tests)
* [Save python environment for our classifier](#Save-Classifier-Environment)
* [Test Locally on Docker](#Test-Locally-on-Docker)
* [Production on Kubernetes via Tempo](#Production-Option-1-(Deploy-to-Kubernetes-with-Tempo))
* [Production on Kubernetes via GitOps](#Production-Option-2-(Gitops))
## Prerequisites
This notebook needs to be run in the `tempo-examples` conda environment defined below. Create it from the project root folder:
```bash
conda env create --name tempo-examples --file conda/tempo-examples.yaml
```
## Project Structure
```
!tree -P "*.py" -I "__init__.py|__pycache__" -L 2
```
## Train Models
* This section is where you, as a data scientist, do your work of training models and creating artifacts.
* For this example we train an sklearn classification model on the adult income dataset, along with a model explainer.
```
import os
import logging
import numpy as np
import json
import tempo
from tempo.utils import logger
from src.constants import ARTIFACTS_FOLDER
logger.setLevel(logging.ERROR)
logging.basicConfig(level=logging.ERROR)
from src.data import AdultData
data = AdultData()
from src.model import train_model
adult_model = train_model(ARTIFACTS_FOLDER, data)
from src.explainer import train_explainer
train_explainer(ARTIFACTS_FOLDER, data, adult_model)
```
## Create Tempo Artifacts
```
from src.tempo import create_explainer, create_adult_model
sklearn_model = create_adult_model()
Explainer = create_explainer(sklearn_model)
explainer = Explainer()
# %load src/tempo.py
import os
import dill
import numpy as np
from alibi.utils.wrappers import ArgmaxTransformer
from src.constants import ARTIFACTS_FOLDER, EXPLAINER_FOLDER, MODEL_FOLDER
from tempo.serve.metadata import ModelFramework
from tempo.serve.model import Model
from tempo.serve.pipeline import PipelineModels
from tempo.serve.utils import pipeline, predictmethod
def create_adult_model() -> Model:
sklearn_model = Model(
name="income-sklearn",
platform=ModelFramework.SKLearn,
local_folder=os.path.join(ARTIFACTS_FOLDER, MODEL_FOLDER),
uri="gs://seldon-models/test/income/model",
)
return sklearn_model
def create_explainer(model: Model):
@pipeline(
name="income-explainer",
uri="s3://tempo/explainer/pipeline",
local_folder=os.path.join(ARTIFACTS_FOLDER, EXPLAINER_FOLDER),
models=PipelineModels(sklearn=model),
)
class ExplainerPipeline(object):
def __init__(self):
pipeline = self.get_tempo()
models_folder = pipeline.details.local_folder
explainer_path = os.path.join(models_folder, "explainer.dill")
with open(explainer_path, "rb") as f:
self.explainer = dill.load(f)
def update_predict_fn(self, x):
if np.argmax(self.models.sklearn(x).shape) == 0:
self.explainer.predictor = self.models.sklearn
self.explainer.samplers[0].predictor = self.models.sklearn
else:
self.explainer.predictor = ArgmaxTransformer(self.models.sklearn)
self.explainer.samplers[0].predictor = ArgmaxTransformer(self.models.sklearn)
@predictmethod
def explain(self, payload: np.ndarray, parameters: dict) -> str:
print("Explain called with ", parameters)
self.update_predict_fn(payload)
explanation = self.explainer.explain(payload, **parameters)
return explanation.to_json()
# explainer = ExplainerPipeline()
# return sklearn_model, explainer
return ExplainerPipeline
```
## Save Explainer
```
!ls artifacts/explainer/conda.yaml
tempo.save(Explainer)
```
## Test Locally on Docker
Here we test our models using production images but running locally on Docker. This allows us to ensure the final production deployed model will behave as expected when deployed.
```
from tempo import deploy_local
remote_model = deploy_local(explainer)
r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={"threshold":0.90}))
print(r["data"]["anchor"])
r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={"threshold":0.99}))
print(r["data"]["anchor"])
remote_model.undeploy()
```
## Production Option 1 (Deploy to Kubernetes with Tempo)
* Here we illustrate how to run the final models in "production" on Kubernetes by using Tempo to deploy
### Prerequisites
Create a Kind Kubernetes cluster with Minio and Seldon Core installed using Ansible as described [here](https://tempo.readthedocs.io/en/latest/overview/quickstart.html#kubernetes-cluster-with-seldon-core).
```
!kubectl apply -f k8s/rbac -n production
from tempo.examples.minio import create_minio_rclone
import os
create_minio_rclone(os.getcwd()+"/rclone-minio.conf")
tempo.upload(sklearn_model)
tempo.upload(explainer)
from tempo.serve.metadata import SeldonCoreOptions
runtime_options = SeldonCoreOptions(**{
"remote_options": {
"namespace": "production",
"authSecretName": "minio-secret"
}
})
from tempo import deploy_remote
remote_model = deploy_remote(explainer, options=runtime_options)
r = json.loads(remote_model.predict(payload=data.X_test[0:1], parameters={"threshold":0.95}))
print(r["data"]["anchor"])
remote_model.undeploy()
```
## Production Option 2 (Gitops)
* We create yaml to provide to our DevOps team to deploy to a production cluster
* We add Kustomize patches to modify the base Kubernetes yaml created by Tempo
```
from tempo import manifest
from tempo.serve.metadata import SeldonCoreOptions
runtime_options = SeldonCoreOptions(**{
"remote_options": {
"namespace": "production",
"authSecretName": "minio-secret"
}
})
yaml_str = manifest(explainer, options=runtime_options)
with open(os.getcwd()+"/k8s/tempo.yaml","w") as f:
f.write(yaml_str)
!kustomize build k8s
```
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
# Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
# This notebook is still under construction! Please come back later.
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using TF 2.0 APIs. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"*
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
```
import collections
import io
import itertools
import os
import random
import re
import time
import unicodedata
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
import matplotlib.pyplot as plt
print(tf.__version__)
```
## Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Clean the sentences by removing special characters.
1. Add a *start* and *end* token to each sentence.
1. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
1. Pad each sentence to a maximum length.
```
# TODO(brianklee): This preprocessing should ideally be implemented in TF
# because preprocessing should be exported as part of the SavedModel.
# Converts the unicode file to ascii
# https://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
START_TOKEN = u'<start>'
END_TOKEN = u'<end>'
def preprocess_sentence(w):
# remove accents; lowercase everything
w = unicode_to_ascii(w.strip()).lower()
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# https://stackoverflow.com/a/3645931/3645946
w = re.sub(r'([?.!,¿])', r' \1 ', w)
# replacing everything with space except (a-z, '.', '?', '!', ',')
w = re.sub(r'[^a-z?.!,¿]+', ' ', w)
# adding a start and an end token to the sentence
# so that the model know when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence))
```
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset (of course, translation quality degrades with less data).
```
def load_anki_data(num_examples=None):
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip) + '/spa-eng/spa.txt'
with io.open(path_to_file, 'rb') as f:
lines = f.read().decode('utf8').strip().split('\n')
# Data comes as tab-separated strings; one per line.
eng_spa_pairs = [[preprocess_sentence(w) for w in line.split('\t')] for line in lines]
# The translations file is ordered from shortest to longest, so slicing from
# the front will select the shorter examples. This also speeds up training.
if num_examples is not None:
eng_spa_pairs = eng_spa_pairs[:num_examples]
eng_sentences, spa_sentences = zip(*eng_spa_pairs)
eng_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
spa_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
eng_tokenizer.fit_on_texts(eng_sentences)
spa_tokenizer.fit_on_texts(spa_sentences)
return (eng_spa_pairs, eng_tokenizer, spa_tokenizer)
NUM_EXAMPLES = 30000
sentence_pairs, english_tokenizer, spanish_tokenizer = load_anki_data(NUM_EXAMPLES)
# Turn our english/spanish pairs into TF Datasets by mapping words -> integers.
def make_dataset(eng_spa_pairs, eng_tokenizer, spa_tokenizer):
eng_sentences, spa_sentences = zip(*eng_spa_pairs)
eng_ints = eng_tokenizer.texts_to_sequences(eng_sentences)
spa_ints = spa_tokenizer.texts_to_sequences(spa_sentences)
padded_eng_ints = tf.keras.preprocessing.sequence.pad_sequences(
eng_ints, padding='post')
padded_spa_ints = tf.keras.preprocessing.sequence.pad_sequences(
spa_ints, padding='post')
dataset = tf.data.Dataset.from_tensor_slices((padded_eng_ints, padded_spa_ints))
return dataset
# Train/test split
train_size = int(len(sentence_pairs) * 0.8)
random.shuffle(sentence_pairs)
train_sentence_pairs, test_sentence_pairs = sentence_pairs[:train_size], sentence_pairs[train_size:]
# Show length
len(train_sentence_pairs), len(test_sentence_pairs)
_english, _spanish = train_sentence_pairs[0]
_eng_ints, _spa_ints = english_tokenizer.texts_to_sequences([_english])[0], spanish_tokenizer.texts_to_sequences([_spanish])[0]
print("Source language: ")
print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_eng_ints, _english.split())))
print("Target language: ")
print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_spa_ints, _spanish.split())))
# Set up datasets
BATCH_SIZE = 64
train_ds = make_dataset(train_sentence_pairs, english_tokenizer, spanish_tokenizer)
test_ds = make_dataset(test_sentence_pairs, english_tokenizer, spanish_tokenizer)
train_ds = train_ds.shuffle(len(train_sentence_pairs)).batch(BATCH_SIZE, drop_remainder=True)
test_ds = test_ds.batch(BATCH_SIZE, drop_remainder=True)
print("Dataset outputs elements with shape ({}, {})".format(
*train_ds.output_shapes))
```
## Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
```
ENCODER_SIZE = DECODER_SIZE = 1024
EMBEDDING_DIM = 256
MAX_OUTPUT_LENGTH = train_ds.output_shapes[1][1]
def gru(units):
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, encoder_size):
super(Encoder, self).__init__()
self.embedding_dim = embedding_dim
self.encoder_size = encoder_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(encoder_size)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state=hidden)
return output, state
def initial_hidden_state(self, batch_size):
return tf.zeros((batch_size, self.encoder_size))
```
For the decoder, we're using *Bahdanau attention*. Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
Let's decide on notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, hidden_size)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
```
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, hidden_state, enc_output):
# enc_output shape = (batch_size, max_length, hidden_size)
# (batch_size, hidden_size) -> (batch_size, 1, hidden_size)
hidden_with_time = tf.expand_dims(hidden_state, 1)
# score shape == (batch_size, max_length, 1)
score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum = (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, decoder_size):
super(Decoder, self).__init__()
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.decoder_size = decoder_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(decoder_size)
self.fc = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(decoder_size)
def call(self, x, hidden, enc_output):
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
```
## Define a translate function
Now, let's put the encoder and decoder halves together. The encoder step is fairly straightforward; we'll just reuse Keras's dynamic unroll. For the decoder, we have to make some choices about how to feed the decoder RNN. Overall the process goes as follows:
1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the <START> token is passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The encoder output, hidden state and next token is then fed back into the decoder repeatedly. This has two different behaviors under training and inference:
- during training, we use *teacher forcing*, where the correct next token is fed into the decoder, regardless of what the decoder emitted.
- during inference, we use `tf.argmax(predictions)` to select the most likely continuation and feed it back into the decoder. Another strategy that yields more robust results is called *beam search*.
5. Repeat step 4 until either the decoder emits an <END> token, indicating that it's done translating, or we run into a hardcoded length limit.
```
class NmtTranslator(tf.keras.Model):
def __init__(self, encoder, decoder, start_token_id, end_token_id):
super(NmtTranslator, self).__init__()
self.encoder = encoder
self.decoder = decoder
# (The token_id should match the decoder's language.)
# Uses start_token_id to initialize the decoder.
self.start_token_id = tf.constant(start_token_id)
# Check for sequence completion using this token_id
self.end_token_id = tf.constant(end_token_id)
@tf.function
def call(self, inp, target=None, max_output_length=MAX_OUTPUT_LENGTH):
'''Translate an input.
If target is provided, teacher forcing is used to generate the translation.
'''
batch_size = inp.shape[0]
hidden = self.encoder.initial_hidden_state(batch_size)
enc_output, enc_hidden = self.encoder(inp, hidden)
dec_hidden = enc_hidden
if target is not None:
output_length = target.shape[1]
else:
output_length = max_output_length
predictions_array = tf.TensorArray(tf.float32, size=output_length - 1)
attention_array = tf.TensorArray(tf.float32, size=output_length - 1)
# Feed <START> token to start decoder.
dec_input = tf.cast([self.start_token_id] * batch_size, tf.int32)
# Keep track of which sequences have emitted an <END> token
is_done = tf.zeros([batch_size], dtype=tf.bool)
for i in tf.range(output_length - 1):
dec_input = tf.expand_dims(dec_input, 1)
predictions, dec_hidden, attention_weights = self.decoder(dec_input, dec_hidden, enc_output)
predictions = tf.where(is_done, tf.zeros_like(predictions), predictions)
# Write predictions/attention for later visualization.
predictions_array = predictions_array.write(i, predictions)
attention_array = attention_array.write(i, attention_weights)
# Decide what to pass into the next iteration of the decoder.
if target is not None:
# if target is known, use teacher forcing
dec_input = target[:, i + 1]
else:
# Otherwise, pick the most likely continuation
dec_input = tf.argmax(predictions, axis=1, output_type=tf.int32)
# Figure out which sentences just completed.
is_done = tf.logical_or(is_done, tf.equal(dec_input, self.end_token_id))
# Exit early if all our sentences are done.
if tf.reduce_all(is_done):
break
# [time, batch, predictions] -> [batch, time, predictions]
return tf.transpose(predictions_array.stack(), [1, 0, 2]), tf.transpose(attention_array.stack(), [1, 0, 2, 3])
```
## Define the loss function
Our loss function is a word-for-word comparison between the true answer and the model prediction. For example,

    real = ['<start>', 'This', 'is', 'the', 'correct', 'answer', '.', '<end>', '<oov>']
    pred = ['This', 'is', 'what', 'the', 'model', 'emitted', '.', '<end>']

results in comparing

    This/This, is/is, the/what, correct/the, answer/model, ./emitted, <end>/.

and ignoring the rest of the prediction.
```
def loss_fn(real, pred):
# The prediction doesn't include the <start> token.
real = real[:, 1:]
# Cut down the prediction to the correct shape (We ignore extra words).
pred = pred[:, :real.shape[1]]
# If real == <OOV>, then mask out the loss.
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
# Sum loss over the time dimension, but average it over the batch dimension.
return tf.reduce_mean(tf.reduce_sum(loss_, axis=1))
```
## Configure model directory
We'll use one directory to save all of our relevant artifacts (summary logs, checkpoints, SavedModel exports, etc.)
```
# Where to save checkpoints, tensorboard summaries, etc.
MODEL_DIR = '/tmp/tensorflow/nmt_attention'
def apply_clean():
if tf.io.gfile.exists(MODEL_DIR):
print('Removing existing model dir: {}'.format(MODEL_DIR))
tf.io.gfile.rmtree(MODEL_DIR)
# Optional: remove existing data
apply_clean()
# Summary writers
train_summary_writer = tf.summary.create_file_writer(
os.path.join(MODEL_DIR, 'summaries', 'train'), flush_millis=10000)
test_summary_writer = tf.summary.create_file_writer(
os.path.join(MODEL_DIR, 'summaries', 'eval'), flush_millis=10000, name='test')
# Set up all stateful objects
encoder = Encoder(len(english_tokenizer.word_index) + 1, EMBEDDING_DIM, ENCODER_SIZE)
decoder = Decoder(len(spanish_tokenizer.word_index) + 1, EMBEDDING_DIM, DECODER_SIZE)
start_token_id = spanish_tokenizer.word_index[START_TOKEN]
end_token_id = spanish_tokenizer.word_index[END_TOKEN]
model = NmtTranslator(encoder, decoder, start_token_id, end_token_id)
# TODO(brianklee): Investigate whether Adam defaults have changed and whether it affects training.
optimizer = tf.keras.optimizers.Adam(epsilon=1e-8)# tf.keras.optimizers.SGD(learning_rate=0.01)#Adam()
# Checkpoints
checkpoint_dir = os.path.join(MODEL_DIR, 'checkpoints')
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
checkpoint = tf.train.Checkpoint(
encoder=encoder, decoder=decoder, optimizer=optimizer)
# Restore variables on creation if a checkpoint exists.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# SavedModel exports
export_path = os.path.join(MODEL_DIR, 'export')
```
# Visualize the model's output
Let's visualize our model's output. (It hasn't been trained yet, so it will output gibberish.)
We'll use this visualization to check on the model's progress.
```
from matplotlib import ticker  # provides MultipleLocator, used below

def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence.split(), fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence.split(), fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def ints_to_words(tokenizer, ints):
return ' '.join(tokenizer.index_word[int(i)] if int(i) != 0 else '<OOV>' for i in ints)
def sentence_to_ints(tokenizer, sentence):
sentence = preprocess_sentence(sentence)
return tf.constant(tokenizer.texts_to_sequences([sentence])[0])
def translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, ints, target_ints=None):
"""Run translation on a sentence and plot an attention matrix.
Sentence should be passed in as list of integers.
"""
ints = tf.expand_dims(ints, 0)
predictions, attention = model(ints)
prediction_ids = tf.squeeze(tf.argmax(predictions, axis=-1))
attention = tf.squeeze(attention)
sentence = ints_to_words(english_tokenizer, ints[0])
predicted_sentence = ints_to_words(spanish_tokenizer, prediction_ids)
print(u'Input: {}'.format(sentence))
print(u'Predicted translation: {}'.format(predicted_sentence))
if target_ints is not None:
print(u'Correct translation: {}'.format(ints_to_words(spanish_tokenizer, target_ints)))
plot_attention(attention, sentence, predicted_sentence)
def translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, sentence, target_sentence=None):
"""Same as translate_and_plot_ints, but pass in a sentence as a string."""
english_ints = sentence_to_ints(english_tokenizer, sentence)
spanish_ints = sentence_to_ints(spanish_tokenizer, target_sentence) if target_sentence is not None else None
translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, english_ints, target_ints=spanish_ints)
translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, u"it's really cold here", u'hace mucho frio aqui')
```
# Train the model
```
def train(model, optimizer, dataset):
"""Trains model on `dataset` using `optimizer`."""
start = time.time()
avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)
for inp, target in dataset:
with tf.GradientTape() as tape:
predictions, _ = model(inp, target=target)
loss = loss_fn(target, predictions)
avg_loss(loss)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
if tf.equal(optimizer.iterations % 10, 0):
tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)
avg_loss.reset_states()
rate = 10 / (time.time() - start)
print('Step #%d\tLoss: %.6f (%.2f steps/sec)' % (optimizer.iterations, loss, rate))
start = time.time()
if tf.equal(optimizer.iterations % 100, 0):
# translate_and_plot_words(model, english_index, spanish_index, u"it's really cold here.", u'hace mucho frio aqui.')
translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, inp[0], target[0])
def test(model, dataset, step_num):
"""Perform an evaluation of `model` on the examples from `dataset`."""
avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)
for inp, target in dataset:
predictions, _ = model(inp)
loss = loss_fn(target, predictions)
avg_loss(loss)
print('Model test set loss: {:0.4f}'.format(avg_loss.result()))
tf.summary.scalar('loss', avg_loss.result(), step=step_num)
NUM_TRAIN_EPOCHS = 10
for i in range(NUM_TRAIN_EPOCHS):
start = time.time()
with train_summary_writer.as_default():
train(model, optimizer, train_ds)
end = time.time()
print('\nTrain time for epoch #{} ({} total steps): {}'.format(
i + 1, optimizer.iterations, end - start))
with test_summary_writer.as_default():
test(model, test_ds, optimizer.iterations)
checkpoint.save(checkpoint_prefix)
# TODO(brianklee): This seems to be complaining about input shapes not being set?
# tf.saved_model.save(model, export_path)
```
## Next steps
* [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French.
* Experiment with training on a larger dataset, or using more epochs
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# This code creates a virtual display to draw game images on.
# If you are running locally, just ignore it
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
!bash ../xvfb start
%env DISPLAY=:1
```
### OpenAI Gym
We're going to spend the next several weeks learning algorithms that solve decision processes, so we need some interesting decision problems to test our algorithms on.
That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, video games and board games.
So here's how it works:
```
import gym
env = gym.make("MountainCar-v0")
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
```
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
### Gym interface
The three main methods of an environment are
* __reset()__ - reset environment to initial state, _return first observation_
* __render()__ - show current environment state (a more colorful version :) )
* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info)
 * _new observation_ - an observation right after committing the action __a__
 * _reward_ - a number representing your reward for committing action __a__
* _is done_ - True if the MDP has just finished, False if still in progress
 * _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.
```
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)
```
### Play with it
Below is the code that drives the car to the right.
However, it doesn't reach the flag at the far right due to gravity.
__Your task__ is to fix it. Find a strategy that reaches the flag.
You're not required to build any sophisticated algorithms for now, feel free to hard-code :)
_Hint: your action at each step should depend either on __t__ or on __s__._
```
# create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1)
s = env.reset()
actions = {'left': 0, 'stop': 1, 'right': 2}
# prepare "display"
%matplotlib notebook
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
def policy(t):
if t>50 and t<100:
return actions['left']
else:
return actions['right']
for t in range(TIME_LIMIT):
s, r, done, _ = env.step(policy(t))
#draw game image on display
ax.clear()
ax.imshow(env.render('rgb_array'))
fig.canvas.draw()
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
```
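If the time-based schedule above does not reach the flag, one alternative is a state-dependent policy that pushes in the direction the car is already moving so that it builds momentum (a sketch, not the official solution):
```
# State-dependent policy sketch: accelerate in the direction of the current velocity.
# The observation s is (position, velocity); `actions` is the dict defined above.
def policy_from_state(s):
    position, velocity = s
    return actions['right'] if velocity > 0 else actions['left']
```
Swapping `env.step(policy(t))` for `env.step(policy_from_state(s))` in the loop above is one way to try it.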
### Submit to coursera
```
from submit import submit_interface
submit_interface(policy, "[email protected]", "IT3M0zwksnBtCJXV")
```
<table>
<tr align=left><td><img align=left src="https://i.creativecommons.org/l/by/4.0/88x31.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
```
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
```
# Boundary Value Problems: Discretization
## Model Problems
The simplest boundary value problem (BVP) we will run into is the one-dimensional version of Poisson's equation
$$
u''(x) = f(x).
$$
Usually we solve this equation on a finite interval with either Dirichlet or Neumann boundary conditions. Because there are two derivatives in the equation we need two boundary conditions to solve the PDE (really an ODE in this case) uniquely. To start let us consider the following basic problem
$$\begin{aligned}
u''(x) = f(x) ~~~ \Omega = [a, b] \\
u(a) = \alpha ~~~ u(b) = \beta.
\end{aligned}$$
BVPs of this sort are often the result of looking at the steady-state form of a time dependent PDE. For instance, if we were considering the steady-state solution to the heat equation
$$
u_t(x,t) = \kappa u_{xx}(x,t) + \Psi(x,t) ~~~~ \Omega = [0, T] \times [a, b] \\
u(x, 0) = u^0(x) ~~~ u(a, t) = \alpha(t) ~~~ u(b, t) = \beta(t)
$$
we would solve the equation where $u_t = 0$ and arrive at
$$
u''(x) = - \Psi / \kappa,
$$
a version of Poisson's equation above.
In higher spatial dimensions the second derivative turns into a Laplacian. Notation varies for this but all these are equivalent statements:
$$\begin{aligned}
\nabla^2 u(\vec{x}) &= f(\vec{x}) \\
\Delta u(\vec{x}) &= f(\vec{x}) \\
\sum^N_{i=1} u_{x_i x_i} &= f(\vec{x}).
\end{aligned}$$
## One-Dimensional Discretization
As a first approach to solving the one-dimensional Poisson's equation let's break up the domain into `m` points, often called a *mesh* or *grid*. Our goal is to approximate the unknown function $u(x)$ at the mesh points $x_i$. First we can relate the number of mesh points `m` to the distance between them with
$$
\Delta x = \frac{1}{m + 1}.
$$
The mesh points $x_i$ can be written as
$$
x_i = a + i \Delta x.
$$
We can let $\Delta x$ vary and many of the formulas above have only minor modifications but we will leave that for homework. Notationally we will also adopt the notation
$$
U_i \approx u(x_i)
$$
so that $U_i$ are the approximate solution at the grid points and retain the lower-case $u$ to denote the true solution.
To simplify our discussion let's consider the ODE
$$
u''(x) = f(x) ~~~ \Omega = [0, 1] \\
u(0) = \alpha ~~~ u(1) = \beta.
$$
Applying the 2nd order, centered difference approximation for the 2nd derivative we have the equation
$$
D^2 U_i = \frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1})
$$
so that we end up with the approximate algebraic expression at every grid point of
$$
\frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i) ~~~ i = 1, 2, 3, \ldots, m.
$$
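As a quick sanity check on the order of accuracy (an illustration, not part of the original notes), we can apply this centered difference to a function with a known second derivative, say $u(x) = \sin(x)$, and watch the error drop by roughly a factor of four each time $\Delta x$ is halved:
```
import numpy

# Centered second difference applied to u(x) = sin(x), whose exact second derivative is -sin(x)
x_i = 1.0
for delta_x in [0.1, 0.05, 0.025]:
    D2 = (numpy.sin(x_i + delta_x) - 2.0 * numpy.sin(x_i) + numpy.sin(x_i - delta_x)) / delta_x**2
    print(delta_x, abs(D2 + numpy.sin(x_i)))   # error should scale like delta_x**2
```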
Note at this point that these algebraic equations are coupled as each $U_i$ depends on its neighbors. This means we can write these as system of coupled equations
$$
A U = F.
$$
#### Write the system of equations
$$
\frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i) ~~~ i = 1, 2, 3, \ldots, m.
$$
Note the boundary conditions!
$$
\frac{1}{\Delta x^2} \begin{bmatrix}
-2 & 1 & & & \\
1 & -2 & 1 & & \\
& 1 & -2 & 1 & \\
& & 1 & -2 & 1 \\
& & & 1 & -2 \\
\end{bmatrix} \begin{bmatrix}
U_1 \\ U_2 \\ U_3 \\ U_4 \\ U_5
\end{bmatrix} =
\begin{bmatrix}
f(x_1) - \frac{\alpha}{\Delta x^2} \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ f(x_5) - \frac{\beta}{\Delta x^2} \\
\end{bmatrix}.
$$
#### Example
Want to solve the BVP
$$
u_{xx} = e^x, ~~~~ x \in [0, 1] ~~~~ \text{with} ~~~~ u(0) = 0.0, \text{ and } u(1) = 3
$$
via the construction of a linear system of equations.
$$\begin{aligned}
u_{xx} &= e^x \\
u_x &= A + e^x \\
u &= Ax + B + e^x\\
u(0) &= B + 1 = 0 \Rightarrow B = -1 \\
u(1) &= A - 1 + e^{1} = 3 \Rightarrow A = 4 - e\\
~\\
u(x) &= (4 - e) x - 1 + e^x
\end{aligned}$$
```
# Problem setup
a = 0.0
b = 1.0
u_a = 0.0
u_b = 3.0
f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
# Discretization
m = 10
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m, m))
diagonal = numpy.ones(m) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x)
b[0] -= u_a / delta_x**2
b[-1] -= u_b / delta_x**2
# Solve system
U = numpy.empty(m + 2)
U[0] = u_a
U[-1] = u_b
U[1:-1] = numpy.linalg.solve(A, b)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
```
## Error Analysis
A natural question to ask given our approximation $U_i$ is how close this is to the true solution $u(x)$ at the grid points $x_i$. To address this we will define the error $E$ as
$$
E = U - \hat{U}
$$
where $U$ is the vector of the approximate solution and $\hat{U}$ is the vector composed of the $u(x_i)$.
This leaves $E$ as a vector still so often we ask the question how does the norm of $E$ behave given a particular $\Delta x$. For the $\infty$-norm we would have
$$
||E||_\infty = \max_{1 \leq i \leq m} |E_i| = \max_{1 \leq i \leq m} |U_i - u(x_i)|
$$
If we can show that $||E||_\infty$ goes to zero as $\Delta x \rightarrow 0$ we can then claim that the approximate solution $U_i$ converges at every grid point, i.e. $E_i \rightarrow 0$. If we would like to use other norms we often define slightly modified versions of the norms that also contain the grid width $\Delta x$ where
$$\begin{aligned}
||E||_1 &= \Delta x \sum^m_{i=1} |E_i| \\
||E||_2 &= \left( \Delta x \sum^m_{i=1} |E_i|^2 \right )^{1/2}
\end{aligned}$$
These are referred to as *grid function norms*.
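As a concrete illustration (a sketch that reuses `U`, `x_bc`, `u_true`, and `delta_x` from the example cell above), the grid function norms of the error can be computed directly:
```
# Grid function norms of the error for the example solved above
E = U[1:-1] - u_true(x_bc[1:-1])            # error at the interior grid points
norm_inf = numpy.max(numpy.abs(E))
norm_1 = delta_x * numpy.sum(numpy.abs(E))
norm_2 = numpy.sqrt(delta_x * numpy.sum(E**2))
print(norm_inf, norm_1, norm_2)
```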
The $E$ defined above is known as the *global error*. One of our main goals throughout this course is to understand how $E$ behaves given other factors as we defined later.
### Local Truncation Error
The *local truncation error* (LTE) can be defined by replacing the approximate solution $U_i$ with the true solution $u(x_i)$. Since the algebraic equations are an approximation to the original BVP, we do not expect that the true solution will exactly satisfy these equations; this resulting difference is the LTE.
For our one-dimensional finite difference approximation from above we have
$$
\frac{1}{\Delta x^2} (U_{i+1} - 2 U_i + U_{i-1}) = f(x_i).
$$
Replacing $U_i$ with $u(x_i)$ in this equation leads to
$$
\tau_i = \frac{1}{\Delta x^2} (u(x_{i+1}) - 2 u(x_i) + u(x_{i-1})) - f(x_i).
$$
In this form the LTE is not as useful but if we assume $u(x)$ is smooth we can replace the $u(x_i)$ with their Taylor series counterparts, similar to what we did for finite differences. The relevant Taylor series are
$$
u(x_{i \pm 1}) = u(x_i) \pm u'(x_i) \Delta x + \frac{1}{2} u''(x_i) \Delta x^2 \pm \frac{1}{6} u'''(x_i) \Delta x^3 + \frac{1}{24} u^{(4)}(x_i) \Delta x^4 + \mathcal{O}(\Delta x^5)
$$
This leads to an expression for $\tau_i$ of
$$\begin{aligned}
\tau_i &= \frac{1}{\Delta x^2} \left [u''(x_i) \Delta x^2 + \frac{1}{12} u^{(4)}(x_i) \Delta x^4 + \mathcal{O}(\Delta x^5) \right ] - f(x_i) \\
&= u''(x_i) + \frac{1}{12} u^{(4)}(x_i) \Delta x^2 + \mathcal{O}(\Delta x^4) - f(x_i) \\
&= \frac{1}{12} u^{(4)}(x_i) \Delta x^2 + \mathcal{O}(\Delta x^4)
\end{aligned}$$
where we note that the true solution would satisfy $u''(x) = f(x)$.
As long as $ u^{(4)}(x_i) $ remains finite (smooth) we know that $\tau_i \rightarrow 0$ as $\Delta x \rightarrow 0$
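A quick numerical check of this estimate (an illustration, not part of the original notes) evaluates $\tau_i$ for the earlier example $u''(x) = e^x$ using its exact solution; the maximum LTE should shrink like $\Delta x^2$:
```
import numpy

# LTE of the centered scheme evaluated with the exact solution of u'' = e^x, u(0) = 0, u(1) = 3
u = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)
f = lambda x: numpy.exp(x)
for m in [10, 20, 40]:
    delta_x = 1.0 / (m + 1)
    x = numpy.linspace(0.0, 1.0, m + 2)       # grid including the boundary points
    tau = (u(x[:-2]) - 2.0 * u(x[1:-1]) + u(x[2:])) / delta_x**2 - f(x[1:-1])
    print(m, numpy.max(numpy.abs(tau)))
```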
We can also write the vector of LTEs as
$$
\tau = A \hat{U} - F
$$
which implies
$$
A\hat{U} = F + \tau.
$$
### Global Error
What we really want to bound is the global error $E$. To relate the global error and LTE we can substitute $E = U - \hat{U}$ into our expression for the LTE to find
$$
A E = -\tau.
$$
This means that the global error is the solution to the system of equations we defined for the approximation except with $\tau$ as the forcing function rather than $F$!
This also implies that the global error $E$ can be thought of as an approximation to a similar BVP as we started with where
$$
e''(x) = -\tau(x) ~~~ \Omega = [0, 1] \\
e(0) = 0 ~~~ e(1) = 0.
$$
We can solve this ODE directly by integrating twice to find, to leading order,
$$\begin{aligned}
e(x) &\approx -\frac{1}{12} \Delta x^2 u''(x) + \frac{1}{12} \Delta x^2 (u''(0) + x (u''(1) - u''(0))) \\
&= \mathcal{O}(\Delta x^2) \\
&\rightarrow 0 ~~~ \text{as} ~~~ \Delta x \rightarrow 0.
\end{aligned}$$
### Stability
We showed that the continuous analog to $E$, $e(x)$, does in fact go to zero as $\Delta x \rightarrow 0$ but what about $E$? Instead of showing something based on $e(x)$ let's look back at the original system of equations for the global error
$$
A^{\Delta x} E^{\Delta x} = - \tau^{\Delta x}
$$
where we now denote a particular realization of the system by the corresponding grid spacing $\Delta x$.
If we could invert $A^{\Delta x}$ we could compute $E^{\Delta x}$ directly. Assuming that we can and taking an appropriate norm we find
$$\begin{aligned}
E^{\Delta x} &= -(A^{\Delta x})^{-1} \tau^{\Delta x} \\
||E^{\Delta x}|| &= ||(A^{\Delta x})^{-1} \tau^{\Delta x}|| \\
& \leq ||(A^{\Delta x})^{-1} ||~|| \tau^{\Delta x}||
\end{aligned}$$
We know that $\tau^{\Delta x} \rightarrow 0$ as $\Delta x \rightarrow 0$ already for our example so if we can bound the norm of the matrix $(A^{\Delta x})^{-1}$ by some constant $C$ for sufficiently small $\Delta x$ we can then write a bound on the global error of
$$
||E^{\Delta x}|| \leq C ||\tau^{\Delta x}||
$$
demonstrating that $E^{\Delta x} \rightarrow 0 $ at least as fast as $\tau^{\Delta x} \rightarrow 0$.
We can generalize this observation to all linear BVP problems by supposing that we have a finite difference approximation to a linear BVP of the form
$$
A^{\Delta x} U^{\Delta x} = F^{\Delta x},
$$
where $\Delta x$ is the grid spacing.
We say the approximation is *stable* if $(A^{\Delta x})^{-1}$ exists $\forall \Delta x < \Delta x_0$ and there is a constant $C$ such that
$$
||(A^{\Delta x})^{-1}|| \leq C ~~~~ \forall \Delta x < \Delta x_0.
$$
### Consistency
A related and important idea for the discretization of any PDE is that it be consistent with the equation we are approximating. If
$$
||\tau^{\Delta x}|| \rightarrow 0 ~~\text{as}~~ \Delta x \rightarrow 0
$$
then we say an approximation is *consistent* with the differential equation.
### Convergence
We now have all the pieces to say something about the global error $E$. A method is said to be *convergent* if
$$
||E^{\Delta x}|| \rightarrow 0 ~~~ \text{as} ~~~ \Delta x \rightarrow 0.
$$
If an approximation is both consistent ($||\tau^{\Delta x}|| \rightarrow 0 ~~\text{as}~~ \Delta x \rightarrow 0$) and stable ($||(A^{\Delta x})^{-1}|| \leq C$, so that $||E^{\Delta x}|| \leq C ||\tau^{\Delta x}||$) then the approximation is convergent.
We have only derived this in the case of linear BVPs but in fact these criteria for convergence are often found to be true for any finite difference approximation (and beyond for that matter). This statement of convergence can also often be strengthened to say
$$
\mathcal{O}(\Delta x^p) ~\text{LTE}~ + ~\text{stability} ~ \Rightarrow \mathcal{O}(\Delta x^p) ~\text{global error}.
$$
It turns out the most difficult part of this process is usually the statement regarding stability. In the next section we will see for our simple example how we can prove stability in the 2-norm.
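Before moving on, a small refinement study (a sketch reusing the example problem $u'' = e^x$, $u(0) = 0$, $u(1) = 3$ from the earlier cell) makes the convergence statement concrete; the grid 2-norm of the error should drop by roughly a factor of four each time $m$ is doubled:
```
import numpy

f = lambda x: numpy.exp(x)
u_true = lambda x: (4.0 - numpy.exp(1.0)) * x - 1.0 + numpy.exp(x)

for m in [10, 20, 40, 80]:
    delta_x = 1.0 / (m + 1)
    x = numpy.linspace(delta_x, 1.0 - delta_x, m)        # interior grid points
    A = (numpy.diag(-2.0 * numpy.ones(m)) +
         numpy.diag(numpy.ones(m - 1), 1) +
         numpy.diag(numpy.ones(m - 1), -1)) / delta_x**2
    rhs = f(x)
    rhs[0] -= 0.0 / delta_x**2                           # Dirichlet condition u(0) = 0
    rhs[-1] -= 3.0 / delta_x**2                          # Dirichlet condition u(1) = 3
    E = numpy.linalg.solve(A, rhs) - u_true(x)
    print(m, numpy.sqrt(delta_x * numpy.sum(E**2)))      # grid 2-norm of the error
```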
### Stability in the 2-Norm
Recalling our definition of stability, we need to show that for our previously defined $A$ that
$$
(A^{\Delta x})^{-1}
$$
exists and
$$
||(A^{\Delta x})^{-1}|| \leq C ~~~ \forall \Delta x < \Delta x_0
$$
for some $C$.
We can show that $A$ is in fact invertible but can we bound the norm of the inverse? Recall that the 2-norm of a symmetric matrix is equal to its spectral radius, i.e.
$$
||A||_2 = \rho(A) = \max_{1\leq p \leq m} |\lambda_p|.
$$
Since the inverse of $A$ is also symmetric the eigenvalues of $A^{-1}$ are the inverses of the eigenvalues of $A$ implying that
$$
||A^{-1}||_2 = \rho(A^{-1}) = \max_{1\leq p \leq m} \left| \frac{1}{\lambda_p} \right| = \frac{1}{\min_{1\leq p \leq m} \left| \lambda_p \right|}.
$$
If the eigenvalues $\lambda_p$ of $A$ remain bounded away from zero for sufficiently small $\Delta x$ (and hence as $\Delta x \rightarrow 0$) we have shown the stability of the approximation.
The eigenvalues of the matrix $A$ from above can be written as
$$
\lambda_p = \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1)
$$
with the corresponding eigenvectors $v^p$
$$
v^p_j = \sin(p \pi j \Delta x)
$$
as the $j$th component with $j = 1, \ldots, m$.
#### Check that these are in fact the eigenpairs of the matrix $A$
$$
\lambda_p = \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1)
$$
$$
v^p_j = \sin(p \pi j \Delta x)
$$
$$\begin{aligned}
(A v^p)_j &= \frac{1}{\Delta x^2} (v^p_{j-1} - 2 v^p_j + v^p_{j+1} ) \\
&= \frac{1}{\Delta x^2} (\sin(p \pi (j-1) \Delta x) - 2 \sin(p \pi j \Delta x) + \sin(p \pi (j+1) \Delta x) ) \\
&= \frac{1}{\Delta x^2} (\sin(p \pi j \Delta x) \cos(p \pi \Delta x) - 2 \sin(p \pi j \Delta x) + \sin(p \pi j \Delta x) \cos(p \pi \Delta x)) \\
&= \lambda_p v^p_j.
\end{aligned}$$
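These eigenpairs are also easy to verify numerically (a quick check, not part of the original notes), by comparing the eigenvalues of the $m \times m$ matrix with the formula above:
```
import numpy

m = 10
delta_x = 1.0 / (m + 1)
A = (numpy.diag(-2.0 * numpy.ones(m)) +
     numpy.diag(numpy.ones(m - 1), 1) +
     numpy.diag(numpy.ones(m - 1), -1)) / delta_x**2
computed = numpy.sort(numpy.linalg.eigvalsh(A))
p = numpy.arange(1, m + 1)
predicted = numpy.sort(2.0 / delta_x**2 * (numpy.cos(p * numpy.pi * delta_x) - 1.0))
print(numpy.max(numpy.abs(computed - predicted)))   # should be tiny relative to the eigenvalues
```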
#### Compute the smallest eigenvalue
If we can show that the eigenvalues stay away from the origin then we know $||A^{-1}||_2$ will be bounded. In this case the eigenvalues are negative, so we need to show that they remain bounded away from zero as $\Delta x \rightarrow 0$.
$$
\lambda_p = \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1)
$$
Use a Taylor series to get an idea of how this behaves with respect to $\Delta x$
From these expressions we know that the smallest magnitude eigenvalue corresponds to $p = 1$ since
$$\begin{aligned}
\lambda_p &= \frac{2}{\Delta x^2} (\cos(p \pi \Delta x) - 1) \\
&= \frac{2}{\Delta x^2} \left (-\frac{1}{2} p^2 \pi^2 \Delta x^2 + \frac{1}{24} p^4 \pi^4 \Delta x^4 + \mathcal{O}(\Delta x^6) \right ) \\
&= -p^2 \pi^2 + \mathcal{O}(\Delta x^2),
\end{aligned}$$
so that $\lambda_1 \rightarrow -\pi^2$ as $\Delta x \rightarrow 0$.
Note that this also gives us an error bound, as this smallest magnitude eigenvalue of $A$ leads to the largest magnitude eigenvalue of the inverse matrix. We can therefore say
$$
||E^{\Delta x}||_2 \leq ||(A^{\Delta x})^{-1}||_2 ||\tau^{\Delta x}||_2 \approx \frac{1}{\pi^2} ||\tau^{\Delta x}||_2.
$$
### Stability in the $\infty$-Norm
The straightforward approach to show that $||E||_\infty \rightarrow 0$ as $\Delta x \rightarrow 0$ would be to use the matrix bound
$$
||E||_\infty \leq \frac{1}{\sqrt{\Delta x}} ||E||_2.
$$
For our example problem we showed that $||E||_2 = \mathcal{O}(\Delta x^2)$ so this implies that we at least know that $||E||_\infty = \mathcal{O}(\Delta x^{3/2})$. This is unfortunate as we expect $||E||_\infty = \mathcal{O}(\Delta x^{2})$ due to the discretization. In order to alleviate this problem let's go back and consider our definition of stability but this time consider the $\infty$-norm.
It turns out that our matrix $A$ can be seen as a number of discrete approximations to *Green's functions* in each column. This is more broadly applicable later on so we will spend some time reviewing the theory of Green's functions and apply them to our simple example problem.
### Green's Functions
Consider the BVP with Dirichlet boundary conditions
$$
u''(x) = f(x) ~~~~ \Omega = [0, 1] \\
u(0) = \alpha ~~~~ u(1) = \beta.
$$
Pick a fixed point $\bar{x} \in \Omega$; the Green's function $G(x ; \bar{x})$ solves the BVP above with
$$
f(x) = \delta(x - \bar{x})
$$
and $\alpha = \beta = 0$. You could think of this as the result of a steady-state problem of the heat equation with a point-loss of heat somewhere in the domain.
To find the Green's function for our particular problem we can integrate just around the point $\bar{x}$ near the $\delta$ function source to find
$$\begin{aligned}
\int^{\bar{x} + \epsilon}_{\bar{x} - \epsilon} u''(x) dx &= \int^{\bar{x} + \epsilon}_{\bar{x} - \epsilon} \delta(x - \bar{x}) dx \\
u'(\bar{x} + \epsilon) - u'(\bar{x} - \epsilon) &= 1
\end{aligned}$$
recalling that by definition the integral of the $\delta$ function must be 1 if the interval of integration includes $\bar{x}$. We see that the jump in the derivative at $\bar{x}$ from the left and right should be 1.
After a bit of algebra we can solve for the Green's function for our model BVP as
$$
G(x; \bar{x}) = \left \{ \begin{aligned}
(\bar{x} - 1) x & & 0 \leq x \leq \bar{x} \\
\bar{x} (x - 1) & & \bar{x} \leq x \leq 1
\end{aligned} \right . .
$$
One important property of linear PDEs (or ODEs) in general is that they exhibit the principle of superposition. The reason we care about this with Green's functions is that if we have an $f(x)$ composed of two $\delta$ functions, it turns out the solution is the sum of the corresponding two Green's functions. For instance if
$$
f(x) = \delta(x - 0.25) + 2 \delta(x - 0.5)
$$
then
$$
u(x) = G(x ; 0.25) + 2 G(x ; 0.5).
$$
This of course can be extended to an infinite number of $\delta$ functions so that
$$
f(x) = \int^1_0 f(\bar{x}) \delta(x - \bar{x}) d\bar{x}
$$
and therefore
$$
u(x) = \int^1_0 f(\bar{x}) G(x ; \bar{x}) d\bar{x}.
$$
To incorporate the effects of boundary conditions we can continue to add Green's functions to the solution to find the general solution of our original BVP as
$$
u(x) = \alpha (1 - x) + \beta x + \int^1_0 f(\bar{x}) G(x ; \bar{x}) d\bar{x}.
$$
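As a numerical sanity check of this representation (an illustration, not part of the original notes), we can approximate the integral with a trapezoidal sum for the earlier example $u'' = e^x$, $u(0) = 0$, $u(1) = 3$ and compare against the known solution at a single point:
```
import numpy

alpha, beta = 0.0, 3.0
f = lambda x: numpy.exp(x)
G = lambda x, xbar: (xbar - 1.0) * x if x <= xbar else xbar * (x - 1.0)

x_eval = 0.3
xbar = numpy.linspace(0.0, 1.0, 2001)
vals = f(xbar) * numpy.array([G(x_eval, xb) for xb in xbar])
integral = numpy.sum((vals[1:] + vals[:-1]) / 2.0) * (xbar[1] - xbar[0])   # trapezoidal rule
u_green = alpha * (1.0 - x_eval) + beta * x_eval + integral
u_exact = (4.0 - numpy.exp(1.0)) * x_eval - 1.0 + numpy.exp(x_eval)
print(u_green, u_exact)   # the two values should agree to several decimal places
```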
So why did we do all this? Well the Green's function solution representation above can be thought of as a linear operator on the function $f(x)$. Written in perhaps more familiar terms we have
$$
\mathcal{A} u = f ~~~~ u = \mathcal{A}^{-1} f.
$$
We see now that our linear operator $\mathcal{A}$ may be the continuous analog to our discrete matrix $A$.
To proceed we will modify our original matrix $A$ into a slightly different version based on the same discretization. Instead of moving the boundary terms to the right-hand side of the equation, we will introduce two new "unknowns", called *ghost cells*, that will be placed at the edges of the grid. We will label these $U_0$ and $U_{m+1}$. In reality we know the values at these points; they are the boundary conditions!
The modified system then looks like
$$
A = \frac{1}{\Delta x^2} \begin{bmatrix}
\Delta x^2 & 0 \\
1 & -2 & 1 \\
& 1 & -2 & 1 \\
& & \ddots & \ddots & \ddots \\
& & & 1 & -2 & 1 \\
& & & & 1 & -2 & 1 \\
& & & & & 0 & \Delta x^2
\end{bmatrix} ~~~ U = \begin{bmatrix}
U_0 \\ U_1 \\ \vdots \\ U_m \\ U_{m+1}
\end{bmatrix}~~~~~ F = \begin{bmatrix}
\alpha \\ f(x_1) \\ \vdots \\ f(x_{m}) \\ \beta
\end{bmatrix}
$$
This has the advantage later that we can implement more general boundary conditions and it isolates the algebraic dependence on the boundary conditions. The drawbacks are that the matrix no longer has as simple of a form as before.
Let's finally turn to the form of the matrix $A^{-1}$. Introducing a bit more notation, let $A_{j}$ denote the $j$th column and $A_{ij}$ denote the $i$th $j$th element of the matrix $A$.
We know that
$$
A A^{-1}_j = e_j
$$
where $e_j$ is the unit vector with $1$ in the $j$th row ($j$th column of the identity matrix).
Note that the above system has some similarities to a discretized version of the Green's function problem. Here $e_j$ represents the $\delta$ function, $A$ the original operator, and $A^{-1}_j$ the effect that the $j$th $\delta$ function (corresponding to the $\bar{x}$) has on the full solution.
It turns out that we can write down the inverse matrix directly using Green's functions (see LeVeque for the details) but we end up with
$$
A^{-1}_{ij} = \Delta x \, G(x_i ; x_j) = \left \{ \begin{aligned}
\Delta x (x_j - 1) x_i, & & i &= 1, 2, \ldots, j \\
\Delta x (x_i - 1) x_j, & & i &= j, j+1, \ldots , m
\end{aligned} \right . .
$$
We can also write the effective right-hand side of our system as
$$
F = \alpha e_0 + \beta e_{m+1} + \sum^m_{j=1} f_j e_j
$$
and finally the solution as
$$
U = \alpha A^{-1}_{0} + \beta A^{-1}_{m+1} + \sum^m_{j=1} f_j A^{-1}_{j}
$$
whose elements are
$$
U_i = \alpha(1 - x_i) + \beta x_i + \Delta x \sum^m_{j=1} f_j G(x_i ; x_j).
$$
Alright, where has all this gotten us? Well, since we now know what the form of $A^{-1}$ is we may be able to get at the $\infty$-norm of this matrix. Recall that the $\infty$-norm of a matrix (induced from the $\infty$-norm) for a vector is
$$
|| C ||_\infty = \max_{0\leq i \leq m+1} \sum^{m+1}_{j=0} |C_{ij}|
$$
Note that due to the form of the matrix $A^{-1}$ the first row's sum is
$$
\sum^{m+1}_{j=0} A_{0j}^{-1} = 1
$$
as is the last row's, $A^{-1}_{m+1}$. We also know that for the other rows $A^{-1}_{i,0} < 1$ and $A^{-1}_{i,m+1} < 1$.
The intermediate rows are also all bounded as
$$
\sum^{m+1}_{j=0} |A^{-1}_{ij}| \leq 1 + 1 + m \Delta x < 3
$$
using the fact we know that
$$
\Delta x = \frac{1}{m+1}.
$$
This completes our stability wanderings as we can now say definitively that
$$
||A^{-1}||_\infty < 3 ~~~ \forall \Delta x.
$$
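As a sanity check (a quick sketch under the same setup, not in the original notes), we can build the ghost-cell matrix for several grid sizes and verify numerically that $||A^{-1}||_\infty$ indeed stays below 3:
```
import numpy

for m in [10, 20, 40, 80]:
    delta_x = 1.0 / (m + 1)
    A = numpy.zeros((m + 2, m + 2))
    for i in range(1, m + 1):
        A[i, i - 1:i + 2] = numpy.array([1.0, -2.0, 1.0]) / delta_x**2
    A[0, 0] = 1.0
    A[-1, -1] = 1.0
    # Max absolute row sum of the inverse, i.e. the induced infinity norm
    print(m, numpy.linalg.norm(numpy.linalg.inv(A), ord=numpy.inf))
```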
## Neumann Boundary Conditions
As mentioned before, we can incorporate other types of boundary conditions into our discretization using the modified version of our matrix. Let's try to do this for our original problem but with one side having Neumann boundary conditions:
$$
u''(x) = f(x) ~~~ \Omega = [-1, 1] \\
u(-1) = \alpha ~~~ u'(1) = \sigma.
$$
**Group Work**
$$
u''(x) = f(x) ~~~ \Omega = [-1, 1] \\
u(-1) = \alpha ~~~ u'(1) = \sigma.
$$
$u(x) = -(5 + e) x - (2 + e + e^{-1}) + e^x$
Explore implementing the Neumann boundary condition by
1. using a one-sided 1st order expression,
1. using a centered 2nd order expression, and
1. using a one-sided 2nd order expression
```
def solve_mixed_1st_order_one_sided(m):
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Descretization
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = 1.0
A[0, 1] = 0.0
A[-1, -1] = 1.0 / (delta_x)
A[-1, -2] = -1.0 / (delta_x)
b[0] = alpha
b[-1] = sigma
# Solve system
U = numpy.linalg.solve(A, b)
return x_bc, U
u_true = lambda x: -(5.0 + numpy.exp(1.0)) * x - (2.0 + numpy.exp(1.0) + numpy.exp(-1.0)) + numpy.exp(x)
x_bc, U = solve_mixed_1st_order_one_sided(10)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
def solve_mixed_2nd_order_centered(m):
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Descretization
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = 1.0
A[0, 1] = 0.0
A[-1, -1] = -1.0 / (delta_x)
A[-1, -2] = 1.0 / (delta_x)
b[0] = alpha
b[-1] = delta_x / 2.0 * f(x_bc[-1]) - sigma
# Solve system
U = numpy.linalg.solve(A, b)
return x_bc, U
x_bc, U = solve_mixed_2nd_order_centered(10)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
def solve_mixed_2nd_order_one_sided(m):
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Descretization
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = 1.0
A[0, 1] = 0.0
A[-1, -1] = 3.0 / (2.0 * delta_x)
A[-1, -2] = -4.0 / (2.0 * delta_x)
A[-1, -3] = 1.0 / (2.0 * delta_x)
b[0] = alpha
b[-1] = sigma
# Solve system
U = numpy.linalg.solve(A, b)
return x_bc, U
x_bc, U = solve_mixed_2nd_order_one_sided(10)
# Plot result
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x_bc, U, 'o', label="Computed")
axes.plot(x_bc, u_true(x_bc), 'k', label="True")
axes.set_title("Solution to $u_{xx} = e^x$")
axes.set_xlabel("x")
axes.set_ylabel("u(x)")
plt.show()
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
u_true = lambda x: -(5.0 + numpy.exp(1.0)) * x - (2.0 + numpy.exp(1.0) + numpy.exp(-1.0)) + numpy.exp(x)
# Compute the error as a function of delta_x
m_range = numpy.arange(10, 200, 20)
delta_x = numpy.empty(m_range.shape)
error = numpy.empty((m_range.shape[0], 3))
for (i, m) in enumerate(m_range):
x = numpy.linspace(a, b, m + 2)
delta_x[i] = (b - a) / (m + 1)
# Compute solution
_, U = solve_mixed_1st_order_one_sided(m)
error[i, 0] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)
_, U = solve_mixed_2nd_order_one_sided(m)
error[i, 1] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)
_, U = solve_mixed_2nd_order_centered(m)
error[i, 2] = numpy.linalg.norm(U - u_true(x), ord=numpy.infty)
# Titles ordered to match the error columns computed above
titles = ["1st Order, One-Sided", "2nd Order, One-Sided", "2nd Order, Centered"]
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
for i in range(3):
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, error[:, i], 'ko', label="Approx. Derivative")
axes.loglog(delta_x, order_C(delta_x[0], error[0,i], 1.0) * delta_x**1.0, 'r--', label="1st Order")
axes.loglog(delta_x, order_C(delta_x[0], error[0,i], 2.0) * delta_x**2.0, 'b--', label="2nd Order")
axes.legend(loc=4)
axes.set_title(titles[i])
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|u(x) - U|$")
plt.show()
U = solve_mixed_1st_order_one_sided(10)
U = solve_mixed_2nd_order_one_sided(10)
U = solve_mixed_2nd_order_centered(10)
```
## Existence and Uniqueness
One question that should be asked before embarking upon a numerical solution to any equation is whether the original problem is *well-posed*. A problem is well-posed if it has a unique solution that depends continuously on the input data (the initial condition and boundary conditions are examples).
Consider the BVP we have been exploring but now add strictly Neumann boundary conditions
$$
u''(x) = f(x) ~~~ \Omega = [0, 1] \\
u'(0) = \sigma_0 ~~~ u'(1) = \sigma_1.
$$
We can easily discretize this using one of our methods developed above but we run into problems.
```
# Problem setup
a = -1.0
b = 1.0
alpha = 3.0
sigma = -5.0
f = lambda x: numpy.exp(x)
# Descretization
m = 50
x_bc = numpy.linspace(a, b, m + 2)
x = x_bc[1:-1]
delta_x = (b - a) / (m + 1)
# Construct matrix A
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
# Construct RHS
b = f(x_bc)
# Boundary conditions
A[0, 0] = -1.0 / delta_x
A[0, 1] = 1.0 / delta_x
A[-1, -1] = -1.0 / (delta_x)
A[-1, -2] = 1.0 / (delta_x)
b[0] = alpha
b[-1] = delta_x / 2.0 * f(x_bc[-1]) - sigma
# Solve system
U = numpy.linalg.solve(A, b)
```
We can see why $A$ is singular: the constant vector $e = [1, 1, 1, \ldots, 1]^T$ is in fact in the null-space of $A$. Our numerical method has actually demonstrated that this problem is *ill-posed*! Indeed, since the boundary conditions only constrain the derivatives, there are an infinite number of solutions to the BVP (the same kind of failure would occur if there were no solutions at all).
Another way to understand why this is the case is to examine this problem again as the steady-state problem originating with the heat equation. Consider the heat equation with $\sigma_0 = \sigma_1 = 0$ and $f(x) = 0$. This setup preserves any heat in the rod, as none can escape through the ends. In fact, the steady-state solution simply redistributes the heat evenly across the rod based on the initial condition. We would then have the solution
$$
u(x) = \int^1_0 u^0(x) dx = C.
$$
The problem comes from the fact that the steady-state problem does not know about this bit of information by itself. This means that the BVP as it stands could pick out any $C$ and it would be a solution.
The situation is similar if we have the same setup except $f(x) \neq 0$. Now we are either adding or subtracting heat in the rod, and there may not be a steady state at all! You can actually show that a solution exists only if the addition and subtraction of heat exactly cancel, i.e. if
$$
\int^1_0 f(x) dx = 0
$$
which leads again to an infinite number of solutions.
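A quick numerical check (a sketch, not in the original notes) confirms that the constant vector lies in the null space of the pure-Neumann matrix assembled above, so the matrix is rank-deficient:
```
import numpy

m = 10
delta_x = 2.0 / (m + 1)   # the domain [-1, 1] used in the cell above
A = numpy.zeros((m + 2, m + 2))
diagonal = numpy.ones(m + 2) / delta_x**2
A += numpy.diag(diagonal * -2.0, 0)
A += numpy.diag(diagonal[:-1], 1)
A += numpy.diag(diagonal[:-1], -1)
A[0, 0] = -1.0 / delta_x
A[0, 1] = 1.0 / delta_x
A[-1, -1] = -1.0 / delta_x
A[-1, -2] = 1.0 / delta_x

e = numpy.ones(m + 2)
print(numpy.linalg.norm(A.dot(e)))              # ~0: the constant vector is in the null space
print(numpy.linalg.matrix_rank(A), A.shape[0])  # the rank is one less than the matrix size
```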
## General Linear Second Order Discretization
Let's now describe a method for solving the equation
$$
a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x) ~~~~ \Omega = [a, b] \\
u(a) = \alpha ~~~~ u(b) = \beta.
$$
Try discretizing this using second order finite differences and write the system for
$$
a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x) ~~~~ \Omega = [a, b] \\
u(a) = \alpha ~~~~ u(b) = \beta.
$$
The general, second order finite difference approximation to the above equation can be written as
$$
a_i \frac{U_{i+1} - 2 U_i + U_{i-1}}{\Delta x^2} + b_i \frac{U_{i+1} - U_{i-1}}{2 \Delta x} + c_i U_i = f_i
$$
leading to the matrix entries
$$
A_{i,i} = -\frac{2 a_i}{\Delta x^2} + c_i
$$
on the diagonal and
$$
A_{i,i\pm1} = \frac{a_i}{\Delta x^2} \pm \frac{b_i}{2 \Delta x}
$$
on the sub-diagonals. We can take care of the boundary conditions either by using the ghost-points approach or by incorporating them into the right-hand-side evaluation.
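The formulas above translate directly into code. Here is a short sketch (not part of the original notes) that assembles the system for given coefficient functions $a(x)$, $b(x)$, $c(x)$, folding the Dirichlet data into the right-hand side:
```
import numpy

def build_general_system(a, b, c, f, x_a, x_b, alpha, beta, m):
    """Assemble A U = F for a u'' + b u' + c u = f with u(x_a) = alpha, u(x_b) = beta."""
    delta_x = (x_b - x_a) / (m + 1)
    x = numpy.linspace(x_a, x_b, m + 2)[1:-1]   # interior points only
    A = numpy.zeros((m, m))
    F = numpy.asarray(f(x), dtype=float)
    for i in range(m):
        A[i, i] = -2.0 * a(x[i]) / delta_x**2 + c(x[i])
        if i > 0:
            A[i, i - 1] = a(x[i]) / delta_x**2 - b(x[i]) / (2.0 * delta_x)
        if i < m - 1:
            A[i, i + 1] = a(x[i]) / delta_x**2 + b(x[i]) / (2.0 * delta_x)
    # Known boundary values move to the right-hand side
    F[0] -= alpha * (a(x[0]) / delta_x**2 - b(x[0]) / (2.0 * delta_x))
    F[-1] -= beta * (a(x[-1]) / delta_x**2 + b(x[-1]) / (2.0 * delta_x))
    return A, F
```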
### Example:
Consider the steady-state heat conduction problem with a variable $\kappa(x)$ so that
$$
(\kappa(x) u'(x))' = f(x), ~~~~ \Omega = [0, 1] \\
u(0) = \alpha ~~~~ u(1) = \beta
$$
By the product rule we know
$$
\kappa(x) u''(x) + \kappa'(x) u'(x) = f(x).
$$
It turns out that in this case this approach is not really the best approach to solving the problem. In many cases it is best to discretize the original form of the physics rather than a perhaps equivalent formulation. To demonstrate this let's try to construct a system to solve the original equations
$$
(\kappa(x) u'(x))' = f(x).
$$
First we will approximate the expression
$$
\kappa(x) u'(x)
$$
but at the points half-way between the points $x_i$, i.e. at $x_{i + 1/2}$.
Using a centered difference with spacing $\Delta x$ about $x_{i+1/2}$ we find
$$
\kappa(x_{i+1/2}) u'(x_{i+1/2}) = \kappa_{i+1/2} \frac{U_{i+1} - U_i}{\Delta x}.
$$
Now taking this approximation and differencing it with the same difference centered at $x_{i-1/2}$ leads to
$$\begin{aligned}
(\kappa(x_i) u'(x_i))' &= \frac{1}{\Delta x} \left [ \kappa_{i+1/2} \frac{U_{i+1} - U_i}{\Delta x} - \kappa_{i-1/2} \frac{U_{i} - U_{i-1}}{\Delta x} \right ] \\
&= \frac{\kappa_{i+1/2}U_{i+1} - \kappa_{i+1/2} U_i -\kappa_{i-1/2} U_{i} + \kappa_{i-1/2} U_{i-1}}{\Delta x^2} \\
&= \frac{\kappa_{i+1/2}U_{i+1} - (\kappa_{i+1/2} + \kappa_{i-1/2}) U_i + \kappa_{i-1/2} U_{i-1}}{\Delta x^2}
\end{aligned}$$
Note that these formulations are actually equivalent to $\mathcal{O}(\Delta x^2)$. The matrix entries are
$$\begin{aligned}
A_{i,i} = -\frac{\kappa_{i+1/2} + \kappa_{i-1/2}}{\Delta x^2} \\
A_{i,i \pm 1} = \frac{\kappa_{i\pm 1/2}}{\Delta x^2}.
\end{aligned}$$
Note that this latter discretization is symmetric. This will have consequences as to how well or quickly we can solve the resulting system of linear equations.
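A sketch of the conservative (flux-form) discretization described above (again not from the original notes); a quick check confirms the resulting matrix is symmetric:
```
import numpy

def build_flux_form_matrix(kappa, m, x_a=0.0, x_b=1.0):
    """Assemble the interior matrix for (kappa u')' using half-grid values of kappa."""
    delta_x = (x_b - x_a) / (m + 1)
    x = numpy.linspace(x_a, x_b, m + 2)
    kappa_half = kappa(0.5 * (x[:-1] + x[1:]))   # kappa evaluated at x_{i +/- 1/2}
    A = numpy.zeros((m, m))
    for i in range(m):
        A[i, i] = -(kappa_half[i] + kappa_half[i + 1]) / delta_x**2
        if i > 0:
            A[i, i - 1] = kappa_half[i] / delta_x**2
        if i < m - 1:
            A[i, i + 1] = kappa_half[i + 1] / delta_x**2
    return A

A = build_flux_form_matrix(lambda x: 1.0 + x**2, 10)
print(numpy.allclose(A, A.T))   # True: the discretization is symmetric
```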
## Non-Linear Equations
Our model problem, Poisson's equation, is a linear BVP. How would we approach a non-linear problem? As a new model problem let's consider the non-linear pendulum problem. The physical system is a mass $m$ connected to a rigid, massless rod of length $L$ which is allowed to swing about a point. The angle $\theta(t)$ is taken with reference to the stable at-rest point with the mass hanging downwards.
This system can be described by
$$
\theta''(t) = \frac{-g}{L} \sin(\theta(t)).
$$
We will take $\frac{g}{L} = 1$ for convenience.
Looking at the Taylor series of $\sin$ we can approximate this equation for small $\theta$ as
$$
\sin(\theta) \approx \theta - \frac{\theta^3}{6} + \mathcal{O}(\theta^5)
$$
so that
$$
\theta'' = -\theta.
$$
We know that this equation has solutions of the form
$$
\theta(t) = C_1 \cos t + C_2 \sin t.
$$
We clearly need two boundary conditions to uniquely specify the system which can be a bit awkward given that we usually specify these at two points in the spatial domain. Since we are in time we can specify the initial position of the pendulum $\theta(0) = \alpha$ however the second condition would specify where the pendulum would be sometime in the future, say $\theta(T) = \beta$. We could also specify another initial condition such as the angular velocity $\theta'(0) = \sigma$.
```
# Simple linear pendulum solutions
def linear_pendulum(t, alpha=0.01, beta=0.01, T=1.0):
C_1 = alpha
C_2 = (beta - alpha * numpy.cos(T)) / numpy.sin(T)
return C_1 * numpy.cos(t) + C_2 * numpy.sin(t)
alpha = [0.1, -0.1, -1.0]
beta = [0.1, 0.1, 0.0]
T = [1.0, 1.0, 1.0]
t = numpy.linspace(0, 10.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
for i in range(len(alpha)):
axes.plot(t, linear_pendulum(t, alpha[i], beta[i], T[i]))
axes.set_title("Solutions to the Linear Pendulum Problem")
axes.set_xlabel("t")
axes.set_ylabel("$\theta$")
plt.show()
```
But how would we go about handling the fully non-linear problem? First let's discretize using our approach to date with the second order, centered second derivative finite difference approximation to find
$$
\frac{1}{\Delta t^2}(\theta_{i+1} - 2 \theta_i + \theta_{i-1}) + \sin (\theta_i) = 0.
$$
The most common approach to solving a non-linear BVP like this (and many non-linear PDEs for that matter) is to use Newton's method. Recall that if we have a non-linear function $G(\theta)$ and we want to find $\theta$ such that
$$
G(\theta) = 0
$$
we can expand $G(\theta)$ in a Taylor series to find
$$
G(\theta^{[k+1]}) = G(\theta^{[k]}) + G'(\theta^{[k]}) (\theta^{[k+1]} - \theta^{[k]}) + \mathcal{O}((\theta^{[k+1]} - \theta^{[k]})^2)
$$
If we want $G(\theta^{[k+1]}) = 0$ we can set this in the expression above (this is also known as a fixed point iteration) and dropping the higher order terms we can solve for $\theta^{[k+1]}$ to find
$$\begin{aligned}
0 &= G(\theta^{[k]}) + G'(\theta^{[k]}) (\theta^{[k+1]} - \theta^{[k]} )\\
G'(\theta^{[k]}) \theta^{[k+1]} &= G'(\theta^{[k]}) \theta^{[k]} - G(\theta^{[k]})
\end{aligned}$$
At this point we need to be careful, if we have a system of equations we cannot simply divide through by $G'(\theta^{[k]})$ (which is now a matrix) to find our new value $\theta^{[k+1]}$. Instead we need to invert the matrix $G'(\theta^{[k]})$. Another way to write this is as an update to the value $\theta^{[k+1]}$ where
$$
\theta^{[k+1]} = \theta^{[k]} + \delta^{[k]}
$$
where
$$
J(\theta^{[k]}) \delta^{[k]} = -G(\theta^{[k]}).
$$
Here we have introduced notation for the **Jacobian matrix** whose elements are
$$
J_{ij}(\theta) = \frac{\partial}{\partial \theta_j} G_i(\theta).
$$
So how do we compute the Jacobian matrix? Since we know the system of equations in this case we can write down in general what the entries of $J$ are.
$$
\frac{1}{\Delta t^2}(\theta_{i+1} - 2 \theta_i + \theta_{i-1}) + \sin (\theta_i) = 0.
$$
$$
J_{ij}(\theta) = \left \{ \begin{aligned}
&\frac{1}{\Delta t^2} & & j = i - 1, j = i + 1 \\
-&\frac{2}{\Delta t^2} + \cos(\theta_i) & & j = i \\
&0 & & \text{otherwise}
\end{aligned} \right .
$$
With the Jacobian in hand we can solve the BVP by iterating until some stopping criteria is met (we have converged to our satisfaction).
### Example
Solve the linear and non-linear pendulum problem with $T=2\pi$, $\alpha = \beta = 0.7$.
- Does the linear equation have a unique solution
- Do you expect the original problem to have a unique solution (i.e. does the non-linear problem have a unique solution)?
```
def solve_nonlinear_pendulum(m, alpha, beta, T, max_iterations=100, tolerance=1e-3, verbose=False):
# Discretization
t_bc = numpy.linspace(0.0, T, m + 2)
t = t_bc[1:-1]
delta_t = T / (m + 1)
diagonal = numpy.ones(t.shape)
G = numpy.empty(t_bc.shape)
# Initial guess
theta = 0.7 * numpy.cos(t_bc)
theta[0] = alpha
theta[-1] = beta
# Main iteration loop
success = False
    for num_step in range(1, max_iterations):
# Construct Jacobian matrix
J = numpy.diag(diagonal * -2.0 / delta_t**2 + numpy.cos(theta[1:-1]), 0)
J += numpy.diag(diagonal[:-1] / delta_t**2, -1)
J += numpy.diag(diagonal[:-1] / delta_t**2, 1)
# Construct vector G
G = (theta[:-2] - 2.0 * theta[1:-1] + theta[2:]) / delta_t**2 + numpy.sin(theta[1:-1])
# Take care of BCs
G[0] = (alpha - 2.0 * theta[1] + theta[2]) / delta_t**2 + numpy.sin(theta[1])
G[-1] = (theta[-3] - 2.0 * theta[-2] + beta) / delta_t**2 + numpy.sin(theta[-2])
# Solve
delta = numpy.linalg.solve(J, -G)
theta[1:-1] += delta
if verbose:
print " (%s) Step size: %s" % (num_step, numpy.linalg.norm(delta))
if numpy.linalg.norm(delta) < tolerance:
success = True
break
if not success:
        print(numpy.linalg.norm(delta))
raise ValueError("Reached maximum allowed steps before convergence criteria met.")
return t_bc, theta
t, theta = solve_nonlinear_pendulum(100, 0.7, 0.7, 2.0 * numpy.pi, tolerance=1e-9, verbose=True)
plt.plot(t, theta)
plt.show()
# Linear Problem
alpha = 0.7
beta = 0.7
T = 2.0 * numpy.pi
t = numpy.linspace(0, T, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, linear_pendulum(t, alpha, beta, T), 'r-', label="Linear")
# Non-linear problem
t, theta = solve_nonlinear_pendulum(100, alpha, beta, T)
axes.plot(t, theta, 'b-', label="Non-Linear")
axes.set_title("Solutions to the Pendulum Problem")
axes.set_xlabel("t")
axes.set_ylabel("$\theta$")
plt.show()
```
### Accuracy
Note that there are two different ideas of convergence going on in our non-linear solver above: the convergence of the finite difference approximation, controlled by $\Delta t$, and the convergence of the Newton iteration. We expect both to be second order (Newton's method converges quadratically under suitable assumptions). How do these two combine to affect the global error though?
First let's compute the LTE
$$\begin{aligned}
\tau_{i} &= \frac{1}{\Delta t^2} (\theta(t_{i+1}) - 2 \theta(t_i) + \theta(t_{i-1})) + \sin \theta(t_i) \\
&= \frac{1}{\Delta t^2} \left (\theta(t_i) + \theta'(t_i) \Delta t + \frac{1}{2} \theta''(t_i) \Delta t^2 + \frac{1}{6} \theta'''(t_i) \Delta t^3 + \frac{1}{24} \theta^{(4)}(t_i) \Delta t^4 - 2 \theta(t_i) \right .\\
&~~~~~~~~~~~~~~ \left . + \theta(t_i) - \theta'(t_i) \Delta t + \frac{1}{2} \theta''(t_i) \Delta t^2 - \frac{1}{6} \theta'''(t_i) \Delta t^3 + \frac{1}{24} \theta^{(4)}(t_i) \Delta t^4 + \mathcal{O}(\Delta t^5) \right) + \sin \theta(t_i) \\
&= \frac{1}{\Delta t^2} \left (\theta''(t_i) \Delta t^2 + \frac{1}{12} \theta^{(4)}(t_i) \Delta t^4 + \mathcal{O}(\Delta t^6) \right) + \sin \theta(t_i) \\
&= \theta''(t_i) + \sin \theta(t_i) + \frac{1}{12} \theta^{(4)}(t_i) \Delta t^2 + \mathcal{O}(\Delta t^4).
\end{aligned}$$
For Newton's method we can consider the difference of taking a step with the true solution to the BVP $\hat{\theta}$ vs. the approximate solution $\theta$. We can formulate an analogous LTE where
$$
G(\Theta) = 0 ~~~ G(\hat{\Theta}) = \tau.
$$
Following our discussion from before we can use these two expressions to find
$$
G(\Theta) - G(\hat{\Theta}) = -\tau
$$
and from here we want to derive an expression of the global error $E = \Theta - \hat{\Theta}$.
Since $G(\theta)$ is not linear we will write the above expression as a Taylor series to find
$$
G(\Theta) = G(\hat{\Theta}) + J(\hat{\Theta}) E + \mathcal{O}(||E||^2).
$$
Using this expression we find
$$
J(\hat{\Theta}) E = -\tau + \mathcal{O}(||E||^2).
$$
Ignoring higher order terms then we have a linear expression for $E$ which we can solve.
This motivates another definition of stability, now involving the Jacobian of $G$. The nonlinear difference method $G(\Theta) = 0$ is *stable* in some norm $||\cdot||$ if the matrices $(J^{\Delta t})^{-1}$ are uniformly bounded in that norm as $\Delta t \rightarrow 0$. In other words, $\exists C$ and $\Delta t^0$ s.t.
$$
||(J^{\Delta t})^{-1}|| \leq C ~~~ \forall \Delta t < \Delta t^0.
$$
Given this sense of stability and consistency ($||\tau|| \rightarrow 0$ as $\Delta t \rightarrow 0$) then the method converges as
$$
||E^{\Delta t}|| \rightarrow 0 ~~~ \text{as} ~~~ \Delta t \rightarrow 0.
$$
Note that we are still not guaranteed that Newton's method will converge, say from a bad initial guess, even though we have shown convergence of the discretization. It can be proven that Newton's method will converge from a sufficiently good initial guess. It should also be noted that even if Newton's method drives its residual down to round-off, this does not imply that the global error of the finite difference approximation will be that small as well.
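As a rough numerical check (a sketch, not in the original notes), we can compare the nonlinear pendulum solution at a few resolutions against a much finer reference solution; the measured error should drop by roughly a factor of four each time $\Delta t$ is halved, consistent with second order accuracy:
```
# Reference solution on a fine grid, interpolated onto the coarser grids
t_ref, theta_ref = solve_nonlinear_pendulum(801, 0.7, 0.7, 2.0 * numpy.pi, tolerance=1e-10)

for m in [25, 50, 100]:
    t, theta = solve_nonlinear_pendulum(m, 0.7, 0.7, 2.0 * numpy.pi, tolerance=1e-10)
    error = numpy.linalg.norm(theta - numpy.interp(t, t_ref, theta_ref), ord=numpy.inf)
    print(m, error)
```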
# Many to Many Classification
Simple example for Many to Many Classification (Simple pos tagger) by Recurrent Neural Networks
- Creating the **data pipeline** with `tf.data`
- Preprocessing word sequences (variable input sequence length) using `padding technique` by `user function (pad_seq)`
- Using `tf.nn.embedding_lookup` for getting vector of tokens (eg. word, character)
- Training **many to many classification** with `tf.contrib.seq2seq.sequence_loss`
- Masking invalid tokens with `tf.sequence_mask`
- Creating the model as **Class**
- Reference
- https://github.com/aisolab/sample_code_of_Deep_learning_Basics/blob/master/DLEL/DLEL_12_2_RNN_(toy_example).ipynb
### Setup
```
import os, sys
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import string
%matplotlib inline
slim = tf.contrib.slim
print(tf.__version__)
```
### Prepare example data
```
sentences = [['I', 'feel', 'hungry'],
['tensorflow', 'is', 'very', 'difficult'],
['tensorflow', 'is', 'a', 'framework', 'for', 'deep', 'learning'],
['tensorflow', 'is', 'very', 'fast', 'changing']]
pos = [['pronoun', 'verb', 'adjective'],
['noun', 'verb', 'adverb', 'adjective'],
['noun', 'verb', 'determiner', 'noun', 'preposition', 'adjective', 'noun'],
['noun', 'verb', 'adverb', 'adjective', 'verb']]
# word dic
word_list = []
for elm in sentences:
word_list += elm
word_list = list(set(word_list))
word_list.sort()
word_list = ['<pad>'] + word_list
word_dic = {word : idx for idx, word in enumerate(word_list)}
print(word_dic)
# pos dic
pos_list = []
for elm in pos:
pos_list += elm
pos_list = list(set(pos_list))
pos_list.sort()
pos_list = ['<pad>'] + pos_list
print(pos_list)
pos_dic = {pos : idx for idx, pos in enumerate(pos_list)}
pos_dic
pos_idx_to_dic = {elm[1] : elm[0] for elm in pos_dic.items()}
pos_idx_to_dic
```
### Create pad_seq function
```
def pad_seq(sequences, max_len, dic):
seq_len, seq_indices = [], []
for seq in sequences:
seq_len.append(len(seq))
seq_idx = [dic.get(char) for char in seq]
seq_idx += (max_len - len(seq_idx)) * [dic.get('<pad>')] # 0 is idx of meaningless token "<pad>"
seq_indices.append(seq_idx)
return seq_len, seq_indices
```
### Pre-process data
```
max_length = 10
X_length, X_indices = pad_seq(sequences = sentences, max_len = max_length, dic = word_dic)
print(X_length, np.shape(X_indices))
y = [elm + ['<pad>'] * (max_length - len(elm)) for elm in pos]
y = [list(map(lambda el : pos_dic.get(el), elm)) for elm in y]
print(np.shape(y))
y
```
### Define SimPosRNN
```
class SimPosRNN:
def __init__(self, X_length, X_indices, y, n_of_classes, hidden_dim, max_len, word_dic):
# Data pipeline
with tf.variable_scope('input_layer'):
            # Implement the input layer here
            # Use tf.get_variable
            # Use tf.nn.embedding_lookup
self._X_length = X_length
self._X_indices = X_indices
self._y = y
# RNN cell (many to many)
with tf.variable_scope('rnn_cell'):
            # Implement the RNN cell here
            # Use tf.contrib.rnn.BasicRNNCell
            # Use tf.nn.dynamic_rnn
            # Use tf.contrib.rnn.OutputProjectionWrapper
        with tf.variable_scope('seq2seq_loss'):
            # Define masks using tf.sequence_mask
            # Pass masks to the weights argument of tf.contrib.seq2seq.sequence_loss
        with tf.variable_scope('prediction'):
            # Use tf.argmax
def predict(self, sess, X_length, X_indices):
        # Implement the predict instance method here
return sess.run(self._prediction, feed_dict = feed_prediction)
```
### Create a model of SimPosRNN
```
# hyper-parameter#
lr = .003
epochs = 100
batch_size = 2
total_step = int(np.shape(X_indices)[0] / batch_size)
print(total_step)
## create data pipeline with tf.data
# Implement the data pipeline yourself using tf.data
# The model is ultimately created by the code below.
sim_pos_rnn = SimPosRNN(X_length = X_length_mb, X_indices = X_indices_mb, y = y_mb,
n_of_classes = 8, hidden_dim = 16, max_len = max_length, word_dic = word_dic)
```
### Create training op and train model
```
## create training op
opt = tf.train.AdamOptimizer(learning_rate = lr)
training_op = opt.minimize(loss = sim_pos_rnn.seq2seq_loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
tr_loss_hist = []
for epoch in range(epochs):
avg_tr_loss = 0
tr_step = 0
sess.run(tr_iterator.initializer)
try:
while True:
            # Implement the training step here.
except tf.errors.OutOfRangeError:
pass
avg_tr_loss /= tr_step
tr_loss_hist.append(avg_tr_loss)
if (epoch + 1) % 10 == 0:
print('epoch : {:3}, tr_loss : {:.3f}'.format(epoch + 1, avg_tr_loss))
yhat = sim_pos_rnn.predict(sess = sess, X_length = X_length, X_indices = X_indices)
yhat
y
yhat = [list(map(lambda elm : pos_idx_to_dic.get(elm), row)) for row in yhat]
for elm in yhat:
print(elm)
```
# A Face Recognition Library for Android (NDK)
## Create the project with the wizard



## Add the dlib source code to the project
* Copy the dlib folder from the dlib directory into app/src/main/
## Add the JNI interface
### Create the Java interface class
Create the class FaceRecognition under app/src/main/java/com/wangjunjian/facerecognition
```java
package com.wangjunjian.facerecognition;
import android.graphics.Rect;
public class FaceRecognition {
static {
System.loadLibrary("face-recognition");
}
public native void detect(String filename, Rect rect);
}
```
### Generate the C++ header file from the Java interface class
Open a Terminal window and enter the following commands (**on Windows, replace the : with ;**)
```bash
cd app/src/main/
javah -d jni -classpath /Users/wjj/Library/Android/sdk/platforms/android-21/android.jar:java com.wangjunjian.facerecognition.FaceRecognition
```
### References
* [JNI cannot determine the signature of Bitmap](https://blog.csdn.net/wxxgreat/article/details/48030775)
* [Hitting the problem of an undeterminable Bitmap signature while editing JNI header files](https://www.jianshu.com/p/b49bdcbfb5ed)
## Implement face detection
Open app/src/main/cpp/face-recognition.cpp
```cpp
#include <jni.h>
#include <string>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_io.h>
#include "jni/com_wangjunjian_facerecognition_FaceRecognition.h"
using namespace dlib;
using namespace std;
JNIEXPORT void JNICALL Java_com_wangjunjian_facerecognition_FaceRecognition_detect
(JNIEnv *env, jobject clazz, jstring filename, jobject rect)
{
const char* pfilename = env->GetStringUTFChars(filename, JNI_FALSE);
static frontal_face_detector detector = get_frontal_face_detector();
array2d<unsigned char> img;
load_image(img, pfilename);
env->ReleaseStringUTFChars(filename, pfilename);
std::vector<rectangle> dets = detector(img, 0);
if (dets.size() > 0)
{
rectangle faceRect = dets[0];
jclass rectClass = env->GetObjectClass(rect);
jfieldID fidLeft = env->GetFieldID(rectClass, "left", "I");
env->SetIntField(rect, fidLeft, faceRect.left());
jfieldID fidTop = env->GetFieldID(rectClass, "top", "I");
env->SetIntField(rect, fidTop, faceRect.top());
jfieldID fidRight = env->GetFieldID(rectClass, "right", "I");
env->SetIntField(rect, fidRight, faceRect.right());
jfieldID fidBottom = env->GetFieldID(rectClass, "bottom", "I");
env->SetIntField(rect, fidBottom, faceRect.bottom());
}
}
```
### References
* [Passing data between Java and C on Android using JNI](https://blog.csdn.net/furongkang/article/details/6857610)
## Modify app/CMakeLists.txt
```
# For more information about using CMake with Android Studio, read the
# documentation: https://d.android.com/studio/projects/add-native-code.html
# Sets the minimum version of CMake required to build the native library.
cmake_minimum_required(VERSION 3.4.1)
# Set the library output path variable
set(DISTRIBUTION_DIR ${CMAKE_SOURCE_DIR}/../distribution)
# Include dlib's CMake configuration
include(${CMAKE_SOURCE_DIR}/src/main/dlib/cmake)
# Creates and names a library, sets it as either STATIC
# or SHARED, and provides the relative paths to its source code.
# You can define multiple libraries, and CMake builds them for you.
# Gradle automatically packages shared libraries with your APK.
add_library( # Sets the name of the library.
face-recognition
# Sets the library as a shared library.
SHARED
# Provides a relative path to your source file(s).
src/main/cpp/face-recognition.cpp )
# Set the output path for each platform ABI
set_target_properties(face-recognition PROPERTIES
LIBRARY_OUTPUT_DIRECTORY
${DISTRIBUTION_DIR}/libs/${ANDROID_ABI})
# Searches for a specified prebuilt library and stores the path as a
# variable. Because CMake includes system libraries in the search path by
# default, you only need to specify the name of the public NDK library
# you want to add. CMake verifies that the library exists before
# completing its build.
find_library( # Sets the name of the path variable.
log-lib
# Specifies the name of the NDK library that
# you want CMake to locate.
log )
# Specifies libraries CMake should link to your target library. You
# can link multiple libraries, such as libraries you define in this
# build script, prebuilt third-party libraries, or system libraries.
# Link dlib and the android library
target_link_libraries( # Specifies the target library.
face-recognition
android
dlib
# Links the target library to the log library
# included in the NDK.
${log-lib} )
```
### References
* [Building NDK code with CMake in Android Studio](https://blog.csdn.net/joe544351900/article/details/53637549)
## Modify app/build.gradle
```
//Change the module to a library
//apply plugin: 'com.android.application'
apply plugin: 'com.android.library'
android {
compileSdkVersion 26
defaultConfig {
        //Remove the application ID
//applicationId "com.wangjunjian.facerecognition"
minSdkVersion 21
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
externalNativeBuild {
cmake {
arguments '-DANDROID_PLATFORM=android-21',
'-DANDROID_TOOLCHAIN=clang', '-DANDROID_STL=c++_static', '-DCMAKE_BUILD_TYPE=Release ..'
cppFlags "-frtti -fexceptions -std=c++11 -O3"
}
}
        //Target platform ABIs to build
ndk {
abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
}
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
externalNativeBuild {
cmake {
path "CMakeLists.txt"
}
}
    //JNI library output path
sourceSets {
main {
jniLibs.srcDirs = ['../distribution/libs']
}
}
    //Eliminate the error: Caused by: com.android.builder.merge.DuplicateRelativeFileException: More than one file was found with OS independent path 'lib/x86/libface-recognition.so'
packagingOptions {
pickFirst 'lib/armeabi-v7a/libface-recognition.so'
pickFirst 'lib/arm64-v8a/libface-recognition.so'
pickFirst 'lib/x86/libface-recognition.so'
pickFirst 'lib/x86_64/libface-recognition.so'
}
}
//Package the jar into the specified path
task makeJar(type: Copy) {
delete 'build/libs/face-recognition.jar'
from('build/intermediates/packaged-classes/release/')
into('../distribution/libs/')
include('classes.jar')
rename('classes.jar', 'face-recognition.jar')
}
makeJar.dependsOn(build)
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:26.1.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
}
```
### References
* [Android NDK samples with Android Studio](https://github.com/googlesamples/android-ndk)
* [Packaging a Module into a Jar with Android Studio](https://www.cnblogs.com/xinaixia/p/7660173.html)
* [could not load library "libc++_shared.so" needed by "libgpg.so"](https://github.com/playgameservices/play-games-plugin-for-unity/issues/280)
* [Android NDK cannot load libc++_shared.so, gets "cannot locate symbol 'rand' reference](https://stackoverflow.com/questions/28504875/android-ndk-cannot-load-libc-shared-so-gets-cannot-locate-symbol-rand-refe)
* [Notes on various Android Studio pitfalls](https://blog.csdn.net/u012874222/article/details/50616698)
* [Gradle flavors for android with custom source sets - what should the gradle files look like?](https://stackoverflow.com/questions/19461145/gradle-flavors-for-android-with-custom-source-sets-what-should-the-gradle-file)
* [Calling ndk-build from Gradle in Android Studio 2.2](https://www.jianshu.com/p/0e50ae3c4d0d)
* [Android NDK: How to build for ARM64-v8a with minimumSdkVersion = 19](https://stackoverflow.com/questions/41102128/android-ndk-how-to-build-for-arm64-v8a-with-minimumsdkversion-19)
## Build and export the development libraries
Open a Terminal window and enter the command
```bash
./gradlew makeJar
```

### References
* [-bash :gradlew command not found](https://blog.csdn.net/yyh352091626/article/details/52343951)
## List the contents of the jar
```bash
jar vtf distribution/libs/face-recognition.jar
```
### References
* [Viewing the archive contents of a jar package on Linux](https://blog.csdn.net/tanga842428/article/details/55101253)
## References
* [Face Landmarks In Your Android App](http://slides.com/boywang/face-landmarks-in-your-android-app/fullscreen#/)
* [dlib-android](https://github.com/tzutalin/dlib-android)
* [Understanding Android in Depth (1): Gradle Explained](http://www.infoq.com/cn/articles/android-in-depth-gradle/)
* [Generating .so files with the Android NDK and Gradle 3.0+](https://blog.csdn.net/xiaozhu0922/article/details/78835144)
* [Android Studio step by step: packaging an SO library with the NDK and exposing an API for it (with demo)](https://blog.csdn.net/u011445031/article/details/72884703)
* [Building dlib for android ndk](https://stackoverflow.com/questions/41331400/building-dlib-for-android-ndk)
* [Writing your first NDK program with Android Studio (very detailed)](https://blog.csdn.net/young_time/article/details/80346631)
* [JNI/NDK development workflow in Android Studio 3.0](https://www.jianshu.com/p/a37782b56770)
* [Building the dlib 18 library for Android and running the matrix_ex demo](https://blog.csdn.net/longji/article/details/78115807)
* [Android development: porting the dlib library to the NDK with CMake under Android Studio](https://blog.csdn.net/u012525096/article/details/78950979)
* [Writing makefiles (Android.mk) for the Android build system](http://www.cnblogs.com/hesiming/archive/2011/03/15/1984444.html)
* [dlib-android/jni/jni_detections/jni_pedestrian_det.cpp](https://github.com/tzutalin/dlib-android/blob/master/jni/jni_detections/jni_pedestrian_det.cpp)
* [Face Detection using MTCNN and TensorFlow in android](http://androidcodehub.com/face-detection-using-mtcnn-tensorflow-android/)
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
from sklearn import svm
import pandas as pd
import os
import scipy as sc
# get the annotated data to build the classifier
direc = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step3\Annotation'
file = pd.read_csv(direc + '\Mahad_ManualAnnotation_pooledAllDataTogether.csv')
```
Check the distribution of the true and false trials
```
mu, sigma = 0, 0.1 # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)
k2_test, p_test = sc.stats.normaltest(s, axis=0, nan_policy='omit')
print("p = {:g}".format(p_test))
if p_test < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis
print('This random distribution is not normally distributed')
else:
print('This random distribution is normally distributed')
trueTrials = file.FramesInView[file.TrialStatus == 1]
k2_true, p_true = sc.stats.normaltest(np.log(trueTrials), axis=0, nan_policy='omit')
print("p = {:g}".format(p_true))
if p_true < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis
print('the true trials are not normally distributed')
else:
print('The true trials are normally distributed')
falseTrials = file.FramesInView[file.TrialStatus == 0]
k2_false, p_false = sc.stats.normaltest(np.log(falseTrials), axis=0, nan_policy='omit')
print("p = {:g}".format(p_false))
if p_false < 0.05: # null hypothesis - the distribution is normally distributed; less than alpha - reject null hypothesis
print('the false trials are not normally distributed')
else:
print('The false trials are normally distributed')
x = np.asarray(file.FramesInView)
y = np.zeros(len(x))
data = np.transpose(np.array([x,y]))
Manual_Label = np.asarray(file.TrialStatus)
plt.scatter(data[:,0],data[:,1], c = Manual_Label) #see what the data looks like
# build the linear classifier
clf = svm.SVC(kernel = 'linear', C = 1.0)
clf.fit(data,Manual_Label)
w = clf.coef_[0]
y0 = clf.intercept_
new_line = w[0]*data[:,0] - y0
new_line.shape
# see what the classifier did to the labels - find a way to draw a line along the "point" and draw "margin"
plt.hist(trueTrials, bins =10**np.linspace(0, 4, 40), color = 'lightyellow', label = 'true trials', zorder=0)
plt.hist(falseTrials, bins =10**np.linspace(0, 4, 40), color = 'mediumpurple', alpha=0.35, label = 'false trials', zorder=5)
annotation = []
for x,_ in data:
YY = clf.predict([[x,0]])[0]
annotation.append(YY)
plt.scatter(data[:,0],data[:,1]+10, c = annotation,
alpha=0.3, edgecolors='none', zorder=10, label = 'post-classification')
# plt.plot(new_line)
plt.xscale("log")
plt.yscale('linear')
plt.xlabel('Trial length (in frame Number)')
plt.title('Using a Classifier to identify true trials')
plt.legend()
# plt.savefig(r'C:\Users\Daniellab\Desktop\Light_level_videos_c-10\Data\Step3\Annotation\Figuers_3.svg')
plt.tight_layout()
# run the predictor for all dataset and annotate them
direc = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step2_Tanvi_Method'
new_path = r'C:\Users\Daniellab\Desktop\Light_level_videos_second_batch\Data\Step3'
file = [file for file in os.listdir(direc) if file.endswith('.csv')]
# test = file[0]
for item in file:
print(item)
df = pd.read_csv(direc + '/' + item)
label = []
# run the classifer on this
for xx in df.Frames_In_View:
YY = clf.predict([[xx,0]])[0]
label.append(YY)
df1 = pd.DataFrame({'label': label})
new_df = pd.concat([df, df1], axis = 1)
# new_df.to_csv(new_path + '/' + item[:-4] + '_labeled.csv')
```
#$EXERCISE_PREAMBLE$
As always, run the setup code below before working on the questions (and if you leave this notebook and come back later, remember to run the setup code again).
```
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex5 import *
print('Setup complete.')
```
# Exercises
## 1.
Have you ever felt debugging involved a bit of luck? The following program has a bug. Try to identify the bug and fix it.
```
def has_lucky_number(nums):
"""Return whether the given list of numbers is lucky. A lucky list contains
at least one number divisible by 7.
"""
for num in nums:
if num % 7 == 0:
return True
else:
return False
```
Try to identify the bug and fix it in the cell below:
```
def has_lucky_number(nums):
"""Return whether the given list of numbers is lucky. A lucky list contains
at least one number divisible by 7.
"""
for num in nums:
if num % 7 == 0:
return True
else:
return False
q1.check()
#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
```
## 2.
### a.
Look at the Python expression below. What do you think we'll get when we run it? When you've made your prediction, uncomment the code and run the cell to see if you were right.
```
#[1, 2, 3, 4] > 2
```
### b.
R and Python have some libraries (like numpy and pandas) that compare each element of the list to 2 (i.e. do an 'element-wise' comparison) and give us a list of booleans like `[False, False, True, True]`.
Implement a function that reproduces this behaviour, returning a list of booleans corresponding to whether the corresponding element is greater than n.
```
def elementwise_greater_than(L, thresh):
"""Return a list with the same length as L, where the value at index i is
True if L[i] is greater than thresh, and False otherwise.
>>> elementwise_greater_than([1, 2, 3, 4], 2)
[False, False, True, True]
"""
pass
q2.check()
#_COMMENT_IF(PROD)_
q2.solution()
```
## 3.
Complete the body of the function below according to its docstring
```
def menu_is_boring(meals):
"""Given a list of meals served over some period of time, return True if the
same meal has ever been served two days in a row, and False otherwise.
"""
pass
q3.check()
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
```
## 4. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>
Next to the Blackjack table, the Python Challenge Casino has a slot machine. You can get a result from the slot machine by calling `play_slot_machine()`. The number it returns is your winnings in dollars. Usually it returns 0. But sometimes you'll get lucky and get a big payday. Try running it below:
```
play_slot_machine()
```
By the way, did we mention that each play costs $1? Don't worry, we'll send you the bill later.
On average, how much money can you expect to gain (or lose) every time you play the machine? The casino keeps it a secret, but you can estimate the average value of each pull using a technique called the **Monte Carlo method**. To estimate the average outcome, we simulate the scenario many times, and return the average result.
Complete the following function to calculate the average value per play of the slot machine.
```
def estimate_average_slot_payout(n_runs):
"""Run the slot machine n_runs times and return the average net profit per run.
Example calls (note that return value is nondeterministic!):
>>> estimate_average_slot_payout(1)
-1
>>> estimate_average_slot_payout(1)
0.5
"""
pass
```
When you think you know the expected value per spin, uncomment the line below to see how close you were.
```
#_COMMENT_IF(PROD)_
q4.solution()
```
#$KEEP_GOING$
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = "Bike-Sharing-Dataset/hour.csv"
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.
```
rides[: 24 * 10].plot(x="dteday", y="cnt")
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ["season", "weathersit", "mnth", "hr", "weekday"]
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = [
"instant",
"dteday",
"season",
"weathersit",
"weekday",
"atemp",
"mnth",
"workingday",
"hr",
]
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ["casual", "registered", "cnt", "temp", "hum", "windspeed"]
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean) / std
```
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21 * 24 :]
# Now remove the test data from the data set
data = data[: -21 * 24]
# Separate the data into features and targets
target_fields = ["cnt", "casual", "registered"]
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = (
test_data.drop(target_fields, axis=1),
test_data[target_fields],
)
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[: -60 * 24], targets[: -60 * 24]
val_features, val_targets = features[-60 * 24 :], targets[-60 * 24 :]
```
## Time to build the network
Below you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
```
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y - Y) ** 2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you starting trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])
test_w_h_o = np.array([[0.3], [-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == "bike-sharing-dataset/hour.csv")
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(
np.all(network.activation_function(0.5) == 1 / (1 + np.exp(-0.5)))
)
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(
np.allclose(
network.weights_hidden_to_output,
np.array([[0.37275328], [-0.03172939]]),
)
)
self.assertTrue(
np.allclose(
network.weights_input_to_hidden,
np.array(
[
[0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801],
]
),
)
)
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
In a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes.
```
import sys
####################
### Set the hyperparameters in you myanswers.py file ###
####################
from my_answers import iterations, learning_rate, hidden_nodes, output_nodes
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {"train": [], "validation": []}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.iloc[batch].values, train_targets.iloc[batch]["cnt"]
network.train(X, y)
# Printing out the training progress
train_loss = MSE(
np.array(network.run(train_features)).T, train_targets["cnt"].values
)
val_loss = MSE(np.array(network.run(val_features)).T, val_targets["cnt"].values)
sys.stdout.write(
"\rProgress: {:2.1f}".format(100 * ii / float(iterations))
+ "% ... Training loss: "
+ str(train_loss)[:5]
+ " ... Validation loss: "
+ str(val_loss)[:5]
)
sys.stdout.flush()
losses["train"].append(train_loss)
losses["validation"].append(val_loss)
plt.plot(losses["train"], label="Training loss")
plt.plot(losses["validation"], label="Validation loss")
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8, 4))
mean, std = scaled_features["cnt"]
predictions = np.array(network.run(test_features)).T * std + mean
ax.plot(predictions[0], label="Prediction")
ax.plot((test_targets["cnt"] * std + mean).values, label="Data")
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.iloc[test_data.index]["dteday"])
dates = dates.apply(lambda d: d.strftime("%b %d"))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
<a href="https://colab.research.google.com/github/rlberry-py/rlberry/blob/main/notebooks/introduction_to_rlberry.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Introduction to

# Colab setup
```
# install rlberry library
!git clone https://github.com/rlberry-py/rlberry.git
!cd rlberry && git pull && pip install -e . > /dev/null 2>&1
# install ffmpeg-python for saving videos
!pip install ffmpeg-python > /dev/null 2>&1
# install optuna for hyperparameter optimization
!pip install optuna > /dev/null 2>&1
# packages required to show video
!pip install pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
print("")
print(" ~~~ Libraries installed, please restart the runtime! ~~~ ")
print("")
# Create directory for saving videos
!mkdir videos > /dev/null 2>&1
# Initialize display and import function to show videos
import rlberry.colab_utils.display_setup
from rlberry.colab_utils.display_setup import show_video
```
# Interacting with a simple environment
```
from rlberry.envs import GridWorld
# A grid world is a simple environment with finite states and actions, on which
# we can test simple algorithms.
# -> The reward function can be accessed by: env.R[state, action]
# -> And the transitions: env.P[state, action, next_state]
env = GridWorld(nrows=3, ncols=10,
reward_at = {(1,1):0.1, (2, 9):1.0},
walls=((1,4),(2,4), (1,5)),
success_probability=0.9)
# Let's visualize a random policy in this environment!
env.enable_rendering()
env.reset()
for tt in range(20):
action = env.action_space.sample()
next_state, reward, is_terminal, info = env.step(action)
# save video and clear buffer
env.save_video('./videos/gw.mp4', framerate=5)
env.clear_render_buffer()
# show video
show_video('./videos/gw.mp4')
```
# Creating an agent
Let's create an agent that runs value iteration to find a near-optimal policy.
This is possible in our GridWorld, because we have access to the transitions `env.P` and the rewards `env.R`.
An Agent must implement at least two methods, **fit()** and **policy()**.
It can also implement **sample_parameters()** used for hyperparameter optimization with [Optuna](https://optuna.org/).
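As a minimal sketch of this interface (purely illustrative: it does no learning, and is essentially the same shape as the `RandomAgent` baseline defined further below):
```
from rlberry.agents import Agent

class DoNothingAgent(Agent):
    """Bare-bones agent implementing only the required interface."""
    name = 'DoNothingAgent'

    def __init__(self, env, **kwargs):  # **kwargs keeps compatibility with the base class
        Agent.__init__(self, env, **kwargs)  # self.env is initialized in the base class

    def fit(self, **kwargs):
        # Nothing to learn in this sketch.
        pass

    def policy(self, observation, **kwargs):
        # Always pick action 0; a real agent would use what fit() computed.
        return 0
```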
```
import numpy as np
from rlberry.agents import Agent
class ValueIterationAgent(Agent):
name = 'ValueIterationAgent'
def __init__(self, env, gamma=0.99, epsilon=1e-5, **kwargs): # it's important to put **kwargs to ensure compatibility with the base class
"""
gamma: discount factor
epsilon: precision of value iteration
"""
Agent.__init__(self, env, **kwargs) # self.env is initialized in the base class
self.gamma = gamma
self.epsilon = epsilon
self.Q = None # Q function to be computed in fit()
def fit(self, **kwargs):
"""
Run value iteration.
"""
S, A = self.env.observation_space.n, self.env.action_space.n
Q = np.zeros((S, A))
V = np.zeros(S)
while True:
TQ = np.zeros((S, A))
for ss in range(S):
for aa in range(A):
TQ[ss, aa] = self.env.R[ss, aa] + self.gamma*self.env.P[ss, aa, :].dot(V)
V = TQ.max(axis=1)
if np.abs(TQ-Q).max() < self.epsilon:
break
Q = TQ
self.Q = Q
def policy(self, observation, **kwargs):
return self.Q[observation, :].argmax()
@classmethod
def sample_parameters(cls, trial):
"""
Sample hyperparameters for hyperparam optimization using Optuna (https://optuna.org/)
"""
gamma = trial.suggest_categorical('gamma', [0.1, 0.25, 0.5, 0.75, 0.99])
return {'gamma':gamma}
# Now, let's fit and test the agent!
agent = ValueIterationAgent(env)
agent.fit()
# Run agent's policy
env.enable_rendering()
state = env.reset()
for tt in range(20):
action = agent.policy(state)
state, reward, is_terminal, info = env.step(action)
# save video and clear buffer
env.save_video('./videos/gw.mp4', framerate=5)
env.clear_render_buffer()
# show video
show_video('./videos/gw.mp4')
```
# `AgentStats`: A powerful class for hyperparameter optimization, training and evaluating agents.
```
# Create random agent as a baseline
class RandomAgent(Agent):
name = 'RandomAgent'
def __init__(self, env, **kwargs): # it's important to put **kwargs to ensure compatibility with the base class
"""
Baseline agent that selects actions uniformly at random (no learning).
"""
Agent.__init__(self, env, **kwargs) # self.env is initialized in the base class
def fit(self, **kwargs):
pass
def policy(self, observation, **kwargs):
return self.env.action_space.sample()
from rlberry.stats import AgentStats, compare_policies
# Define parameters
vi_params = {'gamma':0.1, 'epsilon':1e-3}
# Create AgentStats to fit 4 agents using 1 job
vi_stats = AgentStats(ValueIterationAgent, env, eval_horizon=20, init_kwargs=vi_params, n_fit=4, n_jobs=1)
vi_stats.fit()
# Create AgentStats for baseline
baseline_stats = AgentStats(RandomAgent, env, eval_horizon=20, n_fit=1)
# Compare policies using 10 Monte Carlo simulations
output = compare_policies([vi_stats, baseline_stats], n_sim=10)
# The value of gamma above makes our VI agent quite bad! Let's optimize it.
vi_stats.optimize_hyperparams(n_trials=15, timeout=30, n_sim=5, n_fit=1, n_jobs=1, sampler_method='random', pruner_method='none')
# fit with optimized params
vi_stats.fit()
# ... and see the results
output = compare_policies([vi_stats, baseline_stats], n_sim=10)
```
| github_jupyter |
# Setup
### Imports
```
import sys
sys.path.append('../')
del sys
%reload_ext autoreload
%autoreload 2
from toolbox.parsers import standard_parser, add_task_arguments, add_model_arguments
from toolbox.utils import load_task, get_pretrained_model, to_class_name
import modeling.models as models
```
### Notebook functions
```
from numpy import argmax, mean
def run_models(model_names, word2vec, bart, args, train=False):
args.word2vec = word2vec
args.bart = bart
pretrained_model = get_pretrained_model(args)
for model_name in model_names:
args.model = model_name
print(model_name)
model = getattr(models, to_class_name(args.model))(args=args, pretrained_model=pretrained_model)
model.play(task=task, args=args)
if train:
valid_scores = model.valid_scores['average_precision']
test_scores = model.test_scores['average_precision']
valid_scores = [mean(epoch_scores) for epoch_scores in valid_scores]
test_scores = [mean(epoch_scores) for epoch_scores in test_scores]
i_max = argmax(valid_scores)
print("max for epoch %i" % (i_max+1))
print("valid score: %.5f" % valid_scores[i_max])
print("test score: %.5f" % test_scores[i_max])
```
### Parameters
```
ap = standard_parser()
add_task_arguments(ap)
add_model_arguments(ap)
args = ap.parse_args(["-m", "",
"--root", ".."])
```
### Load the data
```
task = load_task(args)
```
# Basic baselines
```
run_models(model_names=["random",
"frequency"],
word2vec=False,
bart=False,
args=args)
```
# Count-based baselines
```
run_models(model_names=["summaries-count",
"summaries-unique-count",
"summaries-overlap",
"activated-summaries",
"context-count",
"context-unique-count",
"summaries-context-count",
"summaries-context-unique-count",
"summaries-context-overlap"],
word2vec=False,
bart=False,
args=args)
```
# Embedding baselines
```
run_models(model_names=["summaries-average-embedding",
"summaries-overlap-average-embedding",
"context-average-embedding",
"summaries-context-average-embedding",
"summaries-context-overlap-average-embedding"],
word2vec=True,
bart=False,
args=args)
```
### Custom classifier
```
run_models(model_names=["custom-classifier"],
word2vec=True,
bart=False,
args=args,
train=True)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv("Data.csv")
df.head(5)
df['sentiment'] = np.where(df['sentiment'] == "positive", 1, 0)
df.head()
df['sentiment'].value_counts().sort_index().plot(kind='bar',color = 'blue')
plt.xlabel('Sentiment')
plt.ylabel('Count')
df = df.sample(frac=0.1, random_state=0)
df.dropna(inplace=True)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['review'], df['sentiment'],test_size=0.1, random_state=0)
def cleanText(raw_text, remove_stopwords=False, stemming=False, split_text=False):
text = BeautifulSoup(raw_text, 'html.parser').get_text()
letters_only = re.sub("[^a-zA-Z]", " ", text)
words = letters_only.lower().split()
if remove_stopwords:
stops = set(stopwords.words("english"))
words = [w for w in words if not w in stops]
if stemming==True:
stemmer = SnowballStemmer('english')
words = [stemmer.stem(w) for w in words]
if split_text==True:
return (words)
return( " ".join(words))
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import SnowballStemmer, WordNetLemmatizer
from nltk import sent_tokenize, word_tokenize, pos_tag
from bs4 import BeautifulSoup
import logging
from wordcloud import WordCloud
from gensim.models import word2vec
from gensim.models import Word2Vec
from gensim.models.keyedvectors import KeyedVectors
X_train_cleaned = []
X_test_cleaned = []
for d in X_train:
X_train_cleaned.append(cleanText(d))
print('Show a cleaned review in the training set : \n', X_train_cleaned[10])
for d in X_test:
X_test_cleaned.append(cleanText(d))
```
## CountVectorizer with Multinomial Naive Bayes (Benchmark Model)
```
from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
countVect = CountVectorizer()
X_train_countVect = countVect.fit_transform(X_train_cleaned)
print("Number of features : %d \n" %len(countVect.get_feature_names())) #6378
print("Show some feature names : \n", countVect.get_feature_names()[::1000])
# Train MultinomialNB classifier
mnb = MultinomialNB()
mnb.fit(X_train_countVect, y_train)
import pickle
pickle.dump(countVect,open('countVect_imdb.pkl','wb'))
from sklearn import metrics
from sklearn.metrics import accuracy_score,roc_auc_score
def modelEvaluation(predictions):
'''
Print model evaluation to predicted result
'''
print ("\nAccuracy on validation set: {:.4f}".format(accuracy_score(y_test, predictions)))
print("\nAUC score : {:.4f}".format(roc_auc_score(y_test, predictions)))
print("\nClassification report : \n", metrics.classification_report(y_test, predictions))
print("\nConfusion Matrix : \n", metrics.confusion_matrix(y_test, predictions))
predictions = mnb.predict(countVect.transform(X_test_cleaned))
modelEvaluation(predictions)
import pickle
pickle.dump(mnb,open('Naive_Bayes_model_imdb.pkl','wb'))
```
# TfidfVectorizer with Logistic Regression
```
from sklearn.linear_model import LogisticRegression
tfidf = TfidfVectorizer(min_df=5) #minimum document frequency of 5
X_train_tfidf = tfidf.fit_transform(X_train)
print("Number of features : %d \n" %len(tfidf.get_feature_names())) #1722
print("Show some feature names : \n", tfidf.get_feature_names()[::1000])
# Logistic Regression
lr = LogisticRegression()
lr.fit(X_train_tfidf, y_train)
feature_names = np.array(tfidf.get_feature_names())
sorted_coef_index = lr.coef_[0].argsort()
print('\nTop 10 features with smallest coefficients :\n{}\n'.format(feature_names[sorted_coef_index[:10]]))
print('Top 10 features with largest coefficients : \n{}'.format(feature_names[sorted_coef_index[:-11:-1]]))
predictions = lr.predict(tfidf.transform(X_test_cleaned))
modelEvaluation(predictions)
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.pipeline import Pipeline
estimators = [("tfidf", TfidfVectorizer()), ("lr", LogisticRegression())]
model = Pipeline(estimators)
params = {"lr__C":[0.1, 1, 10],
"tfidf__min_df": [1, 3],
"tfidf__max_features": [1000, None],
"tfidf__ngram_range": [(1,1), (1,2)],
"tfidf__stop_words": [None, "english"]}
grid = GridSearchCV(estimator=model, param_grid=params, scoring="accuracy", n_jobs=-1)
grid.fit(X_train_cleaned, y_train)
print("The best paramenter set is : \n", grid.best_params_)
# Evaluate on the validaton set
predictions = grid.predict(X_test_cleaned)
modelEvaluation(predictions)
```
# Word2Vec
<br>
**Step 1 : Parse review text to sentences (Word2Vec model takes a list of sentences as inputs)**
**Step 2 : Create vocabulary list using Word2Vec model.**
**Step 3 : Transform each review into numerical representation by computing average feature vectors of words therein.**
**Step 4 : Fit the average feature vectors to Random Forest Classifier.**
```
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
def parseSent(review, tokenizer, remove_stopwords=False):
raw_sentences = tokenizer.tokenize(review.strip())
sentences = []
for raw_sentence in raw_sentences:
if len(raw_sentence) > 0:
sentences.append(cleanText(raw_sentence, remove_stopwords, split_text=True))
return sentences
# Parse each review in the training set into sentences
sentences = []
for review in X_train_cleaned:
sentences += parseSent(review, tokenizer,remove_stopwords=False)
print('%d parsed sentences in the training set\n' %len(sentences))
print('Show a parsed sentence in the training set : \n', sentences[10])
```
## Creating Vocabulary List using Word2Vec Model
```
from wordcloud import WordCloud
from gensim.models import word2vec
from gensim.models.keyedvectors import KeyedVectors
num_features = 300 #embedding dimension
min_word_count = 10
num_workers = 4
context = 10
downsampling = 1e-3
print("Training Word2Vec model ...\n")
w2v = Word2Vec(sentences, size=num_features, workers=num_workers, min_count=min_word_count,
               window=context, sample=downsampling)  # size must match num_features used below
w2v.init_sims(replace=True)
w2v.save("w2v_300features_10minwordcounts_10context") #save trained word2vec model
print("Number of words in the vocabulary list : %d \n" %len(w2v.wv.index2word)) #4016
print("Show first 10 words in the vocalbulary list vocabulary list: \n", w2v.wv.index2word[0:10])
```
## Averaging Feature Vectors
```
def makeFeatureVec(review, model, num_features):
'''
Transform a review to a feature vector by averaging feature vectors of words
appeared in that review and in the vocabulary list created
'''
featureVec = np.zeros((num_features,),dtype="float32")
nwords = 0.
index2word_set = set(model.wv.index2word) #index2word is the vocabulary list of the Word2Vec model
isZeroVec = True
for word in review:
if word in index2word_set:
nwords = nwords + 1.
featureVec = np.add(featureVec, model[word])
isZeroVec = False
if isZeroVec == False:
featureVec = np.divide(featureVec, nwords)
return featureVec
def getAvgFeatureVecs(reviews, model, num_features):
'''
Transform all reviews to feature vectors using makeFeatureVec()
'''
counter = 0
reviewFeatureVecs = np.zeros((len(reviews),num_features),dtype="float32")
for review in reviews:
reviewFeatureVecs[counter] = makeFeatureVec(review, model,num_features)
counter = counter + 1
return reviewFeatureVecs
X_train_cleaned = []
for review in X_train:
X_train_cleaned.append(cleanText(review, remove_stopwords=True, split_text=True))
trainVector = getAvgFeatureVecs(X_train_cleaned, w2v, num_features)
print("Training set : %d feature vectors with %d dimensions" %trainVector.shape)
# Get feature vectors for validation set
X_test_cleaned = []
for review in X_test:
X_test_cleaned.append(cleanText(review, remove_stopwords=True, split_text=True))
testVector = getAvgFeatureVecs(X_test_cleaned, w2v, num_features)
print("Validation set : %d feature vectors with %d dimensions" %testVector.shape)
```
# Random Forest Classifer
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=1000)
rf.fit(trainVector, y_train)
predictions = rf.predict(testVector)
modelEvaluation(predictions)
```
## LSTM
<br>
**Step 1 : Prepare X_train and X_test to 2D tensor.**
**Step 2 : Train a simple LSTM (embedding layer => LSTM layer => dense layer).**
**Step 3 : Compile and fit the model using log loss function and ADAM optimizer.**
```
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Lambda
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM, SimpleRNN, GRU
from keras.preprocessing.text import Tokenizer
from collections import defaultdict
from keras.layers.convolutional import Convolution1D
from keras import backend as K
from keras.layers.embeddings import Embedding
top_words = 40000
maxlen = 200
batch_size = 62
nb_classes = 4
nb_epoch = 6
# Vectorize X_train and X_test to 2D tensor
tokenizer = Tokenizer(nb_words=top_words) #only consider the top_words most frequent words in the corpus
tokenizer.fit_on_texts(X_train)
# tokenizer.word_index #access word-to-index dictionary of trained tokenizer
sequences_train = tokenizer.texts_to_sequences(X_train)
sequences_test = tokenizer.texts_to_sequences(X_test)
X_train_seq = sequence.pad_sequences(sequences_train, maxlen=maxlen)
X_test_seq = sequence.pad_sequences(sequences_test, maxlen=maxlen)
# one-hot encoding of y_train and y_test
y_train_seq = np_utils.to_categorical(y_train, nb_classes)
y_test_seq = np_utils.to_categorical(y_test, nb_classes)
print('X_train shape:', X_train_seq.shape)
print("========================================")
print('X_test shape:', X_test_seq.shape)
print("========================================")
print('y_train shape:', y_train_seq.shape)
print("========================================")
print('y_test shape:', y_test_seq.shape)
print("========================================")
model1 = Sequential()
model1.add(Embedding(top_words, 128, dropout=0.2))
model1.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model1.add(Dense(nb_classes))
model1.add(Activation('softmax'))
model1.summary()
model1.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model1.fit(X_train_seq, y_train_seq, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
# Model evaluation
score = model1.evaluate(X_test_seq, y_test_seq, batch_size=batch_size)
print('Test loss : {:.4f}'.format(score[0]))
print('Test accuracy : {:.4f}'.format(score[1]))
len(X_train_seq),len(y_train_seq)
print("Size of weight matrix in the embedding layer : ", \
model1.layers[0].get_weights()[0].shape)
# get weight matrix of the hidden layer
print("Size of weight matrix in the hidden layer : ", \
model1.layers[1].get_weights()[0].shape)
# get weight matrix of the output layer
print("Size of weight matrix in the output layer : ", \
model1.layers[2].get_weights()[0].shape)
import pickle
pickle.dump(model1,open('model1.pkl','wb'))
```
## LSTM with Word2Vec Embedding
```
w2v = Word2Vec.load("w2v_300features_10minwordcounts_10context")
embedding_matrix = w2v.wv.syn0
print("Shape of embedding matrix : ", embedding_matrix.shape)
top_words = embedding_matrix.shape[0] #4016
maxlen = 300
batch_size = 62
nb_classes = 4
nb_epoch = 7
# Vectorize X_train and X_test to 2D tensor
tokenizer = Tokenizer(nb_words=top_words) #only consider the top_words most frequent words in the corpus
tokenizer.fit_on_texts(X_train)
# tokenizer.word_index #access word-to-index dictionary of trained tokenizer
sequences_train = tokenizer.texts_to_sequences(X_train)
sequences_test = tokenizer.texts_to_sequences(X_test)
X_train_seq1 = sequence.pad_sequences(sequences_train, maxlen=maxlen)
X_test_seq1 = sequence.pad_sequences(sequences_test, maxlen=maxlen)
# one-hot encoding of y_train and y_test
y_train_seq1 = np_utils.to_categorical(y_train, nb_classes)
y_test_seq1 = np_utils.to_categorical(y_test, nb_classes)
print('X_train shape:', X_train_seq1.shape)
print("========================================")
print('X_test shape:', X_test_seq1.shape)
print("========================================")
print('y_train shape:', y_train_seq1.shape)
print("========================================")
print('y_test shape:', y_test_seq1.shape)
print("========================================")
len(X_train_seq1),len(y_train_seq1)
embedding_layer = Embedding(embedding_matrix.shape[0], #4016
embedding_matrix.shape[1], #300
weights=[embedding_matrix])
model2 = Sequential()
model2.add(embedding_layer)
model2.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model2.add(Dense(nb_classes))
model2.add(Activation('softmax'))
model2.summary()
model2.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model2.fit(X_train_seq1, y_train_seq1, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
# Model evaluation
score = model2.evaluate(X_test_seq1, y_test_seq1, batch_size=batch_size)
print('Test loss : {:.4f}'.format(score[0]))
print('Test accuracy : {:.4f}'.format(score[1]))
print("Size of weight matrix in the embedding layer : ", \
model2.layers[0].get_weights()[0].shape)
print("Size of weight matrix in the hidden layer : ", \
model2.layers[1].get_weights()[0].shape)
print("Size of weight matrix in the output layer : ", \
model2.layers[2].get_weights()[0].shape)
```
| github_jupyter |
```
print('Hello World')
import numpy
print(numpy.pi)
import numpy as np
print(np.pi)
import numpy as np
import numpy as np
import matplotlib.pyplot as plt
%pinfo print
%matplotlib inline
plt.style.use('../../solving_pde_mooc/notebooks/styles/mainstyle.use')
#create an empty python list that will hold the values of delta for all k
delta_list = []
delta_list_2 = []
#we have to loop for k=1,2,3,4,5,6,7,8,9
#we will use the range function range(min,max,step)
#this creates values of min, min+1*step, min+2*step, ..., max-step
#as you can see, max value is not (actually never!) included in the list
#min and step are optional and if you don't input them, then default values will be used
#the default are min=0,step=1
#range(5) will give 0,1,2,3,4 since it takes min=0, step=1 (default)
#range(1,5) will give 1,2,3,4 since it takes min=1
#range(2,5) will give 2,3,4
#range(2,8,2) will give 2,4,6
#once we have created the list of k, we have to create the delta_list
#we can create the delta_list by adding/appending values to the end of the list each time
#we can do that by using the append function as done below
for k in range(1,10):
delta_list.append(2**(-k))
print('\n')
print(delta_list)
#another way to create a list is to directly add values to it
delta_list_2 = [2**(-k) for k in range(1,10)]
print('\n')
print(delta_list_2)
#here a list is being created using the values and functions that we want
#but to do operations on it, we need to create a numerical array out of it
#np(Numpy) helps us create an array out of it as shown below
delta = np.array(delta_list_2)
print('\n')
print(delta)
%%timeit
#method 1 - using append function
#create an empty list first
delta_list = []
#append values to the list
for k in range(1,10):
delta_list.append(2**(-k))
#assign the list to an array using numpy
delta = np.array(delta_list)
#%%timeit
#method 2 - directly adding values to the list
#create an empty list first
delta_list = []
#directly add values to the list
delta_list = [2**(-k) for k in range(1,10)]
#assign the list to an array using numpy
delta = np.array(delta_list)
print(delta_list[8])
#%%timeit
#method 3 - directly create a numpy array
#first create a numpy array with all zeros
delta_array = np.zeros(9)
print('\n')
print(delta_array)
#this will create an array of 9 elements - indexing as 0,1,2,...,8
#numpy arrays start indexing from 0 and go to N-1
# var = np.zeros(9)
#this will mean that the first element in the array will be var[0]
#and the last element in the array will be var[8]
print('\n')
print(len(delta))
print('\n')
#fill the array
#using len function, since i will start from zero
#also, numpy array starts indexing from 0 to N-1
for i in range(len(delta_array)):
print(i)
delta_array[i]=2**(-i-1)
print('\n')
print(delta_array[0])
print('\n')
print(delta_array[8])
#the method I am most used to
delta_array = np.zeros(9)
for i in range(len(delta_array)):
delta_array[i]=2**(-i-1)
print(delta_array)
print('\n')
print(delta_array[8])
```
| github_jupyter |
# Implementing TF-IDF
------------------------------------
Here we implement TF-IDF (Term Frequency - Inverse Document Frequency) for the spam-ham text data.
We will use a hybrid approach: the texts are encoded with scikit-learn's TF-IDF vectorizer, and the classifier then follows the regular TensorFlow logistic regression outline.
Creating the TF-IDF vectors requires us to load all the text into memory and count the occurrences of each word before we can start training our model. Because of this, it is not implemented fully in TensorFlow, so we will use scikit-learn to create the TF-IDF embedding, but use TensorFlow to fit the logistic model.
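To build some intuition for the weights this produces before applying it to the spam data, here is a small toy illustration (not part of the original pipeline; the three example sentences are made up):
```
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: words that appear in many documents get a lower IDF weight,
# while words concentrated in a single document get a higher one.
toy_corpus = ["free prize call now", "call me at home please", "free free prize claim"]
toy_vectorizer = TfidfVectorizer()
toy_matrix = toy_vectorizer.fit_transform(toy_corpus)

# Vocabulary and the resulting document-term weight matrix (one row per document)
try:
    vocab = toy_vectorizer.get_feature_names_out()  # scikit-learn >= 1.0
except AttributeError:
    vocab = toy_vectorizer.get_feature_names()      # older scikit-learn versions
print(vocab)
print(toy_matrix.toarray().round(2))
```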
We start by loading the necessary libraries.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import csv
import numpy as np
import os
import string
import requests
import io
import nltk
from zipfile import ZipFile
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow.python.framework import ops
ops.reset_default_graph()
```
Start a computational graph session.
```
sess = tf.Session()
```
We set two parameters, `batch_size` and `max_features`. `batch_size` is the size of the batch we will train our logistic model on, and `max_features` is the maximum number of tf-idf textual words we will use in our logistic regression.
```
batch_size = 200
max_features = 1000
```
Check if data was downloaded, otherwise download it and save for future use
```
save_file_name = 'temp_spam_data.csv'
if os.path.isfile(save_file_name):
text_data = []
with open(save_file_name, 'r') as temp_output_file:
reader = csv.reader(temp_output_file)
for row in reader:
text_data.append(row)
else:
zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
r = requests.get(zip_url)
z = ZipFile(io.BytesIO(r.content))
file = z.read('SMSSpamCollection')
# Format Data
text_data = file.decode()
text_data = text_data.encode('ascii',errors='ignore')
text_data = text_data.decode().split('\n')
text_data = [x.split('\t') for x in text_data if len(x)>=1]
# And write to csv
with open(save_file_name, 'w') as temp_output_file:
writer = csv.writer(temp_output_file)
writer.writerows(text_data)
```
We now clean our texts. This will decrease our vocabulary size by converting everything to lower case, removing punctuation and getting rid of numbers.
```
texts = [x[1] for x in text_data]
target = [x[0] for x in text_data]
# Relabel 'spam' as 1, 'ham' as 0
target = [1. if x=='spam' else 0. for x in target]
# Normalize text
# Lower case
texts = [x.lower() for x in texts]
# Remove punctuation
texts = [''.join(c for c in x if c not in string.punctuation) for x in texts]
# Remove numbers
texts = [''.join(c for c in x if c not in '0123456789') for x in texts]
# Trim extra whitespace
texts = [' '.join(x.split()) for x in texts]
```
Define tokenizer function and create the TF-IDF vectors with SciKit-Learn.
```
import nltk
nltk.download('punkt')
def tokenizer(text):
words = nltk.word_tokenize(text)
return words
# Create TF-IDF of texts
tfidf = TfidfVectorizer(tokenizer=tokenizer, stop_words='english', max_features=max_features)
sparse_tfidf_texts = tfidf.fit_transform(texts)
```
Split up data set into train/test.
```
train_indices = np.random.choice(sparse_tfidf_texts.shape[0], round(0.8*sparse_tfidf_texts.shape[0]), replace=False)
test_indices = np.array(list(set(range(sparse_tfidf_texts.shape[0])) - set(train_indices)))
texts_train = sparse_tfidf_texts[train_indices]
texts_test = sparse_tfidf_texts[test_indices]
target_train = np.array([x for ix, x in enumerate(target) if ix in train_indices])
target_test = np.array([x for ix, x in enumerate(target) if ix in test_indices])
```
Now we create the variables and placeholders necessary for logistic regression. After which, we declare our logistic regression operation. Remember that the sigmoid part of the logistic regression will be in the loss function.
```
# Create variables for logistic regression
A = tf.Variable(tf.random_normal(shape=[max_features,1]))
b = tf.Variable(tf.random_normal(shape=[1,1]))
# Initialize placeholders
x_data = tf.placeholder(shape=[None, max_features], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# Declare logistic model (sigmoid in loss function)
model_output = tf.add(tf.matmul(x_data, A), b)
```
Next, we declare the loss function (which has the sigmoid in it), and the prediction function. The prediction function will have to have a sigmoid inside of it because it is not in the model output.
```
# Declare loss function (Cross Entropy loss)
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=model_output, labels=y_target))
# Prediction
prediction = tf.round(tf.sigmoid(model_output))
predictions_correct = tf.cast(tf.equal(prediction, y_target), tf.float32)
accuracy = tf.reduce_mean(predictions_correct)
```
Now we create the optimization function and initialize the model variables.
```
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.0025)
train_step = my_opt.minimize(loss)
# Initialize Variables
init = tf.global_variables_initializer()
sess.run(init)
```
Finally, we perform our logistic regression on the 1000 TF-IDF features.
```
train_loss = []
test_loss = []
train_acc = []
test_acc = []
i_data = []
for i in range(10000):
rand_index = np.random.choice(texts_train.shape[0], size=batch_size)
rand_x = texts_train[rand_index].todense()
rand_y = np.transpose([target_train[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
# Only record loss and accuracy every 100 generations
if (i+1)%100==0:
i_data.append(i+1)
train_loss_temp = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
train_loss.append(train_loss_temp)
test_loss_temp = sess.run(loss, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])})
test_loss.append(test_loss_temp)
train_acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x, y_target: rand_y})
train_acc.append(train_acc_temp)
test_acc_temp = sess.run(accuracy, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])})
test_acc.append(test_acc_temp)
if (i+1)%500==0:
acc_and_loss = [i+1, train_loss_temp, test_loss_temp, train_acc_temp, test_acc_temp]
acc_and_loss = [np.round(x,2) for x in acc_and_loss]
print('Generation # {}. Train Loss (Test Loss): {:.2f} ({:.2f}). Train Acc (Test Acc): {:.2f} ({:.2f})'.format(*acc_and_loss))
```
Here is matplotlib code to plot the loss and accuracies.
```
# Plot loss over time
plt.plot(i_data, train_loss, 'k-', label='Train Loss')
plt.plot(i_data, test_loss, 'r--', label='Test Loss', linewidth=4)
plt.title('Cross Entropy Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Cross Entropy Loss')
plt.legend(loc='upper right')
plt.show()
# Plot train and test accuracy
plt.plot(i_data, train_acc, 'k-', label='Train Set Accuracy')
plt.plot(i_data, test_acc, 'r--', label='Test Set Accuracy', linewidth=4)
plt.title('Train and Test Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
```
| github_jupyter |
# Table of Contents
<p><div class="lev1 toc-item"><a href="#Texte-d'oral-de-modélisation---Agrégation-Option-Informatique" data-toc-modified-id="Texte-d'oral-de-modélisation---Agrégation-Option-Informatique-1"><span class="toc-item-num">1 </span>Texte d'oral de modélisation - Agrégation Option Informatique</a></div><div class="lev2 toc-item"><a href="#Préparation-à-l'agrégation---ENS-de-Rennes,-2016-17" data-toc-modified-id="Préparation-à-l'agrégation---ENS-de-Rennes,-2016-17-11"><span class="toc-item-num">1.1 </span>Préparation à l'agrégation - ENS de Rennes, 2016-17</a></div><div class="lev2 toc-item"><a href="#À-propos-de-ce-document" data-toc-modified-id="À-propos-de-ce-document-12"><span class="toc-item-num">1.2 </span>À propos de ce document</a></div><div class="lev2 toc-item"><a href="#Implémentation" data-toc-modified-id="Implémentation-13"><span class="toc-item-num">1.3 </span>Implémentation</a></div><div class="lev3 toc-item"><a href="#Une-bonne-structure-de-donnée-pour-des-intervalles-et-des-graphes-d'intervales" data-toc-modified-id="Une-bonne-structure-de-donnée-pour-des-intervalles-et-des-graphes-d'intervales-131"><span class="toc-item-num">1.3.1 </span>Une bonne structure de donnée pour des intervalles et des graphes d'intervales</a></div><div class="lev3 toc-item"><a href="#Algorithme-de-coloriage-de-graphe-d'intervalles" data-toc-modified-id="Algorithme-de-coloriage-de-graphe-d'intervalles-132"><span class="toc-item-num">1.3.2 </span>Algorithme de coloriage de graphe d'intervalles</a></div><div class="lev3 toc-item"><a href="#Algorithme-pour-calculer-le-stable-maximum-d'un-graphe-d'intervalles" data-toc-modified-id="Algorithme-pour-calculer-le-stable-maximum-d'un-graphe-d'intervalles-133"><span class="toc-item-num">1.3.3 </span>Algorithme pour calculer le <em>stable maximum</em> d'un graphe d'intervalles</a></div><div class="lev2 toc-item"><a href="#Exemples" data-toc-modified-id="Exemples-14"><span class="toc-item-num">1.4 </span>Exemples</a></div><div class="lev3 toc-item"><a href="#Qui-a-tué-le-Duc-de-Densmore-?" data-toc-modified-id="Qui-a-tué-le-Duc-de-Densmore-?-141"><span class="toc-item-num">1.4.1 </span>Qui a tué le Duc de Densmore ?</a></div><div class="lev4 toc-item"><a href="#Comment-résoudre-ce-problème-?" data-toc-modified-id="Comment-résoudre-ce-problème-?-1411"><span class="toc-item-num">1.4.1.1 </span>Comment résoudre ce problème ?</a></div><div class="lev4 toc-item"><a href="#Solution" data-toc-modified-id="Solution-1412"><span class="toc-item-num">1.4.1.2 </span>Solution</a></div><div class="lev3 toc-item"><a href="#Le-problème-des-frigos" data-toc-modified-id="Le-problème-des-frigos-142"><span class="toc-item-num">1.4.2 </span>Le problème des frigos</a></div><div class="lev3 toc-item"><a href="#Le-problème-du-CSA" data-toc-modified-id="Le-problème-du-CSA-143"><span class="toc-item-num">1.4.3 </span>Le problème du CSA</a></div><div class="lev3 toc-item"><a href="#Le-problème-du-wagon-restaurant" data-toc-modified-id="Le-problème-du-wagon-restaurant-144"><span class="toc-item-num">1.4.4 </span>Le problème du wagon restaurant</a></div><div class="lev4 toc-item"><a href="#Solution-via-l'algorithme-de-coloriage-de-graphe-d'intervalles" data-toc-modified-id="Solution-via-l'algorithme-de-coloriage-de-graphe-d'intervalles-1441"><span class="toc-item-num">1.4.4.1 </span>Solution via l'algorithme de coloriage de graphe d'intervalles</a></div><div class="lev2 toc-item"><a href="#Bonus-?" 
data-toc-modified-id="Bonus-?-15"><span class="toc-item-num">1.5 </span>Bonus ?</a></div><div class="lev3 toc-item"><a href="#Visualisation-des-graphes-définis-dans-les-exemples" data-toc-modified-id="Visualisation-des-graphes-définis-dans-les-exemples-151"><span class="toc-item-num">1.5.1 </span>Visualisation des graphes définis dans les exemples</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-16"><span class="toc-item-num">1.6 </span>Conclusion</a></div>
# Texte d'oral de modélisation - Agrégation Option Informatique
## Préparation à l'agrégation - ENS de Rennes, 2016-17
- *Date* : 3 avril 2017
- *Auteur* : [Lilian Besson](https://GitHub.com/Naereen/notebooks/)
- *Texte*: Annale 2006, "Crime Parfait"
## À propos de ce document
- Ceci est une *proposition* de correction, partielle et probablement non-optimale, pour la partie implémentation d'un [texte d'annale de l'agrégation de mathématiques, option informatique](http://Agreg.org/Textes/).
- Ce document est un [notebook Jupyter](https://www.Jupyter.org/), et [est open-source sous Licence MIT sur GitHub](https://github.com/Naereen/notebooks/tree/master/agreg/), comme les autres solutions de textes de modélisation que [j](https://GitHub.com/Naereen)'ai écrites cette année.
- L'implémentation sera faite en OCaml, version 4+ :
```
Sys.command "ocaml -version";;
```
----
## Implémentation
La question d'implémentation était la question 2) en page 7.
> « Proposer une structure de donnée adaptée pour représenter un graphe d'intervalles dont une représentation sous forme de famille d’intervalles est connue.
> Implémenter de manière efficace l’algorithme de coloriage de graphes d'intervalles et illustrer cet algorithme sur une application bien choisie citée dans le texte. »
Nous allons donc d'abord définir une structure de donnée pour une famille d'intervalles ainsi que pour un graphe d'intervalle, ainsi qu'une fonction convertissant l'un en l'autre.
Cela permettra de définir facilement les différents exemples du texte, et de les résoudre.
### Une bonne structure de donnée pour des intervalles et des graphes d'intervales
- Pour des **intervalles** à valeurs réelles, on se restreint par commodité à des valeurs entières.
```
type intervalle = (int * int);;
type intervalles = intervalle list;;
```
- Pour des **graphes d'intervalles**, on utilise une simple représentation sous forme de liste d'adjacence, plus facile à mettre en place en OCaml qu'une représentation sous forme de matrice. Ici, tous nos graphes ont pour sommets $0 \dots n - 1$.
```
type sommet = int;;
type voisins = sommet list;;
type graphe_intervalle = voisins list;;
```
> *Note:* j'ai préféré garder une structure très simple, pour les intervalles, les graphes d'intervalles et les coloriages, mais on perd un peu en lisibilité dans la fonction coloriage.
>
> Implicitement, dès qu'une liste d'intervalles est fixée, de taille $n$, ils sont numérotés de $0$ à $n-1$. Le graphe `g` aura pour sommet $0 \dots n-1$, et le coloriage sera un simple tableau de couleurs `c` (i.e., d'entiers), donnant en `c[i]` la couleur de l'intervalle numéro `i`.
>
> Une solution plus intelligente aurait été d'utiliser des tables d'association, cf. le module [Map](http://caml.inria.fr/pub/docs/manual-ocaml/libref/Map.html) de OCaml, et le code proposé par Julien durant son oral.
- On peut rapidement écrire une fonction qui va convertir une liste d'intervalles (`intervalles`) en un graphe d'intervalles. On crée les sommets du graphe, via `index_intvls` qui associe un intervalle à son indice, et ensuite on ajoute les arêtes au graphe selon les contraintes définissant un graphe d'intervalles :
$$ \forall I, I' \in V, (I,I') \in E \Leftrightarrow I \neq I' \;\text{and}\; I \cap I' \neq \emptyset $$
Donc avec des intervales $I = [x,y]$ et $I' = [a,b]$, cela donne :
$$ \forall I = [x,y], I' = [a,b] \in V, (I,I') \in E \Leftrightarrow (x,y) \neq (a,b) \;\text{and}\; \neg (b < x \;\text{or}\; y < a) $$
```
let graphe_depuis_intervalles (intvls : intervalles) : graphe_intervalle =
let n = List.length intvls in (* Nombre de sommets *)
let array_intvls = Array.of_list intvls in (* Tableau des intervalles *)
let index_intvls = Array.to_list (
Array.init n (fun i -> (
array_intvls.(i), i) (* Associe un intervalle à son indice *)
)
) in
let gr = List.map (fun (a, b) -> (* Pour chaque intervalle [a, b] *)
List.filter (fun (x, y) -> (* On ajoute [x, y] s'il intersecte [a, b] *)
(x, y) <> (a, b) (* Intervalle différent *)
&& not ( (b < x) || (y < a) ) (* pas x---y a---b ni a---b x---y *)
) intvls
) intvls in
(* On transforme la liste de liste d'intervalles en une liste de liste d'entiers *)
List.map (fun voisins ->
List.map (fun sommet -> (* Grace au tableau index_intvls *)
List.assoc sommet index_intvls
) voisins
) gr
;;
```
### Algorithme de coloriage de graphe d'intervalles
> Étant donné un graphe $G = (V, E)$, on cherche un entier $n$ minimal et une fonction $c : V \to \{1, \cdots, n\}$ telle que si $(v_1 , v_2) \in E$, alors $c(v_1) \neq c(v_2)$.
On suit les indications de l'énoncé pour implémenter facilement cet algorithme.
> Une *heuristique* simple pour résoudre ce problème consiste à appliquer l’algorithme glouton suivant :
> - tant qu'il reste des sommets non coloriés,
> + en choisir un
> + et le colorier avec le plus petit entier qui n’apparait pas dans les voisins déjà coloriés.
> En choisissant bien le nouveau sommet à colorier à chaque fois, cette heuristique se révèle optimale pour les graphes d'intervalles.
On peut d'abord définir un type de donnée pour un coloriage, sous la forme d'une liste de couple d'intervalle et de couleur.
Ainsi, `List.assoc` peut être utilisée pour donner le coloriage de chaque intervalle.
```
type couleur = int;;
type coloriage = (intervalle * couleur) list;;
let coloriage_depuis_couleurs (intvl : intervalles) (c : couleur array) : coloriage =
Array.to_list (Array.init (Array.length c) (fun i -> (List.nth intvl i), c.(i)));;
let quelle_couleur (intvl : intervalle) (colors : coloriage) =
List.assoc intvl colors
;;
```
Ensuite, l'ordre partiel $\prec_i$ sur les intervalles est défini comme ça :
$$ I = (a,b) \prec_i J=(x, y) \Longleftrightarrow a < x $$
```
let ordre_partiel ((a, _) : intervalle) ((x, _) : intervalle) =
a < x
;;
```
On a ensuite besoin d'une fonction qui va calculer l'inf de $\mathbb{N} \setminus \{x : x \in \mathrm{valeurs} \}$:
```
let inf_N_minus valeurs =
let res = ref 0 in (* Très important d'utiliser une référence ! *)
while List.mem !res valeurs do
incr res;
done;
!res
;;
```
On vérifie rapidement sur deux exemples :
```
inf_N_minus [0; 1; 3];; (* 2 *)
inf_N_minus [0; 1; 2; 3; 4; 5; 6; 10];; (* 7 *)
```
Enfin, on a besoin d'une fonction pour trouver l'intervalle $I \in V$, minimal pour $\prec_i$, tel que $c(I) = +\infty$.
```
let trouve_min_interval intvl (c : coloriage) (inf : couleur) =
let colorie inter = quelle_couleur inter c in
(* D'abord on extraie {I : c(I) = +oo} *)
let intvl2 = List.filter (fun i -> (colorie i) = inf) intvl in
(* Puis on parcourt la liste et on garde le plus petit pour l'ordre *)
let i0 = ref 0 in
for j = 1 to (List.length intvl2) - 1 do
if ordre_partiel (List.nth intvl2 j) (List.nth intvl2 !i0) then
i0 := j;
done;
List.nth intvl2 !i0;
;;
```
Et donc tout cela permet de finir l'algorithme, tel que décrit dans le texte :
<img style="width:65%;" alt="images/algorithme_coloriage.png" src="images/algorithme_coloriage.png">
```
let coloriage_intervalles (intvl : intervalles) : coloriage =
let n = List.length intvl in (* Nombre d'intervalles *)
let array_intvls = Array.of_list intvl in (* Tableau des intervalles *)
let index_intvls = Array.to_list (
Array.init n (fun i -> (
array_intvls.(i), i) (* Associe un intervalle à son indice *)
)
) in
let gr = graphe_depuis_intervalles intvl in
let inf = n + 10000 in (* Grande valeur, pour +oo *)
let c = Array.make n inf in (* Liste des couleurs, c(I) = +oo pour tout I *)
let maxarray = Array.fold_left max (-inf - 10000) in (* Initialisé à -oo *)
while maxarray c = inf do (* Il reste un I in V tel que c(I) = +oo *)
begin (* C'est la partie pas élégante *)
(* On récupère le coloriage depuis la liste de couleurs actuelle *)
let coloriage = (coloriage_depuis_couleurs intvl c) in
(* Puis la fonction [colorie] pour associer une couleur à un intervalle *)
let colorie inter = quelle_couleur inter coloriage in
(* On choisit un I, minimal pour ordre_partiel, tel que c(I) = +oo *)
let inter = trouve_min_interval intvl coloriage inf in
(* On trouve son indice *)
let i = List.assoc inter index_intvls in
(* On trouve les voisins de i dans le graphe *)
let adj_de_i = List.nth gr i in
(* Puis les voisins de I en tant qu'intervalles *)
let adj_de_I = List.map (fun j -> List.nth intvl j) adj_de_i in
(* Puis on récupère leurs couleurs *)
let valeurs = List.map colorie adj_de_I in
(* c(I) = inf(N - {c(J) : J adjacent a I} ) *)
c.(i) <- inf_N_minus valeurs;
end;
done;
coloriage_depuis_couleurs intvl c;
;;
```
Une fois qu'on a un coloriage, à valeurs dans $0,\dots,k$ on récupère le nombre de couleurs comme $1 + \max c$, i.e., $k+1$.
```
let max_valeurs = List.fold_left max 0;;
let nombre_chromatique (colorg : coloriage) : int =
1 + max_valeurs (List.map snd colorg)
;;
```
### Algorithme pour calculer le *stable maximum* d'un graphe d'intervalles
On répond ici à la question 7.
> « Proposer un algorithme efficace pour construire un stable maximum (i.e., un ensemble de sommets indépendants) d'un graphe d'intervalles dont on connaît une représentation sous forme d'intervalles.
> On pourra chercher à quelle condition l'intervalle dont l'extrémité droite est la plus à gauche appartient à un stable maximum. »
**FIXME, je ne l'ai pas encore fait.**
----
## Exemples
On traite ici l'exemple introductif, ainsi que les trois autres exemples proposés.
### Qui a tué le Duc de Densmore ?
> On ne rappelle pas le problème, mais les données :
> - Ann dit avoir vu Betty, Cynthia, Emily, Felicia et Georgia.
- Betty dit avoir vu Ann, Cynthia et Helen.
- Cynthia dit avoir vu Ann, Betty, Diana, Emily et Helen.
- Diana dit avoir vu Cynthia et Emily.
- Emily dit avoir vu Ann, Cynthia, Diana et Felicia.
- Felicia dit avoir vu Ann et Emily.
- Georgia dit avoir vu Ann et Helen.
- Helen dit avoir vu Betty, Cynthia et Georgia.
Transcrit sous forme de graphe, cela donne :
```
(* On définit des entiers, c'est plus simple *)
let ann = 0
and betty = 1
and cynthia = 2
and diana = 3
and emily = 4
and felicia = 5
and georgia = 6
and helen = 7;;
let graphe_densmore = [
[betty; cynthia; emily; felicia; georgia]; (* Ann *)
[ann; cynthia; helen]; (* Betty *)
[ann; betty; diana; emily; helen]; (* Cynthia *)
[cynthia; emily]; (* Diana *)
[ann; cynthia; diana; felicia]; (* Emily *)
[ann; emily]; (* Felicia *)
[ann; helen]; (* Georgia *)
[betty; cynthia; georgia] (* Helen *)
];;
```

> Figure 1. Graphe d'intervalle pour le problème de l'assassinat du duc de Densmore.
Avec les prénoms plutôt que des numéros, cela donne :

> Figure 2. Graphe d'intervalle pour le problème de l'assassinat du duc de Densmore.
#### Comment résoudre ce problème ?
> Il faut utiliser la caractérisation du théorème 2 du texte, et la définition des graphes parfaits.
- Définition + Théorème 2 (point 1) :
On sait qu'un graphe d'intervalle est parfait, et donc tous ses graphes induits le sont aussi.
La caractérisation via les cordes sur les cycles de taille $\geq 4$ permet de dire qu'un quadrilatère (cycle de taille $4$) n'est pas un graphe d'intervalle.
Donc un graphe qui contient un graphe induit étant un quadrilatère ne peut être un graphe d'intervalle.
Ainsi, sur cet exemple, comme on a deux quadrilatères $A B H G$ et $A G H C$, on en déduit que $A$, $G$, ou $H$ ont menti.
- Théorème 2 (point 2) :
Ensuite, si on enlève $G$ ou $H$, le graphe ne devient pas un graphe d'intervalle, par les considérations suivantes, parce que son complémentaire n'est pas un graphe de comparaison.
En effet, par exemple si on enlève $G$, $A$ et $H$ et $D$ forment une clique dans le complémentaire $\overline{G}$ de $G$, et l'irréflexivité d'une éventuelle relation $R$ rend cela impossible. Pareil si on enlève $H$, avec $G$ et $B$ et $D$ qui forment une clique dans $\overline{G}$.
Par contre, si on enlève $A$, le graphe devient triangulé (et de comparaison, mais c'est plus dur à voir !).
Donc seule $A$ reste comme potentielle menteuse.
> « Mais... Ça semble difficile de programmer une résolution automatique de ce problème ? »
En fait, il suffit d'écrire une fonction de vérification qu'un graphe est un graphe d'intervalle, puis on essaie d'enlever chaque sommet, tant que le graphe n'est pas un graphe d'intervalle.
Si le graphe devient valide en enlevant un seul sommet, et qu'il n'y en a qu'un seul qui fonctionne, alors il y a un(e) seul(e) menteur(se) dans le graphe, et donc un(e) seul(e) coupable !
#### Solution
C'est donc $A$, i.e., Ann l'unique menteuse et donc la coupable.
> Ce n'est pas grave de ne pas avoir réussi à répondre durant l'oral !
> Au contraire, vous avez le droit de vous détacher du problème initial du texte !
> Une solution bien expliquée peut être trouvée dans [cette vidéo](https://youtu.be/ZGhSyVvOelg) :
<iframe width="640" height="360" src="https://www.youtube.com/embed/ZGhSyVvOelg" frameborder="1" allowfullscreen></iframe>
### Le problème des frigos
> Dans un grand hopital, les réductions de financement public poussent le gestionnaire du service d'immunologie à faire des économies sur le nombre de frigos à acheter pour stocker les vaccins. A peu de chose près, il lui faut stocker les vaccins suivants :
> | Numéro | Nom du vaccin | Température de conservation
| :-----: | :------------ | -------------------------: |
| 0 | Rougeole-Rubéole-Oreillons (RRO) | $4 \cdots 12$ °C
| 1 | BCG | $8 \cdots 15$ °C
| 2 | Di-Te-Per | $0 \cdots 20$ °C
| 3 | Anti-polio | $2 \cdots 3$ °C
| 4 | Anti-hépatite B | $-3 \cdots 6$ °C
| 5 | Anti-amarile | $-10 \cdots 10$ °C
| 6 | Variole | $6 \cdots 20$ °C
| 7 | Varicelle | $-5 \cdots 2$ °C
| 8 | Antihaemophilus | $-2 \cdots 8$ °C
> Combien le gestionnaire doit-il acheter de frigos, et sur quelles températures doit-il les régler ?
```
let vaccins : intervalles = [
(4, 12);
(8, 15);
(0, 20);
(2, 3);
(-3, 6);
(-10, 10);
(6, 20);
(-5, 2);
(-2, 8)
]
```
Qu'on peut visualiser sous forme de graphe facilement :
```
let graphe_vaccins = graphe_depuis_intervalles vaccins;;
```

> Figure 3. Graphe d'intervalle pour le problème des frigos et des vaccins.
Avec des intervalles au lieu de numéro :

> Figure 4. Graphe d'intervalle pour le problème des frigos et des vaccins.
On peut récupérer une coloriage minimal pour ce graphe :
```
coloriage_intervalles vaccins;;
```
La couleur la plus grande est `5`, donc le nombre chromatique de ce graphe est `6`.
```
nombre_chromatique (coloriage_intervalles vaccins);;
```
Par contre, la solution au problème des frigos et des vaccins réside dans le nombre de couverture de cliques, $k(G)$, pas dans le nombre chromatique $\chi(G)$.
On peut le résoudre en répondant à la question 7, qui demandait de mettre au point un algorithme pour construire un *stable maximum* pour un graphe d'intervalle.
### Le problème du CSA
> Le Conseil Supérieur de l’Audiovisuel doit attribuer de nouvelles bandes de fréquences d’émission pour la stéréophonie numérique sous-terraine (SNS).
> Cette technologie de pointe étant encore à l'état expérimental, les appareils capables d'émettre ne peuvent utiliser que les bandes de fréquences FM suivantes :
> | Bandes de fréquence | Intervalle (kHz) |
| :-----------------: | ---------: |
| 0 | $32 \cdots 36$ |
| 1 | $24 \cdots 30$ |
| 2 | $28 \cdots 33$ |
| 3 | $22 \cdots 26$ |
| 4 | $20 \cdots 25$ |
| 5 | $30 \cdots 33$ |
| 6 | $31 \cdots 34$ |
| 7 | $27 \cdots 31$ |
> Quelles bandes de fréquences doit-on retenir pour permettre à le plus d'appareils possibles d'être utilisés, sachant que deux appareils dont les bandes de fréquences s'intersectent pleinement (pas juste sur les extrémités) sont incompatibles.
```
let csa : intervalles = [
(32, 36);
(24, 30);
(28, 33);
(22, 26);
(20, 25);
(30, 33);
(31, 34);
(27, 31)
];;
let graphe_csa = graphe_depuis_intervalles csa;;
```

> Figure 5. Graphe d'intervalle pour le problème du CSA.
Avec des intervalles au lieu de numéro :

> Figure 6. Graphe d'intervalle pour le problème du CSA.
On peut récupérer une coloriage minimal pour ce graphe :
```
coloriage_intervalles csa;;
```
La couleur la plus grande est `3`, donc le nombre chromatique de ce graphe est `4`.
```
nombre_chromatique (coloriage_intervalles csa);;
```
Par contre, la solution au problème CSA réside dans le nombre de couverture de cliques, $k(G)$, pas dans le nombre chromatique $\chi(G)$.
On peut le résoudre en répondant à la question 7, qui demandait de mettre au point un algorithme pour construire un *stable maximum* pour un graphe d'intervalle.
### Le problème du wagon restaurant
> Le chef de train de l'Orient Express doit aménager le wagon restaurant avant le départ du train. Ce wagon est assez petit et doit être le moins encombré de tables possible, mais il faut prévoir suffisamment de tables pour accueillir toutes les personnes qui ont réservé :
> | Numéro | Personnage(s) | Heures de dîner | En secondes |
| :----------------- | --------- | :---------: | :---------: |
| 0 | Le baron et la baronne Von Haussplatz | 19h30 .. 20h14 | $1170 \cdots 1214$
| 1 | Le général Cook | 20h30 .. 21h59 | $1230 \cdots 1319$
| 2 | Les époux Steinberg | 19h .. 19h59 | $1140 \cdots 1199$
| 3 | La duchesse de Colombart | 20h15 .. 20h59 | $1215 \cdots 1259$
| 4 | Le marquis de Carquamba | 21h .. 21h59 | $1260 \cdots 1319$
| 5 | La Vociafiore | 19h15 .. 20h29 | $1155 \cdots 1229$
| 6 | Le colonel Ferdinand | 20h .. 20h59 | $1200 \cdots 1259$
> Combien de tables le chef de train doit-il prévoir ?
```
let restaurant = [
(1170, 1214);
(1230, 1319);
(1140, 1199);
(1215, 1259);
(1260, 1319);
(1155, 1229);
(1200, 1259)
];;
let graphe_restaurant = graphe_depuis_intervalles restaurant;;
```

> Figure 7. Graphe d'intervalle pour le problème du wagon restaurant.
Avec des intervalles au lieu de numéro :

> Figure 8. Graphe d'intervalle pour le problème du wagon restaurant.
```
coloriage_intervalles restaurant;;
```
La couleur la plus grande est `2`, donc le nombre chromatique de ce graphe est `3`.
```
nombre_chromatique (coloriage_intervalles restaurant);;
```
#### Solution via l'algorithme de coloriage de graphe d'intervalles
Pour ce problème là, la solution est effectivement donnée par le nombre chromatique.
La couleur sera le numéro de table pour chaque passagers (ou couple de passagers), et donc le nombre minimal de table à installer dans le wagon restaurant est exactement le nombre chromatique.
Une solution peut être la suivante, avec **3 tables** :
| Numéro | Personnage(s) | Heures de dîner | Numéro de table |
| :----------------- | --------- | :---------: | :---------: |
| 0 | Le baron et la baronne Von Haussplatz | 19h30 .. 20h14 | 2
| 1 | Le général Cook | 20h30 .. 21h59 | 1
| 2 | Les époux Steinberg | 19h .. 19h59 | 0
| 3 | La duchesse de Colombart | 20h15 .. 20h59 | 2
| 4 | Le marquis de Carquamba | 21h .. 21h59 | 0
| 5 | La Vociafiore | 19h15 .. 20h29 | 1
| 6 | Le colonel Ferdinand | 20h .. 20h59 | 0
On vérifie manuellement que la solution convient.
Chaque passager devra par contre quitter sa table à la minute près !
On peut afficher la solution avec un graphe colorié.
La table `0` sera <span style="color:red;">rouge</span>, `1` sera <span style="color:blue;">bleu</span> et `2` sera <span style="color:yellow;">jaune</span> :

> Figure 9. Solution pour le problème du wagon restaurant.
----
## Bonus ?
### Visualisation des graphes définis dans les exemples
- J'utilise une petite fonction facile à écrire, qui convertit un graphe (`int list list`) en une chaîne de caractère au format [DOT Graph](http://www.graphviz.org/doc/info/lang.html).
- Ensuite, un appel `dot -Tpng ...` en ligne de commande convertit ce graphe en une image, que j'inclus ensuite manuellement.
```
(** Transforme un [graph] en une chaîne représentant un graphe décrit par le langage DOT,
voir http://en.wikipedia.org/wiki/DOT_language pour plus de détails sur ce langage.
@param graphname Donne le nom du graphe tel que précisé pour DOT
@param directed Vrai si le graphe doit être dirigé (c'est le cas ici) faux sinon. Change le style des arêtes ([->] ou [--])
@param verb Affiche tout dans le terminal.
@param onetoone Si on veut afficher le graphe en mode carré (échelle 1:1). Parfois bizarre, parfois génial.
*)
let graph_to_dotgraph ?(graphname = "graphname") ?(directed = false) ?(verb = false) ?(onetoone = false) (glist : int list list) =
let res = ref "" in
let log s =
if verb then print_string s; (* Si [verb] affiche dans le terminal le résultat du graphe. *)
res := !res ^ s
in
log (if directed then "digraph " else "graph ");
log graphname; log " {";
if onetoone then
log "\n size=\"1,1\";";
let g = Array.of_list (List.map Array.of_list glist) in
(* On affiche directement les arc, un à un. *)
for i = 0 to (Array.length g) - 1 do
for j = 0 to (Array.length g.(i)) - 1 do
if i < g.(i).(j) then
log ("\n \""
^ (string_of_int i) ^ "\" "
^ (if directed then "->" else "--")
^ " \"" ^ (string_of_int g.(i).(j)) ^ "\""
);
done;
done;
log "\n}\n// generated by OCaml with the function graphe_to_dotgraph.";
!res;;
(** Fonction ecrire_sortie : plus pratique que output. *)
let ecrire_sortie monoutchanel machaine =
output monoutchanel machaine 0 (String.length machaine);
flush monoutchanel;;
(** Fonction ecrire_dans_fichier : pour écrire la chaine dans le fichier à l'adresse renseignée. *)
let ecrire_dans_fichier ~chaine ~adresse =
let mon_out_channel = open_out adresse in
ecrire_sortie mon_out_channel chaine;
close_out mon_out_channel;;
let s_graphe_densmore = graph_to_dotgraph ~graphname:"densmore" ~directed:false ~verb:false graphe_densmore;;
let s_graphe_vaccins = graph_to_dotgraph ~graphname:"vaccins" ~directed:false ~verb:false graphe_vaccins;;
let s_graphe_csa = graph_to_dotgraph ~graphname:"csa" ~directed:false ~verb:false graphe_csa;;
let s_graphe_restaurant = graph_to_dotgraph ~graphname:"restaurant" ~directed:false ~verb:false graphe_restaurant;;
ecrire_dans_fichier ~chaine:s_graphe_densmore ~adresse:"/tmp/densmore.dot" ;;
(* Sys.command "fdp -Tpng /tmp/densmore.dot > images/densmore.png";; *)
ecrire_dans_fichier ~chaine:s_graphe_vaccins ~adresse:"/tmp/vaccins.dot" ;;
(* Sys.command "fdp -Tpng /tmp/vaccins.dot > images/vaccins.png";; *)
ecrire_dans_fichier ~chaine:s_graphe_csa ~adresse:"/tmp/csa.dot" ;;
(* Sys.command "fdp -Tpng /tmp/csa.dot > images/csa.png";; *)
ecrire_dans_fichier ~chaine:s_graphe_restaurant ~adresse:"/tmp/restaurant.dot" ;;
(* Sys.command "fdp -Tpng /tmp/restaurant.dot > images/restaurant.png";; *)
```
On pourrait étendre cette fonction pour qu'elle prenne les intervalles initiaux, pour afficher des bonnes étiquettes et pas des entiers, et un coloriage pour colorer directement les noeuds, mais ça prend du temps pour pas grand chose.
----
## Conclusion
Voilà pour la question obligatoire de programmation, sur l'algorithme de coloriage.
- on a décomposé le problème en sous-fonctions,
- on a fait des exemples et *on les garde* dans ce qu'on présente au jury,
- on a testé la fonction exigée sur de petits exemples et sur un exemple de taille réelle (venant du texte)
Et on n'a pas essayé de faire *un peu plus*.
Avec plus de temps, on aurait aussi pu écrire un algorithme pour calculer le stable maximum (ensemble de sommets indépendants de taille maximale).
> Bien-sûr, ce petit notebook ne se prétend pas être une solution optimale, ni exhaustive.
| github_jupyter |