### Training a Graph Convolution Model
Now that we have the data appropriately formatted, we can use this data to train a Graph Convolution model. First we need to import the necessary libraries.
```
import deepchem as dc
from deepchem.models import GraphConvModel
import numpy as np
import sys
import pandas as pd
import seaborn as sns
from rdkit.Chem import PandasTools
from tqdm.auto import tqdm
```
Now let's define a function to create a GraphConvModel. In this case we will be creating a classification model. Since we will be applying the model later to a different dataset, it's a good idea to create a directory in which to store the model.
```
def generate_graph_conv_model():
batch_size = 128
model = GraphConvModel(1, batch_size=batch_size, mode='classification', model_dir="./model_dir")
return model
```
Now we will read in the dataset that we just created.
```
dataset_file = "dude_erk2_mk01.csv"
tasks = ["is_active"]
featurizer = dc.feat.ConvMolFeaturizer()
loader = dc.data.CSVLoader(tasks=tasks, feature_field="SMILES", featurizer=featurizer)
dataset = loader.create_dataset(dataset_file, shard_size=8192)
```
Now that we have the dataset loaded, let's build a model.
We will create training and test sets to evaluate the model's performance. In this case we will use the RandomSplitter(). DeepChem offers a number of other splitters, such as the ScaffoldSplitter, which divides the dataset by chemical scaffold, or the ButinaSplitter, which first clusters the data and then splits the dataset so that different clusters end up in the training and test sets.
```
splitter = dc.splits.RandomSplitter()
```
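The other splitters mentioned above are drop-in replacements for `RandomSplitter` (shown here only as a sketch; a run would use just one of them):
```
# Alternatives to RandomSplitter; swap one in above if desired.
scaffold_splitter = dc.splits.ScaffoldSplitter()  # split by chemical scaffold
butina_splitter = dc.splits.ButinaSplitter()      # cluster compounds first, then split by cluster
```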
With the dataset split, we can train a model on the training set and test that model on the validation set.
At this point we can define some metrics and evaluate the performance of our model. In this case our dataset is unbalanced: we have a small number of active compounds and a large number of inactive compounds. Given this difference, we need to use a metric that reflects performance on unbalanced datasets. One metric that is appropriate for datasets like this is the Matthews correlation coefficient (MCC), which uses all four cells of the confusion matrix and ranges from -1 to +1, where +1 indicates perfect prediction, 0 is no better than random guessing, and -1 indicates total disagreement between predictions and labels.
```
metrics = [dc.metrics.Metric(dc.metrics.matthews_corrcoef, np.mean)]
```
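To get a feel for why MCC is preferred here, consider a toy example (illustrative only, using scikit-learn directly rather than the DeepChem metric wrapper): on a 20-compound set with 2 actives, a classifier that always predicts "inactive" scores 90% accuracy but an MCC of 0, while a classifier that finds one active (at the cost of one false positive) has the same accuracy but a much better MCC.
```
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [1, 1] + [0] * 18             # 2 actives out of 20 compounds
y_lazy = [0] * 20                      # always predict "inactive"
y_better = [1, 0] + [0] * 17 + [1]     # finds one active, adds one false positive

print(accuracy_score(y_true, y_lazy), matthews_corrcoef(y_true, y_lazy))      # 0.9 0.0
print(accuracy_score(y_true, y_better), matthews_corrcoef(y_true, y_better))  # 0.9 ~0.44
```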
In order to evaluate the performance of our model, we will perform 10 folds of cross-validation, where we train a model on the training set and validate it on the validation set.
```
training_score_list = []
validation_score_list = []
transformers = []
cv_folds = 10
for i in tqdm(range(0,cv_folds)):
model = generate_graph_conv_model()
train_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split(dataset)
model.fit(train_dataset)
train_scores = model.evaluate(train_dataset, metrics, transformers)
training_score_list.append(train_scores["mean-matthews_corrcoef"])
validation_scores = model.evaluate(valid_dataset, metrics, transformers)
validation_score_list.append(validation_scores["mean-matthews_corrcoef"])
print(training_score_list)
print(validation_score_list)
```
To visualize the performance of our models on the training and validation data, we can make boxplots of the scores.
```
sns.boxplot(x=["training"]*cv_folds+["validation"]*cv_folds,y=training_score_list+validation_score_list);
```
It is also useful to visualize the result of our model. In order to do this, we will generate a set of predictions for a validation set.
```
pred = [x.flatten() for x in model.predict(valid_dataset)]
pred
```
**The results of predict on a GraphConv model are returned as a list of lists. Is this the intent? It doesn't seem consistent across models. RandomForest returns a list. For convenience, we will put our predicted results into a Pandas dataframe.**
```
pred_df = pd.DataFrame(pred,columns=["neg","pos"])
```
We can easily add the activity class (1 = active, 0 = inactive) and the SMILES string for our predicted molecules to the dataframe. __Is the molecule id retained as part of the DeepChem dataset? I can't find it__
```
pred_df["active"] = [int(x) for x in valid_dataset.y]
pred_df["SMILES"] = valid_dataset.ids
pred_df.head()
pred_df.sort_values("pos",ascending=False).head(25)
sns.boxplot(x=pred_df.active,y=pred_df.pos)
```
The performance of our model is very good; we can see a clear separation between the active and inactive compounds. It appears that only one of our active compounds received a low positive score. Let's look more closely.
```
false_negative_df = pred_df.query("active == 1 & pos < 0.5").copy()
PandasTools.AddMoleculeColumnToFrame(false_negative_df,"SMILES","Mol")
false_negative_df
false_positive_df = pred_df.query("active == 0 & pos > 0.5").copy()
PandasTools.AddMoleculeColumnToFrame(false_positive_df,"SMILES","Mol")
false_positive_df
```
Now that we've evaluated our model's performance we can retrain the model on the entire dataset and save it.
```
model.fit(dataset)
```
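Because we passed `model_dir` to the constructor, the fitted parameters are checkpointed in that directory. A sketch of how the model might be reloaded later for the follow-up dataset (assuming the same architecture and `model_dir`; `restore()` loads the most recent checkpoint):
```
# Rebuild the same architecture pointing at ./model_dir, then load the latest checkpoint.
restored_model = generate_graph_conv_model()
restored_model.restore()
```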
# Transmission
```
%matplotlib inline
import numpy as np
np.seterr(divide='ignore') # Ignore divide by zero in log plots
from scipy import signal
import scipy.signal
from numpy.fft import fft, fftfreq
import matplotlib.pyplot as plt
#import skrf as rf # pip install scikit-rf if you want to run this one
```
First, let's set up a traditional, full-precision modulator and plot the spectrum of that as a baseline
```
def prbs(n=0, taps=[]):
state = [1]*n
shift = lambda s: [sum([s[i] for i in taps]) % 2] + s[0:-1]
out = []
for i in range(2**n - 1):
out.append(state[-1])
state = shift(state)
return out
prbs9 = lambda: prbs(n=9, taps=[4,8])
def make_carrier(freq=None, sample_rate=None, samples=None, phase=0):
t = (1/sample_rate)*np.arange(samples)
return np.real(np.exp(1j*(2*np.pi*freq*t - phase)))
def modulate_gmsk(bits, carrier_freq=2.402e9, sample_rate=5e9, baseband=False, phase_offset=0, include_phase=False):
symbol_rate = 1e6 # 1Mhz
BT = 0.5
bw = symbol_rate*BT/sample_rate
samples_per_symbol = int(sample_rate/symbol_rate)
# This looks scary but it's just a traditional gaussian distribution from wikipedia
kernel = np.array([(np.sqrt(2*np.pi/np.log(2))*bw)*np.exp(-(2/np.log(2))*np.power(np.pi*t*bw, 2)) for t in range(-5000,5000)])
kernel /= sum(kernel) # Normalize so the amplitude after convolution stays the same
rotation = np.repeat(bits, sample_rate/symbol_rate)*2.0 - 1.0
smoothed_rotation = np.convolve(rotation, kernel,mode='same')
angle_per_sample = (np.pi/2.0)/(samples_per_symbol)
current_angle = phase_offset
modulated = np.zeros((len(smoothed_rotation),), dtype=np.complex64) # Represents I and Q as a complex number
i = 0
for bit in smoothed_rotation:
current_angle += angle_per_sample*bit
modulated[i] = np.exp(1j*current_angle)
i += 1
if baseband:
return modulated
I = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(modulated), phase=0)
Q = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(modulated), phase=np.pi/2)
if include_phase:
return np.real(modulated)*I + np.imag(modulated)*Q, np.angle(modulated)
return np.real(modulated)*I + np.imag(modulated)*Q
```
Now let's look at the FFT of this...
```
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
```
This is clean (as one would expect), now let's see what happens if we reduce things to 1-bit of precision by just rounding
# The Naive Approach (Rounding)
```
sample_rate=5e9
modulates_5g = modulated = np.sign(modulate_gmsk(prbs9(), sample_rate=sample_rate))
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
_Oof_ this is not pretty. What's happening here is that (I think) the aliases are mixing with each other to produce these interference patterns. In this case, the big subharmonics are spaced about 200 MHz apart, which makes sense given that the alias of 2.402 GHz lands at 2.598 GHz when folded around the 2.5 GHz Nyquist frequency.
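As a quick sanity check of that arithmetic, here is a tiny hypothetical helper (not part of the signal chain) that just folds the carrier around the Nyquist frequency; you can plug in the other sample rates used below to predict where their images land:
```
# First image (alias) of a carrier after sampling: reflect it around the Nyquist frequency.
def image_freq(carrier, sample_rate):
    nyquist = sample_rate / 2
    return nyquist + (nyquist - carrier)

print(image_freq(2.402e9, 5e9) / 1e9)  # ~2.598 GHz, about 200 MHz above the carrier
```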
```
sample_rate = 6e9
modulated_6g = modulated = np.sign(modulate_gmsk(prbs9(), sample_rate=sample_rate))
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Unfiltered")
```
Ok, in this case, the alias is at `3 + (3 - 2.402) = 3.598 GHz`. The difference between this and 2.402 GHz is about 1.2 GHz, which matches the spacing to the next big peak, so this makes sense. From this math, we can intuit that it's a good idea for the sample rate to be a whole-number multiple of the carrier frequency. In the ideal case, 4 times the carrier:
```
sample_rate = 2.402e9*4
modulated_4x = modulated = np.sign(modulate_gmsk(prbs9(), sample_rate=sample_rate))
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
There are a couple of challenges here, however:
1. In order to get the clean(ish) spectrum, we have to clock the output at a rate tied to the carrier frequency. If we only intended to use one frequency this would be fine, but Bluetooth (as an example) hops between frequencies constantly by design. This might be doable, but it's kind of painful (it might require various SERDES resets, which aren't instantaneous).
2. At 2.402 GHz, 4x the carrier would be... 9.6 GHz, which is too fast for my (low-end-ish) SERDES, which maxes out around 6 GHz.
# Adding a Reconstruction Filter
In order to prevent a friendly visit from an unmarked FCC van, it's more or less mandatory that we filter noise outside of the band of interest. In our case, I have a tiny 2.4 GHz surface mount band pass filter that I've put onto a test board. This is the delightfully named "DEA252450BT-2027A1", which has the following frequency response:

To (more fully) characterize this filter, I hooked it up to a NanoVNA2 and saved its S parameters using a NanoVNA Saver:
```
# pip install scikit-rf if you want to run this one
# Note: running this before we've plotted anything, borks matplotlib
import skrf as rf
filter2_4 = rf.Network('2_4ghzfilter.s2p')
filter2_4.s21.plot_s_db()
```
Hey, that's not too far off from the data sheet (at least up to 4.4 GHz).
To turn this into a filter, we can use scikit-rf to compute an impulse response, which we can then convolve with our input data to see what the filtered output would be:
```
ts, ms = filter2_4.s21.impulse_response()
impulse_response = ms[list(ts).index(0):]
impulse_response = impulse_response/np.max(impulse_response)
tstep = ts[1] - ts[0]
print("Timestep {} seconds, frequency {:e} hz".format(tstep, 1/tstep))
plt.plot(impulse_response)
plt.gca().set_xlim(0, 300)
```
This is great and all, but the impulse response is sampled at north of 30 GHz (!). Our output SERDES runs at around 6 GHz, so let's resample it to that rate.
```
# Truncate the impulse response so we can get relatively close to 6ghz
trunc = impulse_response[:-4]
size = int((tstep*(len(trunc) - 1))/(1/6e9) + 1)
print(size)
impulse_response_6g = scipy.signal.resample(impulse_response, size)
plt.plot(impulse_response_6g)
plt.gca().set_xlim(0, 50)
```
Not quite as pretty, but it's what we need. Let's verify that this does "the thing" by filtering our 6ghz signal:
```
sample_rate=6e9
fftm = np.abs(fft(np.convolve(modulated_6g, impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(b=True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
This looks better, but the passband of my filter is still super wide (hundreds of MHz, not surprising for a 50-cent filter; I should look at the B39242B9413K610, which is a $1 surface acoustic wave filter). We see some nontrivial imaging up to -12 dB, which is... not great.
What to do?
# Delta Sigma Modulation
A way around this is to use something called Delta Sigma Modulation. The way to think about this conceptually is that we keep a running sum of values we've output (think of this as the error) and factor this into the value we decide to output (versus just blindly rounding the current value). Further, you can filter this feedback loop to "shape" the noise to different parts of the spectrum (that we can filter out elsewhere).
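As a minimal sketch of that idea, here is a plain first-order modulator with no band-specific noise shaping (`pwm2` below is the filtered, second-order version):
```
def dsm1(sig):
    # Keep a running quantization-error accumulator and fold it into the next decision.
    acc = 0.0
    out = np.zeros(len(sig))
    for i, s in enumerate(sig):
        v = s + acc            # current sample plus accumulated error
        out[i] = np.sign(v)    # 1-bit quantizer
        acc = v - out[i]       # the error we just made, fed back on the next sample
    return out
```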
A good place to read about this is [Wikipedia](https://en.wikipedia.org/wiki/Delta-sigma_modulation#Oversampling). In [Novel Architectures for Flexible and Wideband All-digital Transmitters](https://ria.ua.pt/bitstream/10773/23875/1/Documento.pdf) by Rui Fiel Cordeiro, Rui proposes using a filter that has a zero at the carrier of interest, which looks like the following
```
def pwm2(sig, k=1.0):
z1 = 0.0
z2 = 0.0
out = np.zeros((len(sig,)))
for i in range(len(sig)):
v = sig[i] - (k*z1 + z2)
out[i] = np.sign(v)
z2 = z1
z1 = v - out[i]
return out
```
To be clear, `pwm2` is replacing `np.sign`
```
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
modulatedsd5 = modulated = pwm2(modulated, k=-2.0*np.cos(2.0*np.pi*2.402e9/sample_rate))
fftm = np.abs(fft(np.sign(modulated)))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Second order Delta Sigma Modulation")
```
Now let's filter this with our output filter
```
fftm = np.abs(fft(np.convolve(modulatedsd5, impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(b=True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Filtered Second Order Delta Sigma Modulation")
```
This is better in the immediate vicinity of our signal.
You'll notice on the wikipedia page that we can use increasing filter orders to increase the steepness of the valley around our signal of interest.
On one hand this is good, but because our filter is not very good (tm) this actually results in higher peaks than we'd like at around 2.2ghz.
Given that our filter is... not that good, can we design the filter in the modulator to complement it?
# Filter-Aware Sigma Delta Modulator
I lay no claim to this awesome work by the folks who wrote pydsm, but it's great -- feed it an impulse response for a reconstruction filter and it will optimize a noise transfer function that matches it:
```
from pydsm.ir import impulse_response
from pydsm.delsig import synthesizeNTF, simulateDSM, evalTF
from pydsm.delsig import dbv, dbp
from pydsm.NTFdesign import quantization_noise_gain
from pydsm.NTFdesign.legacy import q0_from_filter_ir
from pydsm.NTFdesign.weighting import ntf_fir_from_q0
H_inf = 1.6 # Maximum out-of-band gain of the noise transfer function (delsig convention)
q0 = q0_from_filter_ir(51, impulse_response_6g) # 51 is the number of filter coefficients
ntf_opti = ntf_fir_from_q0(q0, H_inf=H_inf)
```
Let's see how well we did. Anecdotally, this is not a _great_ solution, but I'd wager that's because the oversampling rate is super low.
```
# Take the frequency response
samples = filter2_4.s21.s_db[:,0,0]
# Normalize the samples
ff = filter2_4.f/6e9
# Compute frequency response data
resp_opti = evalTF(ntf_opti, np.exp(1j*2*np.pi*ff))
# Plot the output filter,
plt.figure()
plt.plot(ff*6e9, dbv(resp_opti), 'r', label="Optimal NTF")
plt.plot(ff*6e9, samples, 'b', label="External Filter")
plt.plot(ff*6e9, dbv(resp_opti) + samples, 'g', label="Resulting Noise Shape")
plt.gca().set_xlim(0, 3e9)
plt.legend(loc="lower right")
plt.suptitle("Output filter and NTFs")
```
Ok, so it's not amazing but definitely an improvement. But now that we've got this monstrous 49 coefficient NTF, how do we modulate with it?
Fortunately, pydsm comes to the rescue!
```
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
xx_opti = simulateDSM(modulated, ntf_opti)
fftm = np.abs(fft(np.convolve(xx_opti[0], impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(b=True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
Ok, so we've basically "filled in the valley" with the peaks from either side. We've cut the max spurs down by about 3 dB. Not amazing, but not bad!
After looking around at SAW filters I realized how impressive they can be in this frequency range, so I ordered one (CBPFS-2441) to try. Unfortunately, the datasheets only show _drawings_ of the parameters (and only phase), and actual s2p files are impossible to find. This seems dumb. Nevertheless, https://apps.automeris.io/wpd/ exists, which allows you to estimate the underlying data of a graph from an image.
```
import csv
from scipy.interpolate import interp1d
traced = np.array([(float(f), float(d)) for f,d in csv.reader(open('saw_filter_traced.csv'))])
# Interpolate to 600 equally spaced points (this means 1200 total, so 1200 * 5MHz -> 6GHz sampling rate)
x = traced[:,0]
y = -1*traced[:,1]
f = interp1d(x, y)
x = np.array(np.linspace(5, 3000, 600))
y = np.array(f(x))
x = np.concatenate((np.flip(x)*-1, np.array([0]), x))
# In FFT format
y_orig = 10**(np.concatenate((np.array([-70]), y, np.flip(y)))/10)
y = 10**(np.concatenate((np.flip(y), np.array([-70]), y))/10.0)
plt.plot(x, 10*np.log10(y))
```
Let's look at the impulse response quickly
```
impulse = np.fft.ifft(y_orig)
impulse_trunc = impulse[:300]
plt.plot(np.real(impulse_trunc))
```
**Update:** The filter finally arrived and I was able to characterize it, as shown below...
(the remaining code uses the measured filter response rather than the one traced from the image)
```
sawfilter = rf.Network('crysteksawfilter.s2p')
sawfilter.s21.plot_s_db()
filter2_4.s21.plot_s_db()
ts, ms = sawfilter.s21.impulse_response()
impulse_response = ms[list(ts).index(0):]
impulse_response = impulse_response/np.max(impulse_response)
tstep = ts[1] - ts[0]
print("Timestep {} seconds, frequency {:e} hz".format(tstep, 1/tstep))
plt.plot(impulse_response)
plt.gca().set_xlim(0, 600)
plt.show()
trunc = impulse_response[:-2]
size = int((tstep*(len(trunc) - 1))/(1/6e9) + 1)
print(size)
impulse_response_6g = scipy.signal.resample(impulse_response, size)
plt.plot(impulse_response_6g)
plt.gca().set_xlim(0, 400)
```
Wow that is a fair bit sharper.
```
H_inf = 1.5
q0 = q0_from_filter_ir(49, np.real(impulse_response_6g))
ntf_opti = ntf_fir_from_q0(q0, H_inf=H_inf)
# Take the frequency response
#samples = 10*np.log10(y)
# Normalize the samples
#ff = x*1e6/6e9
# Take the frequency response
samples = sawfilter.s21.s_db[:,0,0]
# Normalize the samples
ff = sawfilter.f/6e9
# Compute frequency response data
resp_opti = evalTF(ntf_opti, np.exp(1j*2*np.pi*ff))
# Plot the output filter,
plt.figure()
plt.plot(ff*6e9, dbv(resp_opti), 'r', label="Optimal NTF")
plt.plot(ff*6e9, samples, 'b', label="External Filter")
plt.plot(ff*6e9, dbv(resp_opti) + samples, 'g', label="Resulting Noise Shape")
plt.gca().set_xlim(0, 3e9)
plt.legend(loc="lower left")
plt.suptitle("Output filter and NTFs")
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
xx_opti = simulateDSM(modulated, ntf_opti)
fftm = np.abs(fft(np.convolve(xx_opti[0], impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(b=True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Optimized Filtered Output")
```
Wow, the baseline noise level has dropped by almost 10 dB! Impressive!
# Symbol Dictionaries
Now that we've figured out how much noise we can stifle with this setup, we can begin to design our transmitter.
Now you may notice that the above noise transfer function filter is... quite expensive, clocking in at 51 coefficients. While we might be able to implement this on our FPGA, a better question is -- can we avoid it?
Given that we're transmitting digital data with a finite number of symbols, it turns out we can just pre-compute the symbols, store them in a dictionary and then play back the relevant pre-processed symbol when we need to transmit a given symbol. Simple!
Except, GMSK is not _quite_ that simple in this context, because we have to consider not only 1s and 0s but also where we currently are on the phase plot. If you think about GMSK visually on a constellation diagram, one symbol is represented by a 90 degree arc on the unit circle that is either moving clockwise or counterclockwise. This is further complicated by the fact that the gaussian smoothing makes the velocity of the arc potentially slow down if the next bit is different from the current bit (because it needs to gradually change direction).
The result of this (if you enumerate all the combinations) is that we actually end up with a 32-symbol table. This is not the _only_ way to simplify these symbols, nor the most efficient, but it's the simplest from an implementation perspective. I spent some time figuring out a train of bits that would iterate through each symbol. I'm sure there's a more optimal pattern, but efficiency is not hugely important when we only need to run this once when precomputing.
```
carrier_freq = 2.402e9
sample_rate = 6e9
symbol_rate = 1e6
samples_per_symbol = int(sample_rate/symbol_rate)
# Used to test that we've mapped things correctly.
# Note that this returns the phase angle, not the output bits
def demodulate_gmsk(sig, phase_offset=0):
I = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=0 + phase_offset)
Q = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=np.pi/2 + phase_offset)
# Mix down to (complex) baseband
down = sig*I + 1j*sig*Q
# Create a low pass filter at the symbol rate
sos = signal.butter(5, symbol_rate, 'low', fs=sample_rate, output='sos')
filtered_down = signal.sosfilt(sos, down)
# Take the phase angle of the baseband
return np.angle(filtered_down)
# The sequence of bits to modulate
seq = [0, 0, 0, 1, 1, 1,
0, 0, 1, 0, 1, 1,
0, 0,
1, 0, 1, 0, 1, 0,
0, 1,
1, 0, 1, 0, 0, 0]
# The relevant samples to pull out and store in the dictionary
samples = np.array([1, 4, 7, 10, 14, 17, 22, 25])
fig, axs = plt.subplots(4, 8, sharey=True, figsize=(24, 12))
dictionary = np.zeros((4*8, samples_per_symbol))
for q in range(4):
current_angle = [0, np.pi/2, np.pi, np.pi*3/2][q]
# Modulate the sequence with our optimized delta-sigma modulator
modulated, angle = modulate_gmsk(seq, phase_offset=current_angle, sample_rate=sample_rate, include_phase=True)
modulated = simulateDSM(modulated, ntf_opti)[0]
demodulated = demodulate_gmsk(modulated, phase_offset=0)
n = 0
for i in samples:
iqsymbol = modulated[samples_per_symbol*i:samples_per_symbol*(i+1)]
dictionary[q*8 + n,:] = iqsymbol
axs[q, n].plot(np.unwrap(angle[samples_per_symbol*i:samples_per_symbol*(i+1)]))
n += 1
```
With these established, let's concatenate a few symbols together, demodulate to phase angle and make sure things look nice and smooth
```
def sim(out):
carrier=2.402e9
I = make_carrier(freq=carrier, sample_rate=sample_rate, samples=len(out), phase=0)
Q = make_carrier(freq=carrier, sample_rate=sample_rate, samples=len(out), phase=np.pi/2)
sos = signal.butter(2, symbol_rate, 'low', fs=sample_rate, output='sos')
rx_baseband = signal.sosfilt(sos, out*I + 1j*out*Q)
plt.plot(np.angle(rx_baseband))
sim(np.concatenate((dictionary[4,:], dictionary[5,:], dictionary[4,:], dictionary[5,:])))
sim(-1.0*np.concatenate((dictionary[13,:], dictionary[12,:], dictionary[13,:], dictionary[12,:])))
sim(np.concatenate((dictionary[21,:], dictionary[20,:], dictionary[21,:], dictionary[20,:])))
sim(-1.0*np.concatenate((dictionary[28,:], dictionary[29,:], dictionary[28,:], dictionary[29,:])))
```
Now, in order to synthesize this, we need a bit more logic to map between a bit stream and its respective symbols.
Note that there is additional state (i.e. the current phase offset) that factors into the symbol encoding beyond just the symbol value itself, which makes things a bit more complicated than most other forms of simple modulation. The code below keeps track of the starting phase angle at a given symbol, as well as the bits before and after it, in order to output the right symbol.
```
idx = {
'000': 0,
'111': 1,
'001': 2,
'011': 3,
'010': 4,
'101': 5,
'110': 6,
'100': 7
}
start_q = [
[3, 2, 3, 2, 2, 3, 2, 3],
[0, 3, 0, 3, 3, 0, 3, 0],
[1, 0, 1, 0, 0, 1, 0, 1],
[2, 1, 2, 1, 1, 2, 1, 2]
]
def encode(bitstream):
out = np.zeros((len(bitstream)*samples_per_symbol,))
q = 0
prev = bitstream[0]
bitstream = bitstream + [bitstream[-1]] # Pad at the end so we can do a lookup
syms = []
for i in range(len(bitstream) - 1):
n = idx[str(prev) + str(bitstream[i]) + str(bitstream[i+1])]
d = -1
for j in range(4):
if start_q[j][n] == q:
d = j*8 + n
assert d != -1
syms.append(d)
out[i*samples_per_symbol:(i+1)*samples_per_symbol] = dictionary[d]
if bitstream[i]:
q = (q + 1) % 4
else:
q = (q + 4 - 1) % 4
prev = bitstream[i]
return out, syms
# Whitened bits from elsewhere
wbits = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
out, syms = encode([1 - b for b in wbits])
# Let's look at the resulting symbol indexes
print(syms)
```
As a reminder, the dictionary is really just one bit of precision:
```
dictionary[0][:100]
```
Let's demodulate the encoded bits to check that things make sense (note that the filtering will delay the output a bit in time, but it demodulates correctly)
```
def demodulate_gmsk(sig):
carrier_freq=2.402e9
I = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=0)
Q = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=np.pi/2)
# Mix down to (complex) baseband
down = sig*I + 1j*sig*Q
# Create a low pass filter at the symbol rate
sos = signal.butter(5, symbol_rate, 'low', fs=sample_rate, output='sos')
filtered_down = signal.sosfilt(sos, down)
# Take the phase angle of the baseband
angle = np.unwrap(np.angle(filtered_down))
# Take the derivative of the phase angle and hard limit it to 1:-1
return -(np.sign(angle[1:] - angle[:-1]) + 1.0)/2.0
plt.figure(figsize=(40,3))
plt.plot(demodulate_gmsk(out))
plt.plot(np.repeat(wbits, int(sample_rate/1e6)) + 1.5)
plt.gca().set_xlim(0, 0.6e6)
fftout = np.abs(fft(out))
fftout = fftout/np.max(fftout)
plt.figure(figsize=(10,6))
plt.plot(fftfreq(len(out), d=1/sample_rate), 10*np.log10(fftout))  # log10 for dB, matching the earlier plots
plt.gca().set_xlim(0, 3e9)
plt.gca().set_ylim(-80, 0)
plt.title("BLE Packet Before Reconstruction Filter")
plt.show()
fftm = np.abs(fft(np.convolve(out, impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
plt.figure(figsize=(10,6))
plt.plot(fftfreq(len(out), d=1/sample_rate), 10*np.log10(fftm))
plt.gca().set_xlim(0, 3e9)
plt.gca().set_ylim(-80, 0)
plt.title("BLE Packet After Reconstruction Filter")
plt.show()
plt.figure(figsize=(10,6))
plt.plot(fftfreq(len(out), d=1/sample_rate), 10*np.log10(fftm))
plt.gca().set_xlim(2.402e9 - 5e6, 2.402e9 + 5e6)
plt.gca().set_ylim(-80, 0)
plt.title("BLE Packet After Reconstruction Filter (10MHz span)")
```
The library used to generate the NTF filter uses a copyleft license, so rather than integrate that into the code, we save out the resulting symbol waveforms and use those directly.
```
np.save('../data/gmsk_2402e6_6e9.npy', dictionary)
```
# Telescopes: Tutorial 5
This notebook will build on the previous tutorials, showing more features of the `PsrSigSim`. Details will be given for new features, while other features have been discussed in the previous tutorial notebook. This notebook shows the details of different telescopes currently included in the `PsrSigSim`, how to call them, and how to define a user `telescope` for a simulated observation.
We again simulate precision pulsar timing data with high signal-to-noise pulse profiles in order to clearly show the input pulse profile in the final simulated data product. We note that the use of different telescopes will result in different signal strengths, as would be expected.
This example will follow the previous notebooks in defining all necessary classes except for `telescope`.
```
# import some useful packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# import the pulsar signal simulator
import psrsigsim as pss
```
## The Folded Signal
Here we will use the same `Signal` definitions that have been used in the previous tutorials. We will again simulate a 20-minute-long observation total, with subintegrations of 1 minute. The other simulation parameters will be 64 frequency channels each 12.5 MHz wide (for 800 MHz bandwidth).
We will simulate a real pulsar, J1713+0747, as we have a premade profile for this pulsar. The period, DM, and other relevant pulsar parameters come from the NANOGrav 11-yr data release.
```
# Define our signal variables.
f0 = 1500 # center observing frequency in MHz
bw = 800.0 # observation bandwidth in MHz
Nf = 64 # number of frequency channels
# We define the pulse period early here so we can similarly define the frequency
period = 0.00457 # pulsar period in seconds for J1713+0747
f_samp = (1.0/period)*2048*10**-6 # sample rate of data in MHz (here 2048 samples across the pulse period)
sublen = 60.0 # subintegration length in seconds, or rate to dump data at
# Now we define our signal
signal_1713_GBT = pss.signal.FilterBankSignal(fcent = f0, bandwidth = bw, Nsubband=Nf, sample_rate = f_samp,
sublen = sublen, fold = True) # fold is set to `True`
```
## The Pulsar and Profiles
Now we will load the pulse profile as in Tutorial 3 and initialize a single `Pulsar` object.
```
# First we load the data array
path = 'psrsigsim/data/J1713+0747_profile.npy'
J1713_dataprof = np.load(path)
# Now we define the data profile
J1713_prof = pss.pulsar.DataProfile(J1713_dataprof)
# Define the values needed for the pulsar
Smean = 0.009 # The mean flux of J1713+0747 at 1400 MHz from the ATNF pulsar catalog, here 0.009 Jy
psr_name = "J1713+0747" # The name of our simulated pulsar
# Now we define the pulsar with the scaled J1713+0747 profiles
pulsar_J1713 = pss.pulsar.Pulsar(period, Smean, profiles=J1713_prof, name = psr_name)
# define the observation length
obslen = 60.0*20 # seconds, 20 minutes in total
```
## The ISM
Here we define the `ISM` class used to disperse the simulated pulses.
```
# Define the dispersion measure
dm = 15.921200 # pc cm^-3
# And define the ISM object, note that this class takes no initial arguements
ism_sim = pss.ism.ISM()
```
## Defining Telescopes
Here we will show how to use the two predefined telescopes, Green Bank and Arecibo, and the systems associated with them. We will also show how to define a `telescope` from scratch, so that any current or future telescopes and systems can be simulated.
### Predefined Telescopes
We start off by showing the two predefined telescopes.
```
# Define the Green Bank Telescope
tscope_GBT = pss.telescope.telescope.GBT()
# Define the Arecibo Telescope
tscope_AO = pss.telescope.telescope.Arecibo()
```
Each telescope is made up of one or more `systems`, each consisting of a `Receiver` and a `Backend`. For the predefined telescopes, the systems for the `GBT` are the L-band GUPPI system and the 800 MHz GUPPI system. For `Arecibo` these are the 430 MHz PUPPI system and the L-band PUPPI system. One can check what these systems and their parameters are, as we show below.
```
# Information about the GBT systems
print(tscope_GBT.systems)
# We can also find out information about a receiver that has been defined here
rcvr_LGUP = tscope_GBT.systems['Lband_GUPPI'][0]
print(rcvr_LGUP.bandwidth, rcvr_LGUP.fcent, rcvr_LGUP.name)
```
### Defining a new system
One can also add a new system to one of these existing telescopes, similarly to what will be done when defining a new telescope from scratch. Here we will add the 350 MHz receiver with the GUPPI backend to the Green Bank Telescope.
First we define a new `Receiver` and `Backend` object. The `Receiver` object needs a center frequency of the receiver in MHz, a bandwidth in MHz to be centered on that center frequency, and a name. The `Backend` object needs only a name and a sampling rate in MHz. This sampling rate should be the maximum sampling rate of the backend, as it will allow lower sampling rates, but not higher sampling rates.
```
# First we define a new receiver
rcvr_350 = pss.telescope.receiver.Receiver(fcent=350, bandwidth=100, name="350")
# And then we want to use the GUPPI backend
guppi = pss.telescope.backend.Backend(samprate=3.125, name="GUPPI")
# Now we add the new system. This needs just the receiver, backend, and a name
tscope_GBT.add_system(name="350_GUPPI", receiver=rcvr_350, backend=guppi)
# And now we check that it has been added
print(tscope_GBT.systems["350_GUPPI"])
```
### Defining a new telescope
We can also define a new telescope from scratch. In addition to needing the `Receiver` and `Backend` objects to define at least one system, the `telescope` also needs the aperture size in meters, the total area in meters^2, the system temperature in kelvin, and a name. Here we will define a small 3-meter aperture circular radio telescope that you might find at a University or somebody's backyard.
```
# We first need to define the telescope parameters
aperture = 3.0 # meters
area = (0.5*aperture)**2*np.pi # meters^2
Tsys = 250.0 # kelvin, note this is not a realistic system temperature for a backyard telescope
name = "Backyard_Telescope"
# Now we can define the telescope
tscope_bkyd = pss.telescope.Telescope(aperture, area=area, Tsys=Tsys, name=name)
```
Now similarly to defining a new system before, we must add a system to our new telescope by defining a receiver and a backend. Since this just represents a little telescope, the system won't be comparable to the previously defined telescope.
```
rcvr_bkyd = pss.telescope.receiver.Receiver(fcent=1400, bandwidth=20, name="Lband")
backend_bkyd = pss.telescope.backend.Backend(samprate=0.25, name="Laptop") # Note this is not a realistic sampling rate
# Add the system to our telecope
tscope_bkyd.add_system(name="bkyd", receiver=rcvr_bkyd, backend=backend_bkyd)
# And now we check that it has been added
print(tscope_bkyd.systems)
```
## Observing with different telescopes
Now that we have three different telescopes, we can observe our simulated pulsar with all three and compare the sensitivity of each telescope for the same initial `Signal` and `Pulsar`. Since the radiometer noise from the telescope is added directly to the signal though, we will need to define two additional `Signals` and create pulses for them before we can observe them with different telescopes.
```
# We define three new, similar, signals, one for each telescope
signal_1713_AO = pss.signal.FilterBankSignal(fcent = f0, bandwidth = bw, Nsubband=Nf, sample_rate = f_samp,
sublen = sublen, fold = True)
# Our backyard telescope will need slightly different parameters to be comparable to the other signals
f0_bkyd = 1400.0 # center frequency of our backyard telescope
bw_bkyd = 20.0 # Bandwidth of our backyard telescope
Nf_bkyd = 1 # only process one frequency channel 20 MHz wide for our backyard telescope
signal_1713_bkyd = pss.signal.FilterBankSignal(fcent = f0_bkyd, bandwidth = bw_bkyd, Nsubband=Nf_bkyd, \
sample_rate = f_samp, sublen = sublen, fold = True)
# Now we make pulses for all three signals
pulsar_J1713.make_pulses(signal_1713_GBT, tobs = obslen)
pulsar_J1713.make_pulses(signal_1713_AO, tobs = obslen)
pulsar_J1713.make_pulses(signal_1713_bkyd, tobs = obslen)
# And disperse them
ism_sim.disperse(signal_1713_GBT, dm)
ism_sim.disperse(signal_1713_AO, dm)
ism_sim.disperse(signal_1713_bkyd, dm)
# And now we observe with each telescope, note the only change is the system name. First the GBT
tscope_GBT.observe(signal_1713_GBT, pulsar_J1713, system="Lband_GUPPI", noise=True)
# Then Arecibo
tscope_AO.observe(signal_1713_AO, pulsar_J1713, system="Lband_PUPPI", noise=True)
# And finally our little backyard telescope
tscope_bkyd.observe(signal_1713_bkyd, pulsar_J1713, system="bkyd", noise=True)
```
Now we can look at the simulated data and compare the sensitivity of the different telescopes. We first plot the observation from the GBT, then Arecibo, and then our newly defined backyard telescope.
```
# We first plot the first two pulses in frequency-time space to show the undispersed pulses
time = np.linspace(0, obslen, len(signal_1713_GBT.data[0,:]))
# Since we know there are 2048 bins per pulse period, we can index the appropriate amount
plt.plot(time[:4096], signal_1713_GBT.data[0,:4096], label = signal_1713_GBT.dat_freq[0])
plt.plot(time[:4096], signal_1713_GBT.data[-1,:4096], label = signal_1713_GBT.dat_freq[-1])
plt.ylabel("Intensity")
plt.xlabel("Time [s]")
plt.legend(loc = 'best')
plt.title("L-band GBT Simulation")
plt.show()
plt.close()
# And the 2-D plot
plt.imshow(signal_1713_GBT.data[:,:4096], aspect = 'auto', interpolation='nearest', origin = 'lower', \
extent = [min(time[:4096]), max(time[:4096]), signal_1713_GBT.dat_freq[0].value, signal_1713_GBT.dat_freq[-1].value])
plt.ylabel("Frequency [MHz]")
plt.xlabel("Time [s]")
plt.colorbar(label = "Intensity")
plt.show()
plt.close()
# Since we know there are 2048 bins per pulse period, we can index the appropriate amount
plt.plot(time[:4096], signal_1713_AO.data[0,:4096], label = signal_1713_AO.dat_freq[0])
plt.plot(time[:4096], signal_1713_AO.data[-1,:4096], label = signal_1713_AO.dat_freq[-1])
plt.ylabel("Intensity")
plt.xlabel("Time [s]")
plt.legend(loc = 'best')
plt.title("L-band AO Simulation")
plt.show()
plt.close()
# And the 2-D plot
plt.imshow(signal_1713_AO.data[:,:4096], aspect = 'auto', interpolation='nearest', origin = 'lower', \
extent = [min(time[:4096]), max(time[:4096]), signal_1713_AO.dat_freq[0].value, signal_1713_AO.dat_freq[-1].value])
plt.ylabel("Frequency [MHz]")
plt.xlabel("Time [s]")
plt.colorbar(label = "Intensity")
plt.show()
plt.close()
# Since we know there are 2048 bins per pulse period, we can index the appropriate amount
plt.plot(time[:4096], signal_1713_bkyd.data[0,:4096], label = "1400.0 MHz")
plt.ylabel("Intensity")
plt.xlabel("Time [s]")
plt.legend(loc = 'best')
plt.title("L-band Backyard Telescope Simulation")
plt.show()
plt.close()
```
We can see that, as expected, the Arecibo telescope is more sensitive than the GBT when observing over the same timescale. We can also see that even though the simulated pulsar here is easily visible with these large telescopes, our backyard telescope is not able to see the pulsar over the same amount of time, since the output is pure noise. The `PsrSigSim` can be used to determine the approximate sensitivity of an observation of a simulated pulsar with any given telescope that can be defined.
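As a back-of-the-envelope cross-check (not using PsrSigSim's internals), the relative signal-to-noise between two telescopes roughly follows the radiometer equation, S/N proportional to (A_eff / T_sys) * sqrt(t_obs * bandwidth). The backyard numbers come from above, while the GBT-like values in this sketch (100 m aperture, ~25 K system temperature) are rough assumptions rather than values taken from the simulator:
```
def relative_snr(area, tsys, bw_mhz, tobs_s):
    """Radiometer-equation scaling, up to a common constant factor."""
    return (area / tsys) * np.sqrt(bw_mhz * 1e6 * tobs_s)

# Backyard telescope values defined above
snr_bkyd = relative_snr(area=(0.5*3.0)**2*np.pi, tsys=250.0, bw_mhz=20.0, tobs_s=obslen)
# Rough GBT-like values (assumed: 100 m dish, ~25 K system temperature, 800 MHz band)
snr_gbt = relative_snr(area=(0.5*100.0)**2*np.pi, tsys=25.0, bw_mhz=800.0, tobs_s=obslen)
print(f"GBT-like / backyard sensitivity ratio: {snr_gbt/snr_bkyd:.0f}x")
```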
### Note about randomly generated pulses and noise
`PsrSigSim` uses `numpy.random` under the hood in order to generate the radio pulses and various types of noise. If a user desires or requires that this randomly generated data is reproducible we recommend using a call to the seed generator native to `Numpy` before calling the function that produces the random noise/pulses. Newer versions of `Numpy` are moving toward slightly different [functionality/syntax](https://numpy.org/doc/stable/reference/random/index.html), but are essentially used in the same way.
```
np.random.seed(1776)
pulsar_1.make_pulses(signal_1, tobs=obslen)
```
## FCLA/FNLA Fast.ai Numerical/Computational Linear Algebra
### Lecture 3: New Perspectives on NMF, Randomized SVD
Notes / In-Class Questions
WNixalo - 2018/2/8
Question on section: [Truncated SVD](http://nbviewer.jupyter.org/github/fastai/numerical-linear-algebra/blob/master/nbs/2.%20Topic%20Modeling%20with%20NMF%20and%20SVD.ipynb#More-Details)
Given A: `m` x `n` and Q: `m` x `r`; is Q the identity matrix?
$$A \approx QQ^TA$$
```
import torch
import numpy as np
Q = np.eye(3)
print(Q)
print(Q.T)
print(Q @ Q.T)
# construct I matrix
Q = torch.eye(3)
# torch matrix multip
# torch.mm(Q, Q.transpose)
Q @ torch.t(Q)
```
So if A is *approx equal* to Q•Q.T•A .. but *not* equal.. then Q is **not** the identity, but is very close to it.
Oh, right. Q: m x r, **not** m x m...
If both the columns and rows of Q had been orthonormal, then it would have been the Identity, but only the columns (r) are orthonormal.
Q is a tall, skinny matrix.
---
AW gives range(A). AW has far more rows than columns ==> in practice these columns are approximately orthonormal (v.unlikely to get lin-dep cols when choosing random values).
QR decomposition is foundational to Numerical Linear Algebra.
Q consists of orthonormal columns, R is upper-triangular.
**Calculating Truncated-SVD:**
1\. Compute approximation to range(A). We want Q with r orthonormal columns such that $$A\approx QQ^TA$$
2\. Construct $B = Q^T A$, which is small ($r\times n$)
3\. Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$): $B = S\, Σ V^T$
4\. Since: $$A \approx QQ^TA = Q(S \, ΣV^T)$$ if we set $U = QS$, then we have a low rank approximation $A \approx UΣV^T$.
**How to choose $r$?**
If we wanted to get 5 topics from a matrix of 100 columns, as a rule of thumb we'd ask for about 15 columns instead. You don't want to pull exactly the number you want, because of the randomized component, so you add some buffer.
Since our projection is approximate, we make it a little bigger than we need.
**Implementing Randomized SVD:**
First we want a randomized range finder.
```
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn import decomposition
from scipy import linalg
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(suppress=True)
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
# newsgroups_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove)
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data).todense() # (documents, vocab)
vocab = np.array(vectorizer.get_feature_names())
num_top_words=8
def show_topics(a):
top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]]
topic_words = ([top_words(t) for t in a])
return [' '.join(t) for t in topic_words]
# computes an orthonormal matrix whose range approximates the range of A
# power_iteration_normalizer can be safe_sparse_dot (fast but unstable), LU (imbetween), or QR (slow but most accurate)
def randomized_range_finder(A, size, n_iter=5):
# randomly init our Mat to our size; size: num_cols
Q = np.random.normal(size=(A.shape[1], size))
# LU decomp (lower triang * upper triang mat)
# improves accuracy & normalizes
for i in range(n_iter):
Q, _ = linalg.lu(A @ Q, permute_l=True)
Q, _ = linalg.lu(A.T @ Q, permute_l=True)
# QR decomp on A & Q
Q, _ = linalg.qr(A @ Q, mode='economic')
return Q
```
Randomized SVD method:
```
def randomized_svd(M, n_components, n_oversamples=10, n_iter=4):
# number of random columns we're going to create is the number of
# columns we want + number of oversamples (extra buffer)
n_random = n_components + n_oversamples
Q = randomized_range_finder(M, n_random, n_iter)
# project M to the (k + p) dimensional space using basis vectors
B = Q.T @ M
# compute SVD on the thin matrix: (k + p) wide
Uhat, s, V = linalg.svd(B, full_matrices=False)
del B
U = Q @ Uhat
# return the number of components we want from U, s, V
return U[:, :n_components], s[:n_components], V[:n_components, :]
%time u, s, v = randomized_svd(vectors, 5)
u.shape, s.shape, v.shape
show_topics(v)
```
Computational complexity of a full SVD for an M`x`N matrix is $M^2N+N^3$, so Randomized (Truncated?) SVD is a *massive* improvement.
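To see that difference in practice, you can time a full SVD of the term-document matrix against the randomized version (a sketch; the full decomposition of `vectors` is slow and memory-hungry, so only run it if you're patient):
```
%time U_full, s_full, Vt_full = linalg.svd(vectors, full_matrices=False)
%time u, s, v = randomized_svd(vectors, 5)
```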
---
2018/3/7
Write a loop to calculate the error of your decomposition as you vary the # of topics. Plot the results.
```
# 1. how do I calculate decomposition error?:
# I guess I'll use MSE?
# # NumPy: # https://stackoverflow.com/questions/16774849/mean-squared-error-in-numpy
# def MSEnp(A,B):
# if type(A) == np.ndarray and type(B) == np.ndarray:
# return ((A - B) ** 2).mean()
# else:
# return np.square((A - B)).mean()
# Scikit-Learn:
from sklearn import metrics
MSE = metrics.mean_squared_error # usg: mse(A,B)
# 2. Now how to recompose my decomposition?:
%time B = vectors # original matrix
%time U, S, V = randomized_svd(B, 10) # num_topics = 10
# S is vector of Σ's singular values. Convert back to matrix:
%time Σ = S * np.eye(S.shape[0])
# from SVD formula: A ≈ U@Σ@V.T
%time A = U@Σ@V ## apparently randomized_svd returns V.T, not V ?
# 3. Finally calculated error I guess:
%time mse_error = MSE(A,B)
print(mse_error)
# Im putting way too much effort into this lol
def fib(n):
if n <= 1:
return n
else:
f1 = 1
f2 = 0
for i in range(n):
t = f1 + f2
tmp = f2
f2 += f1
f1 = tmp
return t
for i,e in enumerate(num_topics):
print(f'Topics: {num_topics[i]:>3} ',
f'Time: {num_topics[i]:>3}')
## Setup
import time
B = vectors
num_topics = [fib(i) for i in range(2,14)]
TnE = [] # time & error
## Loop:
for n_topics in num_topics:
t0 = time.time()
U, S, Vt = randomized_svd(B, n_topics)
Σ = S * np.eye(S.shape[0])
A = U@Σ@Vt
TnE.append([time.time() - t0, MSE(A,B)])
for i, tne in enumerate(TnE):
print(f'Topics: {num_topics[i]:>3} '
f'Time: {np.round(tne[0],3):>3} '
f'Error: {np.round(tne[1],12):>3}')
# https://matplotlib.org/users/pyplot_tutorial.html
plt.plot(num_topics, [tne[1] for tne in TnE])
plt.xlabel('No. Topics')
plt.ylabel('MSE Error')
plt.show()
## R.Thomas' class solution:
step = 20
n = 20
error = np.zeros(n)
for i in range(n):
U, s, V = randomized_svd(vectors, i * step)
reconstructed = U @ np.diag(s) @ V
error[i] = np.linalg.norm(vectors - reconstructed)
plt.plot(range(0,n*step,step), error)
```
Looks like she used the Norm instead of MSE. Same curve shape.
Here's why I used the fibonacci sequence for my topic numbers. This solution took much longer than mine (i=20 vs i=12) with more steps, yet mine appears smoother. Why? I figured this was the shape of curve I'd get: ie interesting bit is in the beginning, so I used a number sequence that spread out as you went so you'd get higher resolution early on. Yay.
---
**NOTE**: random magical superpower Machine Learning Data Analytics *thing*: ***Johnson-Lindenstrauss lemma***:
basically, if you have a matrix with too many columns to work with (leading to overfitting or whatever else), multiply it by some random matrix (a random projection down to fewer columns) and you'll approximately preserve the geometry of the data, but in a workable shape
https://en.wikipedia.org/wiki/Johnson-Lindenstrauss_lemma
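A tiny sketch of the lemma in action (all dimensions here are picked arbitrarily): project 10,000-dimensional points down to 500 dimensions with a random Gaussian matrix and check that pairwise distances are roughly preserved.
```
from scipy.spatial.distance import pdist

n, d, k = 50, 10000, 500
X = np.random.randn(n, d)               # 50 points in 10,000 dimensions
R = np.random.randn(d, k) / np.sqrt(k)  # random projection matrix (d -> k, not square)
ratios = pdist(X @ R) / pdist(X)        # pairwise distances after vs. before projection
print(ratios.min(), ratios.max())       # typically within ~10% of 1
```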
# USDA Unemployment
<hr>
```
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
```
# Data
## US Unemployment data by county
Economic Research Service
U.S. Department of Agriculture
link:
### Notes
- The Year 2020, Median Household Income (2019), and % of State Median HH Income columns had 78 NaN values, all from Puerto Rico.
- I am going to drop all rows from Puerto Rico, since Puerto Rico does not show up in any of the other USDA data. If we want it back in, it will be easy to re-add the Puerto Rico data.
## Constants
<hr>
```
stats_master_list = ['Vermont',
'Mississippi',
'Maine',
'Montana',
'Washington',
'District of Columbia',
'Texas',
'Alabama',
'Michigan',
'Maryland',
'Rhode Island',
'South Dakota',
'Nebraska',
'Virginia',
'Florida',
'Utah',
'Louisiana',
'Missouri',
'Massachusetts',
'South Carolina',
'Pennsylvania',
'Tennessee',
'Minnesota',
'Idaho',
'Alaska',
'Oklahoma',
'North Dakota',
'Arkansas',
'Georgia',
'New Hampshire',
'Indiana',
'Puerto Rico',
'New Jersey',
'Delaware',
'West Virginia',
'Colorado',
'New York',
'Kansas',
'Arizona',
'Ohio',
'Hawaii',
'Illinois',
'Oregon',
'North Carolina',
'California',
'Kentucky',
'Wyoming',
'Iowa',
'Nevada',
'Connecticut',
'Wisconsin',
'New Mexico']
# column Names
columns = [ 'FIPS ', 'Name',
'2012', 2013,
2014, 2015,
2016, 2017,
2018, 2019,
'2020', 'Median Household Income (2019)',
'% of State Median HH Income']
"""
Duplicate check 3
from
https://thispointer.com/python-3-ways-to-check-if-there-are-duplicates-in-a-list/
"""
def checkIfDuplicates_3(listOfElems):
''' Check if given list contains any duplicates '''
for elem in listOfElems:
if listOfElems.count(elem) > 1:
return True
return False
```
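For reference, the same check can be done in linear time with a set; a minimal alternative sketch:
```
def has_duplicates(list_of_elems):
    ''' True if the given list contains any duplicates '''
    return len(list_of_elems) != len(set(list_of_elems))
```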
## File management
<hr>
```
files = os.listdir("../data_raw/USDA_gov-unemplyment/")
# remove mac file
files.remove('.DS_Store')
#files
```
# Example of the csv files
<hr>
```
# random peek
df = pd.read_excel('../data_raw/USDA_gov-unemplyment/UnemploymentReport (14).xlsx', skiprows=2)
df.shape
df.head()
df.tail()
```
# Create master DataFrame
<hr>
```
# Concat
# create master file
master_df = pd.DataFrame(columns = columns)
state_name_list = []
# LOOP
for file in files:
# read excel file
_df = pd.read_excel('../data_raw/USDA_gov-unemplyment/'+file, skiprows=2)
# read state_name
state_name = _df.iloc[0,1]
# DROP
#drop row 0
_df.drop(0, inplace = True)
# Drop the last row
_df.drop(_df.tail(1).index, inplace = True)
# work around to drop NaN column
_temp_df = _df.iloc[:,0:12]
# work around to drop NaN column
_temp_df['% of State Median HH Income'] = _df['% of State Median HH Income']
# add Column for STATE name
# add state column
_temp_df['state'] = state_name
state_name_list.append(state_name)
# Concat
master_df = pd.concat([master_df, _temp_df])
```
<br>
## Dataframe clean up
<hr>
```
# reset Index
master_df.reset_index(drop = True, inplace = True )
master_df.columns
# Rename columns
master_df.rename(columns = {'FIPS ':'FIPS'}, inplace = True)
# shape
master_df.shape
master_df.head()
```
## Remove rows with all nan's
<hr>
```
master_df.isna().sum()
master_df[ master_df['FIPS'].isnull()].head()
nan_rows = master_df[ master_df['FIPS'].isnull()].index
nan_rows
len(nan_rows)
# remove rows with all Nans
master_df.drop(nan_rows, inplace = True)
master_df.isna().sum()
master_df[ master_df['2020'].isnull()].iloc[20:25,:]
```
- There are 78 rows that have NaNs for 2020,
- all of the remaining rows with NaNs are from Puerto Rico
- I am going to remove the Puerto Rico rows because the other USDA data sets do not include Puerto Rico
```
master_df[ master_df['state'] == 'Puerto Rico' ].index
# Drop all Rows with state as Puerto Rico
index_names = master_df[ master_df['state'] == 'Puerto Rico' ].index
master_df.drop(index_names, inplace = True)
master_df.drop([], inplace = True )
master_df.isna().sum()
master_df.shape
```
<br>
# Sanity Check
<hr>
```
# unique Count of stats
master_df['state'].nunique()
len(state_name_list)
# checks if there are duplicates in state list
checkIfDuplicates_3(state_name_list)
master_df['state'].nunique()
```
# Write to CSV
<hr>
```
master_df.to_csv('../data/USDA/USDA_unemployment.csv', index=False)
master_df.shape
```
<br>
# EDA
```
master_df.shape
master_df.head(2)
plt.figure(figsize = (17, 17))
sns.scatterplot(data = master_df, x = '2020', y = "Median Household Income (2019)", hue = 'state');
plt.xlabel("% of unemployment")
plt.title("% of Unemployment by Household Median income 2019")
set(master_df['FIPS'])
```
<a href="https://colab.research.google.com/github/modichirag/flowpm/blob/master/notebooks/flowpm_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%pylab inline
from flowpm import linear_field, lpt_init, nbody, cic_paint
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from scipy.interpolate import InterpolatedUnivariateSpline as iuspline
klin = np.loadtxt('../flowpm/data/Planck15_a1p00.txt').T[0]
plin = np.loadtxt('../flowpm/data/Planck15_a1p00.txt').T[1]
ipklin = iuspline(klin, plin)
import flowpm
stages = np.linspace(0.1, 1.0, 10, endpoint=True)
initial_conditions = flowpm.linear_field(128, # size of the cube
100, # Physical size of the cube
ipklin, # Initial powerspectrum
batch_size=1)
# Sample particles
state = flowpm.lpt_init(initial_conditions, a0=0.1)
# Evolve particles down to z=0
final_state = flowpm.nbody(state, stages, 128)
# Retrieve final density field
final_field = flowpm.cic_paint(tf.zeros_like(initial_conditions), final_state[0])
with tf.Session() as sess:
sim = sess.run(final_field)
imshow(sim[0].sum(axis=0))
def _binomial_kernel(num_channels, dtype=tf.float32):
"""Creates a 5x5x5 b-spline kernel.
Args:
num_channels: The number of channels of the image to filter.
dtype: The type of an element in the kernel.
Returns:
A tensor of shape `[5, 5, 5, num_channels, num_channels]`.
"""
kernel = np.array((1., 4., 6., 4., 1.), dtype=dtype.as_numpy_dtype())
kernel = np.einsum('ij,k->ijk', np.outer(kernel, kernel), kernel)
kernel /= np.sum(kernel)
kernel = kernel[:, :, :, np.newaxis, np.newaxis]
return tf.constant(kernel, dtype=dtype) * tf.eye(num_channels, dtype=dtype)
def _downsample(cube, kernel):
"""Downsamples the image using a convolution with stride 2.
"""
return tf.nn.conv3d(
input=cube, filters=kernel, strides=[1, 2, 2, 2, 1], padding="SAME")
def _upsample(cube, kernel, output_shape=None):
"""Upsamples the image using a transposed convolution with stride 2.
"""
if output_shape is None:
output_shape = tf.shape(input=cube)
output_shape = (output_shape[0], output_shape[1] * 2, output_shape[2] * 2,
output_shape[3] * 2, output_shape[4])
return tf.nn.conv3d_transpose(
cube,
kernel * 2.0**3,
output_shape=output_shape,
strides=[1, 2, 2, 2, 1],
padding="SAME")
def _build_pyramid(cube, sampler, num_levels):
"""Creates the different levels of the pyramid.
"""
kernel = _binomial_kernel(1, dtype=cube.dtype)
levels = [cube]
for _ in range(num_levels):
cube = sampler(cube, kernel)
levels.append(cube)
return levels
def _split(cube, kernel):
"""Splits the image into high and low frequencies.
This is achieved by smoothing the input image and substracting the smoothed
version from the input.
"""
low = _downsample(cube, kernel)
high = cube - _upsample(low, kernel, tf.shape(input=cube))
return high, low
def downsample(cube, num_levels, name=None):
"""Generates the different levels of the pyramid (downsampling).
"""
with tf.name_scope(name, "pyramid_downsample", [cube]):
cube = tf.convert_to_tensor(value=cube)
return _build_pyramid(cube, _downsample, num_levels)
def merge(levels, name=None):
"""Merges the different levels of the pyramid back to an image.
"""
with tf.name_scope(name, "pyramid_merge", levels):
levels = [tf.convert_to_tensor(value=level) for level in levels]
cube = levels[-1]
kernel = _binomial_kernel(tf.shape(input=cube)[-1], dtype=cube.dtype)
for level in reversed(levels[:-1]):
cube = _upsample(cube, kernel, tf.shape(input=level)) + level
return cube
def split(cube, num_levels, name=None):
"""Generates the different levels of the pyramid.
"""
with tf.name_scope(name, "pyramid_split", [cube]):
cube = tf.convert_to_tensor(value=cube)
kernel = _binomial_kernel(tf.shape(input=cube)[-1], dtype=cube.dtype)
low = cube
levels = []
for _ in range(num_levels):
high, low = _split(low, kernel)
levels.append(high)
levels.append(low)
return levels
def upsample(cube, num_levels, name=None):
"""Generates the different levels of the pyramid (upsampling).
"""
with tf.name_scope(name, "pyramid_upsample", [cube]):
cube = tf.convert_to_tensor(value=cube)
return _build_pyramid(cube, _upsample, num_levels)
field = tf.expand_dims(final_field, -1)
# Split field into short range and large scale components
levels = split(field, 1)
levels
# Compute forces on both fields
def force(field):
shape = field.get_shape()
batch_size, nc = shape[1], shape[2].value
kfield = flowpm.utils.r2c3d(field)
kvec = flowpm.kernels.fftk((nc, nc, nc), symmetric=False)
lap = tf.cast(flowpm.kernels.laplace_kernel(kvec), tf.complex64)
fknlrange = flowpm.kernels.longrange_kernel(kvec, 0)
kweight = lap * fknlrange
pot_k = tf.multiply(kfield, kweight)
f = []
for d in range(3):
force_dc = tf.multiply(pot_k, flowpm.kernels.gradient_kernel(kvec, d))
forced = flowpm.utils.c2r3d(force_dc)
f.append(forced)
return tf.stack(f, axis=-1)
force_levels = [force(levels[0][...,0]), force(levels[1][...,0])*2]
force_levels
rec = merge(force_levels)
rec
# Direct force computation on input field
dforce = force(field[...,0])
with tf.Session() as sess:
sim, l0, l1, r, df = sess.run([final_field, force_levels[0], force_levels[1], rec, dforce])
figure(figsize=(15,5))
subplot(131)
imshow(sim[0].sum(axis=1))
title('Input')
subplot(132)
imshow(l0[0].sum(axis=1)[...,0])
title('short range forces')
subplot(133)
imshow(l1[0].sum(axis=1)[...,0]);
title('long range forces')
figure(figsize=(15,5))
subplot(131)
imshow(r[0].sum(axis=1)[...,0]);
title('Multi-Grid Force Computation')
subplot(132)
imshow(df[0].sum(axis=1)[...,0]);
title('Direct Force Computation')
subplot(133)
imshow((r - df)[0,8:-8,8:-8,8:-8].sum(axis=1)[...,0]);
title('Residuals');
levels = split(field, 4)
rec = merge(levels)
with tf.Session() as sess:
sim, l0, l1, l2, l3, r = sess.run([final_field, levels[0], levels[1], levels[2], levels[3], rec[...,0]])
figure(figsize=(25,10))
subplot(151)
imshow(sim[0].sum(axis=0))
title('Input')
subplot(152)
imshow(l0[0].sum(axis=0)[...,0])
title('l1')
subplot(153)
imshow(l1[0].sum(axis=0)[...,0]);
title('l2')
subplot(154)
imshow(l2[0].sum(axis=0)[...,0]);
title('l3')
subplot(155)
imshow(l3[0].sum(axis=0)[...,0]);
title('approximation')
figure(figsize=(25,10))
subplot(131)
imshow(sim[0].sum(axis=0))
title('Input')
subplot(132)
imshow(r[0].sum(axis=0))
title('Reconstruction')
subplot(133)
imshow((sim - r)[0].sum(axis=0));
title('Difference')
```
| true |
code
| 0.774402 | null | null | null | null |
|
# First Graph Convolutional Neural Network
This notebook shows training of a simple GCN on the KrasHras dataset from [Zamora-Resendiz and Crivelli, 2019](https://www.biorxiv.org/content/10.1101/610444v1.full).
```
import gcn_prot
import torch
import torch.nn.functional as F
from os.path import join, pardir
from random import seed
ROOT_DIR = pardir
seed = 8
```
## Table of contents
1. [Initialize Data](#Initialize-Data)
## Initialize Data
The data for this experiment is the one used for testing on the [CI of the repository](https://github.com/carrascomj/gcn-prot/blob/master/.travis.yml). Thus, it is already fetched.
The first step is to calculate the length of the largest protein (in number of amino acids), since all the proteins will be zero-padded to that value. That way, all the inputs fed to the model will have the same length.
```
largest = gcn_prot.data.get_longest(join(ROOT_DIR, "new_data", "graph"))
print(f"Largest protein has {largest} amino acids")
```
However, for this particular dataset, it is known from the aforementioned publication that 185 is enough, because the 4 terminal amino acids were not well determined and will later be discarded by the mask.
```
largest = 185
data_path = join(ROOT_DIR, "new_data")
```
The split is performed with 70/10/20 for train/test/valid.
Note that the generated datasets (custom child classes of `torch.utils.data.Dataset`) don't store the graphs in memory, only their paths, generating each graph on the fly when it is accessed by index.
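As a rough sketch of that lazy-loading pattern (this is not the actual `gcn_prot` implementation; here `torch.load` on pre-saved files stands in for the real graph parsing):
```
import torch
from torch.utils.data import Dataset

class LazyGraphDataset(Dataset):
    """Sketch of a path-backed dataset: only the file paths live in memory."""
    def __init__(self, graph_paths, labels):
        self.graph_paths = graph_paths  # list of paths, cheap to hold
        self.labels = labels

    def __len__(self):
        return len(self.graph_paths)

    def __getitem__(self, idx):
        # the expensive loading/parsing happens here, only for the requested index
        graph = torch.load(self.graph_paths[idx])
        return graph, self.labels[idx]
```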
```
train, test, valid = gcn_prot.data.get_datasets(
data_path=data_path,
nb_nodes=largest,
task_type="classification",
nb_classes=2,
split=[0.7, 0.2, 0.1],
seed=42,
)
print(f"Train: {len(train)}\nTest: {len(test)}\nValidation: {len(valid)}")
type(train)
```
## Define the neural network
Each instance in the dataset retrieves a list of four matrices:
1. **feature matrix**: 29 x 185. This corresponds to the amino acid type (one-hot encoded vector of length 23), residue depth, residue orientation and 4 features encoding the positional index with a sinusoidal transformation.
2. **coordinates**: 3 x 185. x, y, z coordinates of every amino acid in the crystal (centered).
3. **mask**: to be applied to the adjacency to discard ill-identified amino acids.
4. **y**: the label, one of 2 classes (Kras/Hras).
The transformation of this list into the inputs of the neural network (feature matrix, adjacency matrix) is performed during training.
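For intuition, a common way to derive an adjacency matrix from the coordinates is a distance-cutoff contact map. The sketch below is only illustrative; the 8 Å cutoff and the masking step are assumptions, not necessarily what `gcn_prot` does internally:
```
import torch

def contact_map_adjacency(coords, mask, cutoff=8.0):
    """Illustrative only: adjacency from (3, N) coordinates via a distance cutoff."""
    xyz = coords.t()                                                   # (N, 3)
    dist = torch.cdist(xyz.unsqueeze(0), xyz.unsqueeze(0)).squeeze(0)  # (N, N) pairwise distances
    adj = (dist < cutoff).float()                                      # contact map
    m = mask.float()
    return adj * m.view(-1, 1) * m.view(1, -1)                         # zero out masked residues
```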
```
model = gcn_prot.models.GCN_simple(
feats=29, # features in feature matrix
hidden=[8, 8], # number of neurons in convolutional layers (3 in this case)
label=2, # features on y
nb_nodes=largest, # for last layer
dropout=0, # applied in the convolutional layers
bias=False, # default
act=F.relu, # default
cuda=True # required for sparsize and fit_network
).cuda()
```
Now, instantiate the criterion and the optimizer.
```
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.CrossEntropyLoss().cuda()
```
## Train the network
```
%matplotlib inline
save_path = join(ROOT_DIR, "models", "GCN_tiny_weigths.pt")
model_na = gcn_prot.models.fit_network(
model, train, test, optimizer, criterion,
batch_size=20, # a lot of batches per epoch
epochs=20,
debug=True, # will print progress of epochs
plot_every=5, # loss plot/epoch
save=save_path # best weights (test set) will be saved here
)
```
Debug with validation.
```
model.eval()
for batch in torch.utils.data.DataLoader(
valid, shuffle=True, batch_size=2, drop_last=False
):
print(gcn_prot.models.train.forward_step(batch, model, False))
```
| true |
code
| 0.701854 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-cscv/exp-cscv_cscv_1w_ale_plotting.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Experiment Description
> This notebook is for experiment \<exp-cscv\> and data sample \<cscv\>.
### Initialization
```
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data (if you are using Colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/exp-cscv/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
```
### Loading data
```
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'cscv'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
```
### ALE Plots
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
        axes = axes_list if len(confs) == 1 else axes_list[subplot_idx]
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 19])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
```
| true |
code
| 0.484868 | null | null | null | null |
|
# SAT Analysis
**We wish to answer the question: is the SAT a fair test?**
## Read in the data
```
import pandas as pd
import numpy as np
import re
data_files = [
"ap_2010.csv",
"class_size.csv",
"demographics.csv",
"graduation.csv",
"hs_directory.csv",
"sat_results.csv"
]
data = {}
for file in data_files:
df = pd.read_csv("schools/{0}".format(file))
data[file.replace(".csv", "")] = df
```
# Read in the surveys
```
all_survey = pd.read_csv("schools/survey_all.txt", delimiter="\t", encoding='windows-1252')
d75_survey = pd.read_csv("schools/survey_d75.txt", delimiter="\t", encoding='windows-1252')
survey = pd.concat([all_survey, d75_survey], axis=0)
survey["DBN"] = survey["dbn"]
survey_fields = [
"DBN",
"rr_s",
"rr_t",
"rr_p",
"N_s",
"N_t",
"N_p",
"saf_p_11",
"com_p_11",
"eng_p_11",
"aca_p_11",
"saf_t_11",
"com_t_11",
"eng_t_11",
"aca_t_11",
"saf_s_11",
"com_s_11",
"eng_s_11",
"aca_s_11",
"saf_tot_11",
"com_tot_11",
"eng_tot_11",
"aca_tot_11",
]
survey = survey[survey_fields]
data["survey"] = survey
```
# Add DBN columns
```
data["hs_directory"]["DBN"] = data["hs_directory"]["dbn"]
def pad_csd(num):
str_rep = str(num)
if len(str_rep) > 1:
return str_rep
else:
return "0" + str_rep
data["class_size"]["padded_csd"] = data["class_size"]["CSD"].apply(pad_csd)
data["class_size"]["DBN"] = data["class_size"]["padded_csd"] + data["class_size"]["SCHOOL CODE"]
```
# Convert columns to numeric
```
cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score']
for c in cols:
data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce")
data['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]]
def find_lat(loc):
    coords = re.findall(r"\(.+, .+\)", loc)
lat = coords[0].split(",")[0].replace("(", "")
return lat
def find_lon(loc):
    coords = re.findall(r"\(.+, .+\)", loc)
lon = coords[0].split(",")[1].replace(")", "").strip()
return lon
data["hs_directory"]["lat"] = data["hs_directory"]["Location 1"].apply(find_lat)
data["hs_directory"]["lon"] = data["hs_directory"]["Location 1"].apply(find_lon)
data["hs_directory"]["lat"] = pd.to_numeric(data["hs_directory"]["lat"], errors="coerce")
data["hs_directory"]["lon"] = pd.to_numeric(data["hs_directory"]["lon"], errors="coerce")
```
# Condense datasets
Condensing the datasets so that no two rows share the same **DBN**, allowing all the datasets to be easily joined on **DBN**
```
class_size = data["class_size"]
class_size = class_size[class_size["GRADE "] == "09-12"]
class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"]
class_size = class_size.groupby("DBN").agg(np.mean)
class_size.reset_index(inplace=True)
data["class_size"] = class_size
data["demographics"] = data["demographics"][data["demographics"]["schoolyear"] == 20112012]
data["graduation"] = data["graduation"][data["graduation"]["Cohort"] == "2006"]
data["graduation"] = data["graduation"][data["graduation"]["Demographic"] == "Total Cohort"]
```
# Convert AP scores to numeric
```
cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5']
for col in cols:
data["ap_2010"][col] = pd.to_numeric(data["ap_2010"][col], errors="coerce")
```
# Combine the datasets
Merging the datasets on the **DBN** column
```
combined = data["sat_results"]
combined = combined.merge(data["ap_2010"], on="DBN", how="left")
combined = combined.merge(data["graduation"], on="DBN", how="left")
to_merge = ["class_size", "demographics", "survey", "hs_directory"]
for m in to_merge:
combined = combined.merge(data[m], on="DBN", how="inner")
combined = combined.fillna(combined.mean())
combined = combined.fillna(0)
```
# Add a school district column for mapping
```
def get_first_two_chars(dbn):
return dbn[0:2]
combined["school_dist"] = combined["DBN"].apply(get_first_two_chars)
```
# Find correlations
```
correlations = combined.corr()
correlations = correlations["sat_score"]
correlations
```
# Plotting survey correlations
```
# Remove DBN since it's a unique identifier, not a useful numerical value for correlation.
survey_fields.remove("DBN")
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
fig, ax = plt.subplots(figsize = (8,5))
correlations[survey_fields].plot.bar()
plt.show()
```
#### Findings from above plot
There are high correlations between N_s, N_t, N_p and sat_score. Since these columns are correlated with total_enrollment, it makes sense that they would be high.
It is more interesting that rr_s, the student response rate, or the percentage of students that completed the survey, correlates with sat_score. This might make sense because students who are more likely to fill out surveys may be more likely to also be doing well academically.
How students and teachers perceived safety (saf_t_11 and saf_s_11) correlates with sat_score. This makes sense, as it's hard to teach or learn in an unsafe environment.
The last interesting correlation is the aca_s_11, which indicates how the student perceives academic standards, correlates with sat_score, but this is not true for aca_t_11, how teachers perceive academic standards, or aca_p_11, how parents perceive academic standards.
## Investigating safety scores
```
combined.plot.scatter(x = "saf_s_11", y = "sat_score" )
plt.show()
```
There appears to be a correlation between SAT scores and safety, although it isn't that strong. It looks like there are a few schools with extremely high SAT scores and high safety scores. There are a few schools with low safety scores and low SAT scores. No school with a safety score lower than 6.5 has an average SAT score higher than 1500 or so.
## Plotting safety scores for districts in NYC
```
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
districts = combined.groupby("school_dist").agg(np.mean)
districts.reset_index(inplace=True)
m = Basemap(
projection='merc',
llcrnrlat=40.496044,
urcrnrlat=40.915256,
llcrnrlon=-74.255735,
urcrnrlon=-73.700272,
resolution='i'
)
m.drawmapboundary(fill_color='#85A6D9')
m.drawcoastlines(color='#6D5F47', linewidth=.4)
m.drawrivers(color='#6D5F47', linewidth=.4)
m.fillcontinents(color='#FFC58C',lake_color='#85A6D9')
longitudes = districts["lon"].tolist()
latitudes = districts["lat"].tolist()
m.scatter(longitudes, latitudes, s=50, zorder=2, latlon=True, c=districts["saf_s_11"], cmap="summer")
plt.show()
```
## Investigating racial differences
```
race_cols = ["white_per", "asian_per", "black_per", "hispanic_per"]
correlations[race_cols].plot.bar()
```
It shows that a higher percentage of white or asian students at a school correlates positively with sat_score, whereas a higher percentage of black or hispanic students correlates negatively with sat_score. This may be due to a lack of funding for schools in certain areas, which are more likely to have a higher percentage of black or hispanic students.
### Hispanic people vs SAT score
```
combined.plot.scatter(x = "hispanic_per", y = "sat_score")
plt.show()
bool_hispanic_95 = combined["hispanic_per"] > 95
combined[bool_hispanic_95]["SCHOOL NAME"]
```
The schools listed above appear to primarily be geared towards recent immigrants to the US. These schools have a lot of students who are learning English, which would explain the lower SAT scores.
```
bool_hispanic_10 = (combined["hispanic_per"] < 10) & (combined["sat_score"] > 1800)
combined[bool_hispanic_10]["SCHOOL NAME"]
```
Many of the schools above appear to be specialized science and technology schools that receive extra funding, and only admit students who pass an entrance exam. This doesn't explain the low hispanic_per, but it does explain why their students tend to do better on the SAT -- they are students from all over New York City who did well on a standardized test.
## Investigating gender differences
```
gender_cols = ["male_per", "female_per"]
correlations[gender_cols].plot.bar()
plt.show()
```
In the plot above, we can see that a high percentage of females at a school positively correlates with SAT score, whereas a high percentage of males at a school negatively correlates with SAT score. Neither correlation is extremely strong.
```
combined.plot.scatter(x = "female_per", y = "sat_score")
```
Based on the scatterplot, there doesn't seem to be any real correlation between sat_score and female_per. However, there is a cluster of schools with a high percentage of females (60 to 80), and high SAT scores.
```
bool_female = (combined["female_per"] > 60) & (combined["sat_score"] > 1700)
combined[bool_female]["SCHOOL NAME"]
```
These schools appear to be very selective liberal arts schools that have high academic standards.
## AP_test takers vs SAT
In the U.S., high school students take Advanced Placement (AP) exams to earn college credit. There are AP exams for many different subjects.
```
combined["ap_per"] = combined["AP Test Takers "]/ combined["total_enrollment"]
combined.plot.scatter(x = "ap_per", y = "sat_score")
```
It looks like there is a relationship between the percentage of students in a school who take the AP exam, and their average SAT scores. It's not an extremely strong correlation, though.
## Potential next steps
* Determining whether there's a correlation between class size and SAT scores
* Figuring out which neighborhoods have the best schools
* If we combine this information with a dataset containing property values, we could find the least expensive neighborhoods that have good schools.
* Investigating the differences between parent, teacher, and student responses to surveys.
* Assigning scores to schools based on sat_score and other attributes.
| true |
code
| 0.340705 | null | null | null | null |
|
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Peeker-Groups" data-toc-modified-id="Peeker-Groups-1"><span class="toc-item-num">1 </span>Peeker Groups</a></span></li></ul></div>
# Peeker Groups
`Peeker` objects are normally stored in a global list, but sometimes you might want
to create a group of `Peeker`s for a set of signals.
This is easily done using the `PeekerGroup` class.
Once again, I'll use the hierarchical adder example to illustrate the use of `PeekerGroup`s.
```
from myhdl import *
from myhdlpeek import Peeker, PeekerGroup
def adder_bit(a, b, c_in, sum_, c_out):
'''Single bit adder.'''
@always_comb
def adder_logic():
sum_.next = a ^ b ^ c_in
c_out.next = (a & b) | (a & c_in) | (b & c_in)
# Add some global peekers to monitor the inputs and outputs.
Peeker(a, 'a')
Peeker(b, 'b')
Peeker(c_in, 'c_in')
Peeker(sum_, 'sum')
Peeker(c_out, 'c_out')
return adder_logic
def adder(a, b, sum_):
'''Connect single-bit adders to create a complete adder.'''
c = [Signal(bool(0)) for _ in range(len(a)+1)] # Carry signals between stages.
s = [Signal(bool(0)) for _ in range(len(a))] # Sum bit for each stage.
stages = [] # Storage for adder bit instances.
# Create the adder bits and connect them together.
for i in range(len(a)):
stages.append( adder_bit(a=a(i), b=b(i), sum_=s[i], c_in=c[i], c_out=c[i+1]) )
# Concatenate the sum bits and send them out on the sum_ output.
@always_comb
def make_sum():
sum_.next = ConcatSignal(*reversed(s))
return instances() # Return all the adder stage instances.
# Create signals for interfacing to the adder.
a, b, sum_ = [Signal(intbv(0,0,8)) for _ in range(3)]
# Clear-out any existing peeker stuff before instantiating the adder.
Peeker.clear()
# Instantiate the adder.
add_1 = adder(a=a, b=b, sum_=sum_)
# Create a group of peekers to monitor the top-level buses.
# Each argument to PeekerGroup assigns a signal to a name for a peeker.
top_pkr = PeekerGroup(a_bus=a, b_bus=b, sum_bus=sum_)
# Create a testbench generator that applies random inputs to the adder.
from random import randrange
def test():
for _ in range(8):
a.next, b.next = randrange(0, a.max), randrange(0, a.max)
yield delay(1)
# Simulate the adder, testbench and peekers.
Simulation(add_1, test(), *Peeker.instances()).run()
# Display only the peekers for the top-level buses.
# The global peekers in the adder bits won't show up.
top_pkr.show_waveforms('a_bus b_bus sum_bus')
top_pkr.to_html_table('a_bus b_bus sum_bus')
```
| true |
code
| 0.327608 | null | null | null | null |
|
```
%matplotlib inline
```
Word Embeddings: Encoding Lexical Semantics
===========================================
Word embeddings are dense vectors of real numbers, one per word in your
vocabulary. In NLP, it is almost always the case that your features are
words! But how should you represent a word in a computer? You could
store its ascii character representation, but that only tells you what
the word *is*, it doesn't say much about what it *means* (you might be
able to derive its part of speech from its affixes, or properties from
its capitalization, but not much). Even more, in what sense could you
combine these representations? We often want dense outputs from our
neural networks, where the inputs are $|V|$ dimensional, where
$V$ is our vocabulary, but often the outputs are only a few
dimensional (if we are only predicting a handful of labels, for
instance). How do we get from a massive dimensional space to a smaller
dimensional space?
How about instead of ascii representations, we use a one-hot encoding?
That is, we represent the word $w$ by
\begin{align}\overbrace{\left[ 0, 0, \dots, 1, \dots, 0, 0 \right]}^\text{|V| elements}\end{align}
where the 1 is in a location unique to $w$. Any other word will
have a 1 in some other location, and a 0 everywhere else.
There is an enormous drawback to this representation, besides just how
huge it is. It basically treats all words as independent entities with
no relation to each other. What we really want is some notion of
*similarity* between words. Why? Let's see an example.
Suppose we are building a language model. Suppose we have seen the
sentences
* The mathematician ran to the store.
* The physicist ran to the store.
* The mathematician solved the open problem.
in our training data. Now suppose we get a new sentence never before
seen in our training data:
* The physicist solved the open problem.
Our language model might do OK on this sentence, but wouldn't it be much
better if we could use the following two facts:
* We have seen mathematician and physicist in the same role in a sentence. Somehow they
have a semantic relation.
* We have seen mathematician in the same role in this new unseen sentence
as we are now seeing physicist.
and then infer that physicist is actually a good fit in the new unseen
sentence? This is what we mean by a notion of similarity: we mean
*semantic similarity*, not simply having similar orthographic
representations. It is a technique to combat the sparsity of linguistic
data, by connecting the dots between what we have seen and what we
haven't. This example of course relies on a fundamental linguistic
assumption: that words appearing in similar contexts are related to each
other semantically. This is called the `distributional
hypothesis <https://en.wikipedia.org/wiki/Distributional_semantics>`__.
Getting Dense Word Embeddings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How can we solve this problem? That is, how could we actually encode
semantic similarity in words? Maybe we think up some semantic
attributes. For example, we see that both mathematicians and physicists
can run, so maybe we give these words a high score for the "is able to
run" semantic attribute. Think of some other attributes, and imagine
what you might score some common words on those attributes.
If each attribute is a dimension, then we might give each word a vector,
like this:
\begin{align}q_\text{mathematician} = \left[ \overbrace{2.3}^\text{can run},
\overbrace{9.4}^\text{likes coffee}, \overbrace{-5.5}^\text{majored in Physics}, \dots \right]\end{align}
\begin{align}q_\text{physicist} = \left[ \overbrace{2.5}^\text{can run},
\overbrace{9.1}^\text{likes coffee}, \overbrace{6.4}^\text{majored in Physics}, \dots \right]\end{align}
Then we can get a measure of similarity between these words by doing:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = q_\text{physicist} \cdot q_\text{mathematician}\end{align}
Although it is more common to normalize by the lengths:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = \frac{q_\text{physicist} \cdot q_\text{mathematician}}
{\| q_\text{physicist} \| \| q_\text{mathematician} \|} = \cos (\phi)\end{align}
Where $\phi$ is the angle between the two vectors. That way,
extremely similar words (words whose embeddings point in the same
direction) will have similarity 1. Extremely dissimilar words should
have similarity -1.
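For example, with PyTorch's built-in cosine similarity and the two hand-crafted vectors above (truncated to the three attributes shown):
```
import torch
import torch.nn.functional as F

q_mathematician = torch.tensor([2.3, 9.4, -5.5])
q_physicist = torch.tensor([2.5, 9.1, 6.4])

# values close to 1 mean the vectors point in nearly the same direction
print(F.cosine_similarity(q_mathematician, q_physicist, dim=0))
```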
You can think of the sparse one-hot vectors from the beginning of this
section as a special case of these new vectors we have defined, where
each word basically has similarity 0, and we gave each word some unique
semantic attribute. These new vectors are *dense*, which is to say their
entries are (typically) non-zero.
But these new vectors are a big pain: you could think of thousands of
different semantic attributes that might be relevant to determining
similarity, and how on earth would you set the values of the different
attributes? Central to the idea of deep learning is that the neural
network learns representations of the features, rather than requiring
the programmer to design them herself. So why not just let the word
embeddings be parameters in our model, and then be updated during
training? This is exactly what we will do. We will have some *latent
semantic attributes* that the network can, in principle, learn. Note
that the word embeddings will probably not be interpretable. That is,
although with our hand-crafted vectors above we can see that
mathematicians and physicists are similar in that they both like coffee,
if we allow a neural network to learn the embeddings and see that both
mathematicians and physicists have a large value in the second
dimension, it is not clear what that means. They are similar in some
latent semantic dimension, but this probably has no interpretation to
us.
In summary, **word embeddings are a representation of the *semantics* of
a word, efficiently encoding semantic information that might be relevant
to the task at hand**. You can embed other things too: part of speech
tags, parse trees, anything! The idea of feature embeddings is central
to the field.
Word Embeddings in Pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we get to a worked example and an exercise, a few quick notes
about how to use embeddings in Pytorch and in deep learning programming
in general. Similar to how we defined a unique index for each word when
making one-hot vectors, we also need to define an index for each word
when using embeddings. These will be keys into a lookup table. That is,
embeddings are stored as a $|V| \times D$ matrix, where $D$
is the dimensionality of the embeddings, such that the word assigned
index $i$ has its embedding stored in the $i$'th row of the
matrix. In all of my code, the mapping from words to indices is a
dictionary named word\_to\_ix.
The module that allows you to use embeddings is torch.nn.Embedding,
which takes two arguments: the vocabulary size, and the dimensionality
of the embeddings.
To index into this table, you must use torch.LongTensor (since the
indices are integers, not floats).
```
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor_hello = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor_hello)
print("hello_embed: ", hello_embed)
lookup_tensor_world = torch.tensor([word_to_ix["world"]], dtype=torch.long)
world_embed = embeds(lookup_tensor_world)
print("worlds_embed: ", world_embed)
```
An Example: N-Gram Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Recall that in an n-gram language model, given a sequence of words
$w$, we want to compute
\begin{align}P(w_i | w_{i-1}, w_{i-2}, \dots, w_{i-n+1} )\end{align}
Where $w_i$ is the ith word of the sequence.
In this example, we will compute the loss function on some training
examples and update the parameters with backpropagation.
```
CONTEXT_SIZE = 5
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ CONTEXT_SIZE context words ], target word)
ngrams = [([test_sentence[i + j] for j in range(CONTEXT_SIZE)], test_sentence[i + CONTEXT_SIZE])
for i in range(len(test_sentence) - CONTEXT_SIZE)]
# trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
# for i in range(len(test_sentence) - 2)]
print("the first 3 ngrams, just so you can see what they look like: ")
print(ngrams[:3])
print("the last 3 ngrams: ")
print(ngrams[-3:])
vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}
class NGramLanguageModeler(nn.Module):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGramLanguageModeler, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear1 = nn.Linear(context_size * embedding_dim, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs).view((1, -1))
out = F.relu(self.linear1(embeds))
out = self.linear2(out)
# print("out: ", out)
log_probs = F.log_softmax(out, dim=1)
# print("log probs: ", log_probs)
return log_probs
losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(1):
total_loss = 0
for context, target in ngrams:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print("losses: ", losses)
print("The loss decreased every iteration over the training data!")
```
Exercise: Computing Word Embeddings: Continuous Bag-of-Words
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep
learning. It is a model that tries to predict words given the context of
a few words before and a few words after the target word. This is
distinct from language modeling, since CBOW is not sequential and does
not have to be probabilistic. Typically, CBOW is used to quickly train
word embeddings, and these embeddings are used to initialize the
embeddings of some more complicated model. Usually, this is referred to
as *pretraining embeddings*. It almost always helps performance a couple
of percent.
The CBOW model is as follows. Given a target word $w_i$ and an
$N$ context window on each side, $w_{i-1}, \dots, w_{i-N}$
and $w_{i+1}, \dots, w_{i+N}$, referring to all context words
collectively as $C$, CBOW tries to minimize
\begin{align}-\log p(w_i | C) = -\log \text{Softmax}(A(\sum_{w \in C} q_w) + b)\end{align}
where $q_w$ is the embedding for word $w$.
Implement this model in Pytorch by filling in the class below. Some
tips:
* Think about which parameters you need to define.
* Make sure you know what shape each operation expects. Use .view() if you need to
reshape.
```
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
EMBEDDING_DIM = 10
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
context = [raw_text[i - 2], raw_text[i - 1],
raw_text[i + 1], raw_text[i + 2]]
target = raw_text[i]
data.append((context, target))
print(data[:5])
class CBOW(nn.Module):
def __init__(self, vocab_size, embedding_dim):
super(CBOW, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear = nn.Linear(embedding_dim, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs)
# print("embeds: ", embeds)
qsum = torch.sum(embeds, dim=0)
# print("qsum: ", qsum)
out = self.linear(qsum)
# print("out: ", out)
log_probs = F.log_softmax(out, dim=0)
# print("log probs: ", log_probs)
return log_probs
# create your model and train. here are some functions to help you make
# the data ready for use by your module
def make_context_vector(context, word_to_ix):
idxs = [word_to_ix[w] for w in context]
return torch.tensor(idxs, dtype=torch.long)
context_vector = make_context_vector(data[0][0], word_to_ix) # example
print("context vector: ", context_vector)
losses = []
loss_function = nn.NLLLoss()
model = CBOW(len(vocab), EMBEDDING_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(10):
total_loss = 0
for context, target in data:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
# context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
context_idxs = make_context_vector(context, word_to_ix)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
        # loss_function expects a minibatch dimension - here the batch size is 1
loss = loss_function(log_probs.unsqueeze(0), torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print(losses) # The loss decreased every iteration over the training data!
```
| true |
code
| 0.737991 | null | null | null | null |
|
# Online prediction for radon-small
In online mode, the model is learning as soon as a new data arrives.
It means that when we want our prediction we don't need to provide feature vector,
since all data was already processed by the model.
Explore the following models:
* Constant model - The same value for all future points
* Previous day model - Next day is the same as the previous day
* Daily Pattern model - Calculate daily pattern from historical data. Use it as next day prediction.
```
import datetime
import calendar
import pprint
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['figure.figsize'] = 12, 4
```
# Load project
```
project_folder = '../../datasets/radon-small/'
with open(project_folder + 'project.json', 'r') as file:
project = json.load(file)
pprint.pprint(project)
print('Flow1')
flow = pd.read_csv(project_folder + 'flow1.csv', parse_dates=['time'])
flow = flow.set_index('time')['flow'].fillna(0)
flow = flow.resample('5T').pad()
flow.head()
```
## Helper functions
Helper functions for building training and test sets and calculating score
```
class PredictionModel:
def fit(self, data_points):
pass
def predict(self, prediction_day):
pass
def mae(y_hat, y):
"""
Calculate Mean Absolute Error
    This metric is better here since the series has quite big outliers
"""
return np.sum(np.absolute(y_hat-y))/y.shape[0]
def split_data(split_day):
"""Get all data up to given day"""
end_day = split_day - pd.Timedelta('1 min')
return flow[:end_day]
def evaluate_day(model, split_day):
"""Evaluate data for single day"""
xs = split_data(split_day)
next_day = split_day + pd.Timedelta(1, 'D')
y = flow[next_day: next_day+pd.Timedelta('1439 min')]
model.fit(xs)
y_hat = model.predict(next_day)
return mae(y_hat, y)
def evaluate_model(model, start_day):
"""
Evaluate model on all days starting from split_day.
Returns 90th percentile error as model score
"""
last_day = pd.Timestamp(project['end-date'])
split_day = start_day
costs = []
while split_day < last_day:
cost = evaluate_day(model, split_day)
costs.append(cost)
split_day += pd.Timedelta(1, 'D')
return np.percentile(costs, 90), costs
split_data(pd.Timestamp('2016-11-10')).tail()
```
# Models
## ConstantMeanModel
```
class ConstantMeanModel(PredictionModel):
def __init__(self):
self.mu = 0
def fit(self, xs):
self.mu = np.mean(xs)
def predict(self, day):
return np.ones(12*24) * self.mu
score, costs = evaluate_model(ConstantMeanModel(), pd.Timestamp('2016-11-11'))
print('ConstantMeanModel score: {:.2f}'.format(score))
```
## Previous Day Model
Uses values from last day
```
class LastDayModel(PredictionModel):
def fit(self, xs):
self.y = xs.values[-288:]
def predict(self, day):
return self.y
score, costs = evaluate_model(LastDayModel(), pd.Timestamp('2016-11-11'))
print('LastDayModel score: {:.2f}'.format(score))
```
Evaluate the model for a single day (an easy case):
```
evaluate_day(LastDayModel(), pd.Timestamp('2016-11-11'))
```
And for a case where the next day is an outlier:
```
evaluate_day(LastDayModel(), pd.Timestamp('2017-05-01'))
```
## Daily Pattern model
Create a pattern of daily usage based on historical data and use this pattern to predict the next values.
(This can take up to 10 minutes to calculate)
```
class DailyPatternModel(PredictionModel):
def fit(self, xs):
df = flow.to_frame().reset_index()
self.daily_pattern = df.groupby(by=[df.time.map(lambda x : (x.hour, x.minute))]).flow.mean().values
def predict(self, day):
return self.daily_pattern
score, costs = evaluate_model(DailyPatternModel(), pd.Timestamp('2016-11-11'))
print('DailyPatternModel score: {:.2f}'.format(score))
```
### Daily Pattern Median Model
Calculate the median value for each time of day and use it as the prediction for the next day.
```
class DayMedianModel(PredictionModel):
def fit(self, xs):
df = flow.to_frame().reset_index()
self.daily_pattern = df.groupby(by=[df.time.map(lambda x : (x.hour, x.minute))]).flow.median().values
def predict(self, day):
return self.daily_pattern
score, costs = evaluate_model(DayMedianModel(), pd.Timestamp('2016-11-11'))
print('DayMedianModel score: {:.2f}'.format(score))
```
## Daily pattern with last value correction
This model calculates the daily pattern, but also corrects it based on the previous value:
$$ x_{t} = \alpha (x_{t-1} - dp(t-1)) + dp(t)$$
where
- $dp$ - the daily pattern
- $\alpha$ - the weight given to the deviation of the previous value from its pattern value
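A minimal sketch of how this could be implemented with the `PredictionModel` interface used above (the value of `alpha` is an arbitrary choice here, and each prediction is fed back in as the previous value for the next step):
```
class DailyPatternLastValueModel(PredictionModel):
    """Sketch of the daily pattern model with last-value correction."""
    def __init__(self, alpha=0.5):  # alpha chosen arbitrarily for this sketch
        self.alpha = alpha

    def fit(self, xs):
        df = xs.to_frame().reset_index()
        self.daily_pattern = df.groupby(
            by=[df.time.map(lambda x: (x.hour, x.minute))]).flow.mean().values
        self.last_value = xs.values[-1]                # last observation before the prediction day
        self.last_pattern_value = self.daily_pattern[-1]

    def predict(self, day):
        y_hat = np.zeros(12 * 24)
        prev_x, prev_dp = self.last_value, self.last_pattern_value
        for t in range(12 * 24):
            # x_t = alpha * (x_{t-1} - dp(t-1)) + dp(t)
            y_hat[t] = self.alpha * (prev_x - prev_dp) + self.daily_pattern[t]
            prev_x, prev_dp = y_hat[t], self.daily_pattern[t]
        return y_hat

score, costs = evaluate_model(DailyPatternLastValueModel(), pd.Timestamp('2016-11-11'))
print('DailyPatternLastValueModel score: {:.2f}'.format(score))
```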
| true |
code
| 0.642012 | null | null | null | null |
|
## Exploratory data analysis of Dranse discharge data
Summary: The data is stationary even without differencing, but ACF and PACF plots show that an hourly first-order difference and a periodic 24h first-order difference are needed for SARIMA fitting.
Note: Final fitting done in Google Colab due to memory constraints - this notebook will throw some errors
## SARIMAX model fitting
### 1.) Loading the river flow (discharge) data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from river_forecast.training_data_access import get_combined_flow
flow_df = get_combined_flow()
plt.plot(flow_df.index, flow_df)
```
### Exploratory Analysis
```
subset_df = flow_df.loc[:]
subset_df['year'] = subset_df.index.year
subset_df['offset_datetime'] = subset_df.index + pd.DateOffset(year=2019)
sns.set(style="whitegrid")
sns.set(rc={'figure.figsize':(15, 8)})
ax = sns.lineplot(x='offset_datetime', y='discharge', hue='year', data=subset_df, markers='')
import matplotlib.dates as mdates
myFmt = mdates.DateFormatter('%b')
ax.get_xaxis().set_major_formatter(myFmt)
ax.set_xlabel('Month')
ax.set_ylabel('Discharge (m^3/s)')
```
### train-test split
```
import statsmodels.api as sm
train = flow_df.loc[flow_df.index < pd.to_datetime('2019-01-01 00:00:00')]
test = flow_df.loc[(flow_df.index >= pd.to_datetime('2019-01-01 00:00:00')) & (flow_df.index < pd.to_datetime('2019-07-01 00:00:00'))]
fig, ax = plt.subplots()
train.plot(ax=ax, label='train')
test.plot(ax=ax, label='test')
plt.legend()
plt.show()
```
### Time series stationarity analysis
```
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
def tsplot(y, lags=None, figsize=(12, 7), style='bmh'):
"""
Plot time series, its ACF and PACF, calculate Dickey–Fuller test
-> Adapted from https://gist.github.com/DmitrySerg/14c1af2c1744bb9931d1eae6d9713b21
y - timeseries
lags - how many lags to include in ACF, PACF calculation
"""
if not isinstance(y, pd.Series):
y = pd.Series(y)
with plt.style.context(style):
fig = plt.figure(figsize=figsize)
layout = (2, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
y.plot(ax=ts_ax)
t_statistic, p_value = sm.tsa.stattools.adfuller(y)[:2]
ts_ax.set_title('Time Series Analysis Plots\n Dickey-Fuller: p={0:.5f}'.format(p_value))
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
plt.tight_layout()
```
#### Augmented Dickey-Fuller test to check for stationarity
```
flow = flow_df['discharge']
flow_diff_1 = (flow - flow.shift(1)).dropna()
flow_diff_1_24 = (flow_diff_1 - flow_diff_1.shift(24)).dropna()
flow_diff_24 = (flow - flow.shift(24)).dropna()
tsplot(flow, lags=24*5, figsize=(12, 7))
tsplot(flow_diff_1, lags=24*5, figsize=(12, 7))
tsplot(flow_diff_1_24, lags=24*7, figsize=(12, 7))
tsplot(flow_diff_1_24, lags=12, figsize=(12, 7))
```
#### Fitting SARIMAX
```
train['discharge'].plot()
from statsmodels.tsa.statespace.sarimax import SARIMAX
### Crashed again upon completion, make sure the time series is ok -> computation moved to Colab
# Create a SARIMAX model
model = SARIMAX(train['discharge'], order=(4,1,1), seasonal_order=(0,1,1,24))
# p - try 0, 1, 2, 3, 4; q is clearly one. Q is clearly 1, P is tapering off: 0.
# Fit the model
results = model.fit()
import pickle
pickle.dump(results.params, open('../models/sarimax_211_011-24_model-parameters.pkl', 'wb'))
### # load model
### loaded = ARIMAResults.load('model.pkl')
results = pickle.load(open('../models/sarimax_211_011-24_model.pkl', 'rb'))
pwd
# Print the results summary
print(results.summary())
results
```
#### Plotting the forecast
```
# Generate predictions
one_step_forecast = results.get_prediction(start=-48)
# Extract prediction mean
mean_forecast = one_step_forecast.predicted_mean
# Get confidence intervals of predictions
confidence_intervals = one_step_forecast.conf_int()
# Select lower and upper confidence limits
lower_limits = confidence_intervals.loc[:, 'lower discharge']
upper_limits = confidence_intervals.loc[:, 'upper discharge']
# plot the dranse data
# plot your mean predictions
plt.plot(mean_forecast.index, mean_forecast, color='r', label='forecast')
# shade the area between your confidence limits
plt.fill_between(lower_limits.index, lower_limits,
upper_limits, color='pink')
# set labels, legends and show plot
plt.xlabel('Date')
plt.ylabel('Discharge')
plt.title('hourly forecast')
plt.legend()
plt.show()
# Generate predictions
dynamic_forecast = results.get_prediction(start=-6, dynamic=True)
# Extract prediction mean
mean_forecast = dynamic_forecast.predicted_mean
# Get confidence intervals of predictions
confidence_intervals = dynamic_forecast.conf_int(alpha=0.32) # 68 percent (one-sigma) confidence interval
# Select lower and upper confidence limits
lower_limits = confidence_intervals.loc[:,'lower discharge']
upper_limits = confidence_intervals.loc[:,'upper discharge']
# plot your mean predictions
plt.plot(mean_forecast.index, mean_forecast, color='r', label='forecast')
# shade the area between your confidence limits
plt.fill_between(lower_limits.index, lower_limits,
upper_limits, color='pink', alpha=0.5)
# set labels, legends and show plot
plt.xlabel('Date')
plt.ylabel('Discharge')
plt.title('dynamic forecast')
plt.legend()
```
#### Finding the best model manually
```
# Create empty list to store search results
order_aic_bic=[]
# Loop over p values from 0-2
for p in range(0, 5):
print(p)
# create and fit ARMA(p,q) model
model = SARIMAX(train['discharge'], order=(p,1,1), seasonal_order=(0,1,1,24))
    # p - try 0, 1, 2, 3, 4; q is clearly one. Q is clearly 1, P is tapering off: 0.
results = model.fit()
# Append order and results tuple
order_aic_bic.append((p,results.aic, results.bic))
# Construct DataFrame from order_aic_bic
order_df = pd.DataFrame(order_aic_bic,
columns=['p', 'AIC', 'BIC'])
# Print order_df in order of increasing AIC
print(order_df.sort_values('AIC'))
# Print order_df in order of increasing BIC
print(order_df.sort_values('BIC'))
# Create the 4 diagostics plots
results.plot_diagnostics()
plt.show()
# Print summary
print(results.summary())
```
### Forecasting
```
results.forecast(steps=6)
resB.forecast(steps=6)
import river_forecast.api_data_access
import importlib, sys
importlib.reload(river_forecast.api_data_access)
rivermap_data = river_forecast.api_data_access.RivermapDataRetriever()
recent_flow_df = rivermap_data.get_latest_river_flow(n_days=3, station='Dranse')
recent_flow_df
modelB = SARIMAX(recent_flow_df.iloc[:2].asfreq('h'), order=(4,1,1), seasonal_order=(0,1,1,24))
resB = modelB.smooth(results.params)
resB.forecast(steps=6)
from river_forecast.api_data_access import RivermapDataRetriever
data = RivermapDataRetriever().get_standard_dranse_data()
data
import importlib
import river_forecast.forecast
importlib.reload(river_forecast.forecast)
sf = river_forecast.forecast.SARIMAXForecast()
sf.generate_prediction_plot(data)
sf.dynamic_forecast(data)
```
| true |
code
| 0.607197 | null | null | null | null |
|
```
import pandas as pd
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.inspection import plot_partial_dependence
from dtreeviz.trees import *
import scipy as sp
from scipy.cluster import hierarchy as hc
import sys
sys.path.append('..')
from fpl_predictor.util import *
# path to project directory
path = Path('../')
# read in training dataset
train_df = pd.read_csv(path/'fpl_predictor/data/train_v8.csv',
index_col=0,
dtype={'season':str,
'squad':str,
'comp':str})
```
## Random Forest
Random Forest is an ensemble tree-based predictive algorithm. In this case we will be using it for regression - we want to predict a continuous number, predicted points, for each player each game. It works by training many separate decision trees, each using a subset of the training data, and outputs the average prediction across all trees.
Applying it to a time series problem, where metrics from recent time periods can be predictive, requires us to add in window features (e.g. points scored last gameweek). These are created using the player_lag_features function from 00_fpl_features.
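Before building the real lag features, here is a toy sketch of how a window feature can be constructed with a grouped shift (the player names, numbers and column names below are made up; the real features come from `player_lag_features` and `team_lag_features`):
```
import pandas as pd

# Toy example: each player's points from the previous gameweek, via groupby + shift.
toy = pd.DataFrame({
    'player': ['Salah', 'Salah', 'Salah', 'Kane', 'Kane', 'Kane'],
    'gw': [1, 2, 3, 1, 2, 3],
    'total_points': [12, 2, 9, 6, 11, 3],
})
toy = toy.sort_values(['player', 'gw'])
toy['total_points_last_1'] = toy.groupby('player')['total_points'].shift(1)
print(toy)
```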
```
# add a bunch of player lag features
lag_train_df, team_lag_vars = team_lag_features(train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
lag_train_df, player_lag_vars = player_lag_features(lag_train_df, ['total_points'], ['all', 1, 2, 3, 4, 5, 10])
```
Similar to the simple model, we'll set the validation period to be gameweeks 20-25 of the 2020/21 season - the model will be trained on all data prior to that period. This time, however, we'll be using some additional features: the season, gameweek, player position, home/away, and both teams, as well as all the lag features we created above.
```
# set validaton point/length and categorical/continuous variables
valid_season = '2021'
valid_gw = 20
valid_len = 6
cat_vars = ['season', 'position', 'was_home', 'team', 'opponent_team']
cont_vars = ['gw']#, 'minutes']
dep_var = ['total_points']
```
Some of the features have an order (e.g. the 2019/20 season comes after the 2018/19 season) whereas others do not (position). We can set this in the data where appropriate using an ordered category (e.g. 1617 < 1718 < 1819 < 1920 < 2021).
```
# we want to set gw and season as ordered categorical variables
# need lists with ordered categories
ordered_gws = list(range(1,39))
ordered_seasons = ['1617', '1718', '1819', '1920', '2021']
# set as categories with correct order
lag_train_df['gw'] = lag_train_df['gw'].astype('category')
lag_train_df['season'] = lag_train_df['season'].astype('category')
lag_train_df['gw'].cat.set_categories(ordered_gws, ordered=True, inplace=True)
lag_train_df['season'].cat.set_categories(ordered_seasons, ordered=True, inplace=True)
lag_train_df['season']
```
And now we can go ahead and create our training and validation sets using the function we defined in the last notebook.
```
# create dataset with adjusted post-validation lag numbers
train_valid_df, train_idx, valid_idx = create_lag_train(lag_train_df,
cat_vars, cont_vars,
player_lag_vars, team_lag_vars, dep_var,
valid_season, valid_gw, valid_len)
```
The way we calculate our lag features means that there will be null values in our dataset. This will cause an error when using random forest in scikit learn, so we will set them all to zero for now (although note that this may not be the best fill strategy).
```
lag_train_df[~np.isfinite(lag_train_df['total_points_pg_last_1'])]
# imp = SimpleImputer(missing_values=np.nan, strategy='mean')
# need to think about imputing NaN instead of setting to zero
# imp.fit(X_train[team_lag_vars + player_lag_vars])
train_valid_df[team_lag_vars + player_lag_vars] = train_valid_df[team_lag_vars + player_lag_vars].fillna(0)
```
The random forest regressor will only take numbers as inputs, so we need to transform our categorical features into a format the random forest regressor can use: numbers instead of strings, in one or more columns.
```
# split out dependent variable
X, y = train_valid_df[cat_vars + cont_vars + team_lag_vars + player_lag_vars].copy(), train_valid_df[dep_var].copy()
# since position is categorical, it should be a string
X['position'] = X['position'].apply(str)
# need to transform season
enc = LabelEncoder()
X['season'] = enc.fit_transform(X['season'])
X_dict = X.to_dict("records")
# Create the DictVectorizer object: dv
dv = DictVectorizer(sparse=False, separator='_')
# Apply dv on df: df_encoded
X_encoded = dv.fit_transform(X_dict)
X_df = pd.DataFrame(X_encoded, columns=dv.feature_names_)
```
For example, season is now represented by a number (0 -> 2016/17, 1 -> 2017/18, etc.) in a single column, and position is represented by a 1 or 0 in multiple columns.
```
X_df[['season', 'position_1', 'position_2', 'position_3', 'position_4']]
X_df.columns
```
Let's now split out our training set (everything prior to the validation gameweek) and validation set (the 6 gameweeks starting from the validation gameweek; filtering to rows with >0 minutes is left commented out in the code below).
```
# split out training and validation sets
X_train = X_df.loc[train_idx]
y_train = y.loc[train_idx]
X_test = X_df.loc[valid_idx]
# we only want look at rows with >0 minutes (i.e. the player played)
# test_mask = (X_test['minutes'] > 0)
# X_test = X_test[test_mask]
# y_test = y.loc[valid_idx][test_mask]
y_test = y.loc[valid_idx]
# X_train = X_train.drop('minutes', axis=1)
# X_test = X_test.drop('minutes', axis=1)
```
We can now create the RandomForestRegressor with set parameters, train it on the training data, and look at the error on the validation set.
```
# def rf(xs, y, n_estimators=40, max_samples=50_000,
# max_features=0.5, min_samples_leaf=5, **kwargs):
# return RandomForestRegressor(n_jobs=-1, n_estimators=n_estimators,
# max_samples=max_samples, max_features=max_features,
# min_samples_leaf=min_samples_leaf, oob_score=True).fit(xs, y)
def rf(xs, y, max_depth=7, **kwargs):
return RandomForestRegressor(n_jobs=-1, max_depth=max_depth, oob_score=True).fit(xs, y)
# fit training data
m = rf(X_train, y_train.values.ravel())
# predict validation set and output metrics
preds = m.predict(X_test)
print("RMSE: %f" % (r_mse(preds, y_test.values.ravel())))
print("MAE: %f" % mae(preds, y_test.values.ravel()))
```
Right away this looks like a significant improvement on the simple model, good to see. Let's go ahead and use the same approach with validation across the whole of the 2020/21 season.
```
def rf_season(df, valid_season='2021'):
# empty list for scores
scores = []
valid_len = 6
for valid_gw in range(1,40-valid_len):
# create dataset with adjusted post-validation lag numbers
train_valid_df, train_idx, valid_idx = create_lag_train(df, cat_vars, cont_vars,
player_lag_vars, team_lag_vars, dep_var,
valid_season, valid_gw, valid_len)
train_valid_df[team_lag_vars + player_lag_vars] = train_valid_df[team_lag_vars + player_lag_vars].fillna(0)
# split out dependent variable
X, y = train_valid_df[cat_vars + cont_vars + team_lag_vars + player_lag_vars].copy(), train_valid_df[dep_var].copy()
# since position is categorical, it should be a string
X['position'] = X['position'].apply(str)
# need to transform season
enc = LabelEncoder()
X['season'] = enc.fit_transform(X['season'])
X_dict = X.to_dict("records")
# Create the DictVectorizer object: dv
dv = DictVectorizer(sparse=False, separator='_')
# Apply dv on df: df_encoded
X_encoded = dv.fit_transform(X_dict)
X_df = pd.DataFrame(X_encoded, columns=dv.feature_names_)
# split out training and validation sets
X_train = X_df.loc[train_idx]
y_train = y.loc[train_idx]
X_test = X_df.loc[valid_idx]
# we only want look at rows with >0 minutes (i.e. the player played)
# test_mask = (X_test['minutes'] > 0)
# X_test = X_test[test_mask]
# y_test = y.loc[valid_idx][test_mask]
y_test = y.loc[valid_idx]
m = rf(X_train, y_train.values.ravel())
preds, targs = m.predict(X_test), y_test.values.ravel()
gw_mae = mae(preds, targs)
print("GW%d MAE: %f" % (valid_gw, gw_mae))
scores.append(gw_mae)
return scores
scores = rf_season(lag_train_df)
plt.plot(scores)
plt.ylabel('GW MAE')
plt.xlabel('GW')
plt.text(15, 1.55, 'Season Avg MAE: %.2f' % np.mean(scores), bbox={'facecolor':'white', 'alpha':1, 'pad':5})
plt.show()
```
Looking across the whole season we see about a 10% improvement versus the simple model. Also interesting is that the performance again improves as the season progresses - this makes sense, as more data about each of the teams and players (particularly new ones) means an improved ability to predict the next 6 gameweeks.
Let's add these validation scores to our comparison dataset.
```
model_validation_scores = pd.read_csv(path/'charts/model_validation_scores.csv', index_col=0)
model_validation_scores['random_forest'] = scores
model_validation_scores.to_csv(path/'charts/model_validation_scores.csv')
```
A feature of the random forest algorithm is that we can see how often features are used in the trees. This gives us an indication of how important each feature is, i.e. how predictive it is of total points scored. Simpler models are usually better, so this also gives us a way of seeing whether there are any features that are not particularly useful and can therefore be removed.
```
def rf_feat_importance(m, df):
return pd.DataFrame({'cols':df.columns, 'imp':m.feature_importances_}
).sort_values('imp', ascending=False)
fi = rf_feat_importance(m, X_train)
fi[:32]
def plot_fi(fi):
return fi.plot('cols', 'imp', 'barh', figsize=(12,7), legend=False).invert_yaxis()
plot_fi(fi[:30]);
```
At the moment this algorithm is given minutes played in the gameweek so it's unsurprising that this is by far the most important feature - the more minutes a player plays, the more opportunity to score points. But strictly speaking we don't actually have this information prior to a gameweek (in practice it is estimated using previous minutes and injury status), so we can ignore it for now.
Below that the top features are:
1. minutes_last_1 - number of minutes in the last fixture for the player
2. minutes_last_2 - number of minutes in the last two fixtures for the player
3. total_points_pg_last_all - the player's average points per game in all of history (since start of 2016/17 season)
4. total_points_team_pg_last_all_opponent - the opposition's average points per game in all of history
5. minutes_last_3 - number of minutes in the last three fixtures for the player
6. total_points_team_pg_last_all - the player's team's average points per game in all of history
7. total_points_pg_last_10 - the player's average points per game in the last 10 fixtures
8. total_points_pg_last_1 - the player's average points per game in the last fixture
This is interesting. It seems to be saying that the amount of minutes a player has played recently and their underlying ability to score points in all of history, along with their team's and opponent team's points scoring in all of history, is most important.
Recent performance (i.e. 'form') is also important, but to a lesser extent.
It also shows that the lag features are far more useful than the categorical features such as team, opponent and position. Again, this is not too surprising, since information on these categories is already captured in the lag features.
Let's test this... we can remove anything with a feature importance of less than 0.005 and see how the model performs on the original 2019/20 week 20 validation point (going from 94 features to just 32).
```
to_keep = fi[fi.imp>0.005].cols
len(to_keep)
len(X_train.columns)
X_train_imp = X_train[to_keep]
X_test_imp = X_test[to_keep]
m = rf(X_train_imp, y_train.values.ravel())
mae(m.predict(X_test_imp), y_test.values.ravel())
# mae(m.predict(X_train_imp), y_train.values.ravel())
```
Very similar albeit slightly higher error (less than 1% worse performance) than previously, and still a long way ahead of the simple model.
Continuing our thinking about improving/simplifying the model features, we can also look to see if there are any similar features - quite often we will find that some features are so similar that some of them may be redundant.
The following function determines the similarity between columns in a dataset and visualises it using a dendrogram.
```
def cluster_columns(df, figsize=(10,6), font_size=12):
corr = np.round(sp.stats.spearmanr(df).correlation, 4)
corr_condensed = hc.distance.squareform(1-corr)
z = hc.linkage(corr_condensed, method='average')
fig = plt.figure(figsize=figsize)
hc.dendrogram(z, labels=df.columns, orientation='left', leaf_font_size=font_size)
plt.show()
cluster_columns(X_train_imp)
```
We can see that our lagging features are somewhat similar - absolutely expected since, for example, minutes_last_5 is equal to minutes_last_4 + minutes 5 games ago. They are still different enough to be of value separately, but it does make me wonder whether separating out each historic game in some way (up to a point) would be valuable.
A final useful tool we can use is partial dependency plots. These try to look at the impact of single features on the dependent variable (points scored).
```
fig,ax = plt.subplots(figsize=(12, 3))
plot_partial_dependence(m, X_test_imp, ['total_points_pg_last_all',
'total_points_team_pg_last_all_opponent',
'total_points_pg_last_1'],
grid_resolution=20, ax=ax);
```
Again, these make sense. The higher a player's historic points per game (defined as 90 minutes) is, the higher we predict their score will be. Conversely, the higher their opposition's historic points per game, the harder they are as an opponent and the lower their predicted score will be.
Looking at the player's most recent game, again the higher their score, the more it will push up our prediction (the impact of their 'form'), but the relationship is far weaker than the player's underlying per minute scoring stats.
Here we have just looked at features in isolation; there will be lots of interactions going on between features that improve performance. For example, a player may have a high 'total_points_pg_last_1' from the previous fixture but only played 5 minutes in total - in this case the algorithm is likely to have learned that a high 'total_points_pg_last_1' coupled with a low 'minutes_last_1' is not an indicator that the player will score highly in the next fixture.
Ok, now we can move onto the next algorithm - xgboost.
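Before moving on, here is a purely illustrative sketch of the kind of gradient-boosted model we'll look at next, assuming the `xgboost` package is installed and reusing the `X_train`/`X_test` matrices built above (the parameters are arbitrary placeholders, not tuned values):
```
from xgboost import XGBRegressor

# Sketch only: a gradient-boosted alternative to the random forest on the same features
xgb = XGBRegressor(n_estimators=200, max_depth=6, learning_rate=0.1, n_jobs=-1)
xgb.fit(X_train, y_train.values.ravel())
print("MAE: %f" % mae(xgb.predict(X_test), y_test.values.ravel()))
```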
# Heikin-Ashi PSAR Strategy
_Roshan Mahes_
In this tutorial, we implement the so-called _Parabolic Stop and Reverse (PSAR)_ strategy. Given any stock, currency or commodity, this indicator tells us whether to buy or sell at any given time. The momentum strategy is based on the open, high, low and close price for each time period. This can be represented with a traditional Japanese candlestick chart. Later on, we apply the PSAR strategy on so-called Heikin-Ashi ('average bar') data, which reduces some noise, making it easier to identify trends.
The following packages are required:
```
%pip install pandas
%pip install yfinance
%pip install plotly
```
Now we can import the following modules:
```
import os
import pandas as pd
import yfinance as yf
import plotly.graph_objects as go
```
This strategy works on any stock. In this notebook, we take the stock of Apple, represented by the ticker symbol AAPL. Let's download the pricing data and plot a (Japanese) candlestick chart:
```
symbol = 'AAPL'
df = yf.download(symbol, start='2020-01-01')
df.index = df.index.strftime('%Y-%m-%d') # format index as dates only
candles = go.Candlestick(x=df.index, open=df.Open, high=df.High, low=df.Low, close=df.Close)
# plot figure
fig = go.Figure(candles)
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'Japanese Candlestick Chart ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
## The PSAR Indicator
The _Parabolic Stop and Reverse (PSAR) indicator,_ developed by J. Wells Wilder, is a momentum indicator used by traders to determine trend direction and potential reversals in price. It is a trend-following (lagging) indicator that uses a trailing stop and reverse method called SAR (Stop and Reverse), to identify suitable exit and entry points. The concept draws on the idea that 'time is the enemy', i.e., unless a security can continue to generate more profits over time, it should be liquidated.
The PSAR indicator appears on a chart as a series of dots, either above or below an asset's price, depending on the direction the price is moving. A dot is placed below the price when it is trending upward, and above the price when it is trending downward. There is a dot for every price bar, hence the indicator is always producing information.
The parabolic SAR is calculated almost independently for each trend in the price. When the price is in an uptrend, the SAR emerges below the price and converges upwards towards it. Similarly, on a downtrend, the SAR emerges above the price and converges downwards. At each step within a trend, the SAR is calculated one period in advance, i.e., tomorrow's SAR value is built using data available today. The general formula used for this is:
\begin{align*}
SAR_t = SAR_{t-1} + \alpha_t (EP_t - SAR_{t-1}),
\end{align*}
where $SAR_t$ is the SAR value at time $t$.
The _extreme point_ $EP$ is a record kept during each trend that represents the highest value reached by the price during the current uptrend, or lowest value during a downtrend. During each period, if a new maximum (or minimum) is observed, the EP is updated with that value.
The $\alpha$ value is the _acceleration factor._ Usually, this is initially set to a value of $0.02$. The factor is increased by $0.02$ each time a new EP is recorded. The rate will then quicken to a point where the SAR converges towards the price. To prevent it from getting too large, a maximum value for the acceleration factor is normally set to $0.20$. Generally, it is preferable in stocks to set the acceleration factor to $0.01$ so that it is not too sensitive to local decreases, whereas for commodity or currency trading the preferred value is $0.02$.
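To make the update rule concrete, here is a single hand-computed step of the general formula (the numbers are purely illustrative and not taken from any real chart):
```
# Illustrative single SAR update during an uptrend (made-up numbers)
SAR_prev = 100.0   # yesterday's SAR value
EP = 110.0         # extreme point: highest price seen in the current uptrend
alpha = 0.04       # acceleration factor after one EP update (0.02 + 0.02)
SAR_next = SAR_prev + alpha * (EP - SAR_prev)
print(SAR_next)    # 100 + 0.04 * 10 = 100.4
```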
There are special cases that modify the SAR value:
1. If the next period's SAR value is inside (or beyond) the current period's or the previous period's price range, the SAR must be set to the closest price bound. For example, in an upward trend, if the newly calculated SAR value turns out to be greater than today's or yesterday's lowest price, it must be set equal to that lower boundary.
2. If the next period's SAR value is inside (or beyond) the next period's price range, a new trend direction is then signaled. The SAR must then switch sides.
3. Upon a trend switch, the first SAR value for this new trend is set to the last $EP$ recorded on the prior trend. Then, the $EP$ is reset accordingly to this period's maximum, and the acceleration factor is reset to its initial value of $0.01$ (stocks) or $0.02$ (commodities/currencies).
As we can see, it's quite a difficult strategy as the formulas are not that straightforward. We have implemented it in the following function:
```
def PSAR(df, alpha_start=0.01):
"""
Returns the dataframe with the given PSAR indicator for each time period.
"""
trend = 0
alpha = alpha_start
SAR = [df['Open'][0]] + [0] * (len(df) - 1)
isUpTrend = lambda x: x > 0
trendSwitch = lambda x: abs(x) == 1
# initialisation
if df['Close'][1] > df['Close'][0]:
trend = 1
SAR[1] = df['High'][0]
EP = df['High'][1]
else:
trend = -1
SAR[1] = df['Low'][0]
EP = df['Low'][1]
# recursion
for t in range(2,len(df)):
# general formula
SAR_new = SAR[t-1] + alpha * (EP - SAR[t-1])
# case 1 & 2
if isUpTrend(trend):
SAR[t] = min(SAR_new, df['Low'][t-1], df['Low'][t-2])
if SAR[t] > df['Low'][t]:
trend = -1
else:
trend += 1
else:
SAR[t] = max(SAR_new, df['High'][t-1], df['High'][t-2])
if SAR[t] < df['High'][t]:
trend = 1
else:
trend -= 1
# case 3
if trendSwitch(trend):
SAR[t] = EP
alpha = alpha_start
if isUpTrend(trend):
EP_new = df['High'][t]
else:
EP_new = df['Low'][t]
else:
if isUpTrend(trend):
EP_new = max(df['High'][t], EP)
else:
EP_new = min(df['Low'][t], EP)
if EP != EP_new:
alpha = min(alpha + 0.02, 0.20)
# update EP
EP = EP_new
# store values
df['SAR'] = SAR
df['Signal'] = (df['SAR'] < df['Close']).apply(int).diff() # records trend switches
return df
```
After applying the PSAR strategy on Apple's stock, we end up with the following trading decisions:
```
# apply PSAR
df = PSAR(df)
# extract trend switches (buying/selling advice)
buy = df.loc[df['Signal'] == 1]
sell = df.loc[df['Signal'] == -1]
# candles & psar
candles = go.Candlestick(x=df.index, open=df.Open, high=df.High, low=df.Low, close=df.Close, name='candles')
psar = go.Scatter(x=df.index, y=df['SAR'], mode='markers', name='PSAR', line={'width': 10, 'color': 'midnightblue'})
# buy & sell symbols
buys = go.Scatter(x=buy.index, y=buy.Close, mode='markers', marker_size=15, marker_symbol=5,
marker_color='green', name='Buy', marker_line_color='black', marker_line_width=1)
sells = go.Scatter(x=sell.index, y=sell.Close, mode='markers', marker_size=15, marker_symbol=6,
marker_color='red', name='Sell', marker_line_color='black', marker_line_width=1)
# plot figure
fig = go.Figure(data=[candles, psar, buys, sells])
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'PSAR indicator ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
We see that most of the time our indicator predicted a correct trend! Instead of using the open, high, low and close data, represented by this traditional candlestick chart, we can also apply the PSAR strategy on so-called _Heikin-Ashi charts_.
## Heikin-Ashi Charts
_Heikin-Ashi_ means 'average bar' in Japanese. Heikin-Ashi charts, developed by Munehisa Homma in the 1700s, display prices that, at a glance, look similar to a traditional Japanese chart. The Heikin-Ashi technique averages price data to create a Japanese candlestick chart that filters out market noise. Instead of using the open, high, low, and close like standard candlestick charts, the Heikin-Ashi technique uses a modified formula based on two-period averages. This gives the chart a smoother appearance, making it easier to spot trends and reversals, but it also obscures gaps and some price data.
The formulas are as follows:
\begin{align*}
H_{open,t} &= \frac{H_{open,t-1} + H_{close,t-1}}{2}, \\
H_{close,t} &= \frac{C_{open,t} + C_{high,t} + C_{low,t} + C_{close,t}}{4}, \\
H_{high,t} &= \max\{H_{open,t}, H_{close,t}, C_{high,t}\}, \\
H_{low,t} &= \min\{H_{open,t}, H_{close,t}, C_{low,t}\},
\end{align*}
with initial condition $H_{open, 0} = C_{open,0}$. Here, $H_{open,t}$ is the opening value in the Heikin-Ashi chart at time $t \in \mathbb{N}_0$, and $C_{open,t}$ is the opening value of the stock itself, as used in the traditional Japanese candlestick chart (and similarly for the other quantities).
In the following function we transform a given dataframe of stock prices to a Heikin-Ashi one.
```
def heikin_ashi(df):
"""
Converts a dataframe according to the Heikin-Ashi.
"""
df_HA = pd.DataFrame(index=df.index, columns=['Open', 'High', 'Low', 'Close'])
df_HA['Open'][0] = df['Open'][0]
df_HA['Close'] = (df['Open'] + df['High'] + df['Low'] + df['Close']) / 4
for t in range(1,len(df)):
df_HA.iat[t,0] = (df_HA['Open'][t-1] + df_HA['Close'][t-1]) / 2 # change H_open without warnings
df_HA['High'] = df_HA[['Open', 'Close']].join(df['High']).max(axis=1)
df_HA['Low'] = df_HA[['Open', 'Close']].join(df['Low']).min(axis=1)
return df_HA
```
Let's convert the Apple's (Japanese) candlestick chart to a Heikin-Ashi chart:
```
df_HA = heikin_ashi(df)
candle = go.Candlestick(x=df_HA.index, open=df_HA['Open'], high=df_HA['High'], low=df_HA['Low'], close=df_HA['Close'])
# plot figure
fig = go.Figure(candle)
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'Heikin-Ashi Chart ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
As we can see, the Heikin-Ashi technique can be used to identify a trend more easily. Because the Heikin-Ashi technique smooths price information over two periods, it makes trends, price patterns, and reversal points easier to spot. Candles on a traditional candlestick chart frequently change from up to down, which can make them difficult to interpret. Heikin-Ashi charts typically have more consecutive colored candles, helping traders to identify past price movements easily.
The Heikin-Ashi technique reduces false trading signals in sideways and choppy markets to help traders avoid placing trades during these times. For example, instead of getting two false reversal candles before a trend commences, a trader who uses the Heikin-Ashi technique is likely only to receive the valid signal.
## Heikin-Ashi PSAR indicator
It is straightforward to apply the PSAR strategy on our Heikin-Ashi data:
```
# apply PSAR
df = PSAR(df_HA)
# extract trend switches (buying/selling advice)
buy = df.loc[df['Signal'] == 1]
sell = df.loc[df['Signal'] == -1]
# candles & psar
candles = go.Candlestick(x=df.index, open=df.Open, high=df.High, low=df.Low, close=df.Close, name='candles')
psar = go.Scatter(x=df.index, y=df['SAR'], mode='markers', name='PSAR', line={'width': 10, 'color': 'midnightblue'})
# buy & sell symbols
buys = go.Scatter(x=buy.index, y=buy.Close, mode='markers', marker_size=15, marker_symbol=5,
marker_color='green', name='Buy', marker_line_color='black', marker_line_width=1)
sells = go.Scatter(x=sell.index, y=sell.Close, mode='markers', marker_size=15, marker_symbol=6,
marker_color='red', name='Sell', marker_line_color='black', marker_line_width=1)
# plot figure
fig = go.Figure(data=[candles, psar, buys, sells])
fig.layout.xaxis.type = 'category' # remove weekend days
fig.layout.xaxis.dtick = 20 # show x-axis ticker once a month
fig.layout.xaxis.rangeslider.visible = False
fig.layout.title = f'Heikin-Ashi PSAR indicator ({symbol})'
fig.layout.template = 'plotly_white'
fig.show()
```
In this case, there are only small differences. In fact, on only one date is the Heikin-Ashi SAR value different from the traditional SAR value. This might change when clear trends are less visible, so feel free to try other stocks!
```
# Binary representation ---> Microsoft
# Difficulty: School Marks: 0
'''
Write a program to print Binary representation of a given number N.
Input:
The first line of input contains an integer T, denoting the number of test cases. Each test case contains an integer N.
Output:
For each test case, print the binary representation of the number N in 14 bits.
Constraints:
1 ≤ T ≤ 100
1 ≤ N ≤ 5000
Example:
Input:
2
2
5
Output:
00000000000010
00000000000101
'''
for _ in range(int(input())):
n=int(input())
x=bin(n).split('b')[1]
print('0'*(14-len(x))+x)
# Alone in couple ---> Ola Cabs
# Difficulty: School Marks: 0
'''
At a party, everyone is in a couple except one person. People who are in a couple have the same number. Find the person who is not in a couple.
Input:
The first line contains an integer 'T' denoting the total number of test cases. In each test cases, the first line contains an integer 'N' denoting the size of array. The second line contains N space-separated integers A1, A2, ..., AN denoting the elements of the array. (N is always odd)
Output:
In each separate line print the number of the person not in a couple.
Constraints:
1<=T<=30
1<=N<=500
1<=A[i]<=500
N%2==1
Example:
Input:
1
5
1 2 3 2 1
Output:
3
'''
for _ in range(int(input())):
    n=int(input())
    a=input().split()
    # the answer is the number that appears an odd number of times
    for i in a:
        if a.count(i)%2==1:
            print(i)
            break
# Count total set bits ---> Amazon,Adobe
# Difficulty: Basic Marks: 1
'''
You are given a number N. Find the total count of set bits for all numbers from 1 to N(both inclusive).
Input:
The first line of input contains an integer T denoting the number of test cases. T testcases follow. The first line of each test case is N.
Output:
For each testcase, in a new line, print the total count of all bits.
Constraints:
1 ≤ T ≤ 100
1 ≤ N ≤ 103
Example:
Input:
2
4
17
Output:
5
35
Explanation:
Testcase1:
An easy way to look at it is to consider the number, n = 4:
0 0 0 = 0
0 0 1 = 1
0 1 0 = 1
0 1 1 = 2
1 0 0 = 1
Therefore , the total number of bits is 5.
'''
for _ in range(int(input())):
n=int(input())
s=0
for i in range(n+1):
s+=bin(i).split('b')[1].count('1')
print(s)
```
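As a hedged aside (not part of the original solutions), the per-number set-bit count can also be done without string conversion using Brian Kernighan's trick, which clears the lowest set bit on each iteration:
```
# Sketch: count set bits by repeatedly clearing the lowest set bit
def count_set_bits(n):
    count = 0
    while n:
        n &= n - 1   # clears the lowest set bit
        count += 1
    return count

print(sum(count_set_bits(i) for i in range(1, 4 + 1)))  # 5, matching the example for N = 4
```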
***IMP***
```
# ------------------------------------------IMP---------------------------------------
"https://practice.geeksforgeeks.org/problems/toggle-bits-given-range/0/?track=sp-bit-magic&batchId=152"
# Toggle bits given range
# Difficulty: Basic Marks: 1
'''
Given a non-negative number N and two values L and R. The problem is to toggle the bits in the range L to R in the binary representation of N, i.e, to toggle bits from the rightmost Lth bit to the rightmost Rth bit. A toggle operation flips a bit 0 to 1 and a bit 1 to 0.
Input:
First line of input contains a single integer T which denotes the number of test cases. Then T test cases follows. First line of each test case contains three space separated integers N, L and R.
Output:
For each test case , print the number obtained by toggling bits from the rightmost Lth bit to the rightmost Rth bit in binary representation of N.
Constraints:
1<=T<=100
1<=N<=1000
1<=L<=R
L<=R<= Number of bits(N)
Example:
Input:
2
17 2 3
50 2 5
Output:
23
44
'''
for _ in range(int(input())):
l=list(map(int,input().split()))
c=0
s1=''
s=bin(l[0])[2:]
n=len(s)
for i in s:
if c>=(n-l[2]) and c<=(n-l[1]):
if i=='0':
s1+='1'
else:
s1+='0'
else:
s1+=i
c+=1
print(int(s1,base=2))
"https://practice.geeksforgeeks.org/problems/set-kth-bit/0/?track=sp-bit-magic&batchId=152"
# Set kth bit ---> Cisco, Qualcomm
# Difficulty: Basic Marks: 1
'''
Given a number N and a value K. From the right, set the Kth bit in the binary representation of N. The position of LSB(or last bit) is 0, second last bit is 1 and so on. Also, 0 <= K < X, where X is the number of bits in the binary representation of N.
Input:
First line of input contains a single integer T, which denotes the number of test cases. T test cases follows. First line of each testcase contains two space separated integers N and K.
Output:
For each test case, print the new number after setting the Kth bit of N.
Constraints:
1 <= T <= 100
1 <= N <= 1000
Example:
Input:
2
10 2
15 3
Output:
14
15
Explanation:
Testcase 1: Binary representation of the given number 10 is: 1 0 1 0, number of bits in the binary reprsentation is 4. Thus 2nd bit from right is 0. The number after changing this bit to 1 is: 14(1 1 1 0).
'''
for _ in range(int(input())):
l=list(map(int,input().split()))
s=bin(l[0])[2:]
s1=''
c=0
if (l[1]+1)>len(s):
s1='0'*(l[1]+1-len(s))+s
s=s1
s1=''
for i in s:
if c==(len(s)-(l[1]+1)):
s1+='1'
else:
s1+=i
c+=1
print(int(s1,2))
"https://practice.geeksforgeeks.org/problems/bit-difference/0/?track=sp-bit-magic&batchId=152"
# Bit Difference ---> Amazon Qualcomm, Samsung
# Difficulty: Basic Marks: 1
'''
You are given two numbers A and B. Write a program to count number of bits needed to be flipped to convert A to B.
Input:
The first line of input contains an integer T denoting the number of test cases. T testcases follow. The first line of each test case is A and B separated by a space.
Output:
For each testcase, in a new line, print the number of bits needed to be flipped.
Constraints:
1 ≤ T ≤ 100
1 ≤ A, B ≤ 103
Example:
Input:
1
10 20
Output:
4
Explanation:
Testcase1:
A = 01010
B = 10100
Number of bits need to flipped = 4
'''
for _ in range(int(input())):
a,c=input().split()
a=bin(int(a))[2:]
c=bin(int(c))[2:]
an=len(a)
cn=len(c)
if an!=cn:
if (an-cn)>0:
c='0'*(an-cn)+c
else:
a='0'*(cn-an)+a
count=0
for i,j in zip(a,c):
if i !=j:
count+=1
print(count)
"https://practice.geeksforgeeks.org/problems/swap-two-nibbles-in-a-byte/0/?track=sp-bit-magic&batchId=152"
# Swap two nibbles in a byte ---> Accolite, Cisco, Amazon, Qualcomm
# Difficulty: Basic Marks: 1
'''
Given a byte, swap the two nibbles in it. For example, 100 is represented as 01100100 in a byte (or 8 bits).
The two nibbles are (0110) and (0100). If we swap the two nibbles, we get 01000110 which is 70 in decimal.
Input:
The first line contains 'T' denoting the number of testcases. Each testcase contains a single positive integer X.
Output:
In each separate line print the result after swapping the nibbles.
Constraints:
1 ≤ T ≤ 70
1 ≤ X ≤ 255
Example:
Input:
2
100
129
Output:
70
24
'''
for _ in range(int(input())):
a=bin(int(input()))[2:]
if len(a)%4!=0:
a='0'*(4-len(a)%4)+a
c=[]
for i in range(1,(len(a)//4)+1):
c.append(a[4*(i-1):4*i])
c=c[::-1]
print(int(''.join(c),2))
```
### [Check whether K-th bit is set or not](https://practice.geeksforgeeks.org/problems/check-whether-k-th-bit-is-set-or-not/0/?track=sp-bit-magic&batchId=152)
- Company Tag: Cisco
- Difficulty: Basic
- Marks: 1
***Given a number N and a bit number K, check if the Kth bit of N is set or not. A bit is called set if it is 1. The position of a set bit '1' should be indexed starting with 0 from the LSB side in the binary representation of the number. Consider N = 4(100): 0th bit = 0, 1st bit = 0, 2nd bit = 1.***
***Input:***\
The first line of input contains an integer T denoting the number of test cases. Then T test cases follow.\
Each test case consists of two lines. The first line of each test case contain an integer N. \
The second line of each test case contains an integer K.\
\
***Output:***\
Corresponding to each test case, print "Yes" (without quotes) if Kth bit is set else print "No" (without quotes) in a new line.\
\
***Constraints:***\
1 ≤ T ≤ 200\
1 ≤ N ≤ 109\
0 ≤ K ≤ floor(log2(N) + 1)\
\
***Example:***\
***Input:***\
3\
4\
0\
4\
2\
500\
3\
\
***Output:***\
No\
Yes\
No\
\
***Explanation:***\
***Testcase 1:*** Binary representation of 4 is 100, in which 0th bit from LSB is not set. So, answer is No.\
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
k=int(input())
if a[(len(a)-1)-k]=='1':
print('Yes')
else:
print('No')
```
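Since this track is about bit manipulation, a purely bitwise alternative (a sketch, equivalent to the string approach above) is to shift and mask:
```
# Sketch: check the Kth bit with a shift and mask instead of string slicing
def kth_bit_set(n, k):
    return 'Yes' if (n >> k) & 1 else 'No'

print(kth_bit_set(4, 0))    # No
print(kth_bit_set(4, 2))    # Yes
print(kth_bit_set(500, 3))  # No
```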
### [Rightmost different bit](https://practice.geeksforgeeks.org/problems/rightmost-different-bit/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***Given two numbers M and N. The task is to find the position of rightmost different bit in binary representation of numbers.***
***Input:***\
The input line contains T, denoting the number of testcases. Each testcase follows. First line of each testcase contains two space separated integers M and N.
***Output:***\
For each testcase in new line, print the position of rightmost different bit in binary representation of numbers. If both M and N are same then print -1 in this case.
***Constraints:***\
1 <= T <= 100\
1 <= M <= 103\
1 <= N <= 103
***Example:***\
***Input:***\
2\
11 9\
52 4
***Output:***\
2\
5
***Explanation:***\
***Testcase 1:*** Binary representation of the given numbers: 1011 and 1001; the 2nd bit from the right is different.
```
for _ in range(int(input())):
a,c=input().split()
a=bin(int(a))[2:]
c=bin(int(c))[2:]
an=len(a)
cn=len(c)
if an!=cn:
if (an-cn)>0:
c='0'*(an-cn)+c
else:
a='0'*(cn-an)+a
k=len(a)
for i in range(k):
if a[k-1-i]!=c[k-1-i]:
print(i+1)
break
else:
print(-1)
```
### [Number is sparse or not](https://practice.geeksforgeeks.org/problems/number-is-sparse-or-not/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***Given a number N, check whether it is sparse or not. A number is said to be a sparse number if in the binary representation of the number no two or more consecutive bits are set.***
***Input:***\
The first line of input contains an integer T denoting the number of test cases. The first line of each test case is number 'N'.
***Output:***\
Print '1' if the number is sparse and '0' if the number is not sparse.
***Constraints:***\
1 <= T <= 100\
1 <= N <= 103
***Example:***\
***Input:***\
2\
2\
3
***Output:***\
1\
0
***Explanation:***\
***Testcase 1:*** Binary representation of 2 is 10, which does not have consecutive set bits. So, it is a sparse number.\
***Testcase 2:*** Binary representation of 3 is 11, which has consecutive set bits. So, it is not a sparse number.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
if a.count('11')>0:
print(0)
else:
print(1)
```
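An equivalent bitwise check (a sketch, not part of the original solution) tests whether the number shares a set bit with itself shifted right by one:
```
# Sketch: a number is sparse iff it has no two adjacent set bits
def is_sparse(n):
    return 1 if (n & (n >> 1)) == 0 else 0

print(is_sparse(2))  # 1
print(is_sparse(3))  # 0
```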
### [Gray Code](https://practice.geeksforgeeks.org/problems/gray-code/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***You are given a decimal number n. You need to find the gray code of the number n and convert it into decimal.
To see how it's done, refer here.***
***Input:***\
The first line contains an integer T, the number of test cases. For each test case, there is an integer n denoting the number
***Output:***\
For each test case, the output is gray code equivalent of n.
***Constraints:***\
1 <= T <= 100\
0 <= n <= 108
***Example:***\
***Input***\
2\
7\
10
***Output***\
4\
15
***Explanation:***\
***Testcase1:*** 7 is represented as 111 in binary form. The gray code of 111 is 100, in the binary form whose decimal equivalent is 4.
***Testcase2:*** 10 is represented as 1010 in binary form. The gray code of 1010 is 1111, in the binary form whose decimal equivalent is 15.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
c=a[0]
for i in range(1,len(a)):
k=(int(a[i])+int(a[i-1]))
if k==0 or k==1:
c+=str(k)
else:
c+='0'
print(int(c,2))
```
### [Gray to Binary equivalent](https://practice.geeksforgeeks.org/problems/gray-to-binary-equivalent/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Basic
- Marks: 1
***Given N in Gray code equivalent. Find its binary equivalent.***
***Input:***\
The first line contains an integer T, number of test cases. For each test cases, there is an integer N denoting the number in gray equivalent.
***Output:***\
For each test case, in a new line, the output is the decimal equivalent number N of binary form.
***Constraints:***\
1 <= T <= 100\
0 <= n <= 108
***Example:***\
***Input***\
2\
4\
15
***Output***\
7\
10
***Explanation:***\
***Testcase1.*** 4 is represented as 100 and its binary equivalent is 111 whose decimal equivalent is 7.\
***Testcase2.*** 15 is represented as 1111 and its binary equivalent is 1010 i.e. 10 in decimal.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
c=a[0]
for i in range(1,len(a)):
k=(int(a[i])+int(c[i-1]))
if k==0 or k==1:
c+=str(k)
else:
c+='0'
print(int(c,2))
```
### [Check if a Integer is power of 8 or not](https://practice.geeksforgeeks.org/problems/check-if-a-integer-is-power-of-8-or-not/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Easy
- Marks: 2
***Given a positive integer N, The task is to find if it is a power of eight or not.***
***Input:***\
The first line of input contains an integer T denoting the number of test cases. Then T test cases follow. Each test case contains an integer N.
***Output:***\
In new line print "Yes" if it is a power of 8, else print "No".
***Constraints:***\
1<=T<=100\
1<=N<=1018
***Example:***\
***Input:***\
2\
64\
75
***Output:***\
Yes\
No
```
for _ in range(int(input())):
n=int(input())
i=1
while 8**i<=n:
i+=1
if 8**(i-1)==n:
print('Yes')
else:
print('No')
```
### [Is Binary Number Multiple of 3](https://practice.geeksforgeeks.org/problems/is-binary-number-multiple-of-3/0/?track=sp-bit-magic&batchId=152)
- Company Tags : Adobe, Amazon, Microsoft
- Difficulty: Medium
- Marks: 4
***Given a binary number, write a program that prints 1 if the given binary number is a multiple of 3, else prints 0. The given number can be as big as 2^100. It is recommended to finish the task using one traversal of the input binary string.***
***Input:***\
The first line contains T denoting the number of testcases. Then follows description of testcases.
Each case contains a string containing 0's and 1's.
***Output:***\
For each test case, output a 1 if string is multiple of 3, else 0.
***Constraints:***\
1<=T<=100\
1<=Length of Input String<=100
***Example:***\
***Input:***\
2\
011\
100
***Output:***\
1\
0
```
for _ in range(int(input())):
n=int(input(),2)
if n%3==0:
print(1)
else:
print(0)
```
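The `int(..., 2)` conversion above works, but the statement recommends a single traversal. A hedged sketch of the classic approach compares the counts of set bits at even and odd positions: since 2 to an even power is 1 mod 3 and 2 to an odd power is -1 mod 3, the number is a multiple of 3 iff that difference is a multiple of 3.
```
# Sketch: one pass over the binary string, tracking set bits at even and odd positions
def is_multiple_of_3(bits):
    even = odd = 0
    for pos, b in enumerate(reversed(bits)):  # pos 0 is the least significant bit
        if b == '1':
            if pos % 2 == 0:
                even += 1
            else:
                odd += 1
    return 1 if (even - odd) % 3 == 0 else 0

print(is_multiple_of_3('011'))  # 1
print(is_multiple_of_3('100'))  # 0
```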
### [Reverse Bits](https://practice.geeksforgeeks.org/problems/reverse-bits/0/?track=sp-bit-magic&batchId=152)
- Company Tags : Amazon, Cisco, HCL, Nvidia, Qualcomm
- Difficulty: Easy
- Marks: 2
***Given a 32 bit number x, reverse its binary form and print the answer in decimal.***
***Input:***\
The first line of input consists T denoting the number of test cases. T testcases follow. Each test case contains a single 32 bit integer
***Output:***\
For each test case, in a new line, print the reverse of integer.
***Constraints:***\
1 <= T <= 100\
0 <= x <= 4294967295
***Example:***\
***Input:***\
2\
1\
5
***Output:***\
2147483648\
2684354560
***Explanation:***\
***Testcase1:***\
00000000000000000000000000000001 =1\
10000000000000000000000000000000 =2147483648
```
for _ in range(int(input())):
a=bin(int(input()))[2:][::-1]
a+='0'*(32-len(a))
print(int(a,2))
```
### [Swap all odd and even bits](https://practice.geeksforgeeks.org/problems/swap-all-odd-and-even-bits/0/?track=sp-bit-magic&batchId=152)
- Difficulty: Easy
- Marks: 2
***Given an unsigned integer N. The task is to swap all odd bits with even bits. For example, if the given number is 23 (00010111), it should be converted to 43(00101011). Here, every even position bit is swapped with adjacent bit on right side(even position bits are highlighted in binary representation of 23), and every odd position bit is swapped with adjacent on left side.***
***Input:***\
The first line of input contains T, denoting the number of testcases. Each testcase contains single line.
***Output:***\
For each testcase in new line, print the converted number.
***Constraints:***\
1 ≤ T ≤ 100\
1 ≤ N ≤ 100
***Example:***\
***Input:***\
2\
23\
2
***Output:***\
43\
1
***Explanation:***\
***Testcase 1:*** Binary representation of the given number: 00010111; after swapping: 00101011.
```
for _ in range(int(input())):
a=bin(int(input()))[2:]
if len(a)%4!=0:
a='0'*(4-len(a)%4)+a
s=''
for i,j in zip(a[1::2],a[::2]):
s=s+i+j
print(int(s,2))
# Sum of bit differences among all pairs (brute force over all ordered pairs using binary strings)
def f(a,c):
a=bin(a)[2:]
c=bin(c)[2:]
an=len(a)
cn=len(c)
if an!=cn:
if (an-cn)>0:
c='0'*(an-cn)+c
else:
a='0'*(cn-an)+a
count=0
for i,j in zip(a,c):
if i !=j:
count+=1
return count
for _ in range(int(input())):
count=0
n=int(input())
a=list(map(int,input().split()))
for i in a:
for j in a:
count+=f(i,j)
print(count)
# Sum of bit differences among all pairs - efficient approach: for each bit position,
# the number of pairs differing at that bit is k * (n - k), where k numbers have the bit set
if __name__ == '__main__':
n = int(input())
while n != 0:
p = int(input())
lis = [int(x) for x in input().split()]
bits = 0
for i in range(0, 32):
k = 0
for j in range(0, len(lis)):
if lis[j] & (1 << i):
k = k + 1
bits += k * (len(lis) - k)
print(2 * bits % 1000000007)
n = n-1
```
### [Bleak Numbers](https://practice.geeksforgeeks.org/problems/bleak-numbers/0/?track=sp-bit-magic&batchId=152)
- Company Tags : SAP Labs
- Difficulty: Medium
- Marks: 4
***Given an integer, check whether it is Bleak or not.***
***A number ‘n’ is called Bleak if it cannot be represented as sum of a positive number x and set bit count in x, i.e., x + [countSetBits(x)](http://www.geeksforgeeks.org/count-set-bits-in-an-integer/) is not equal to n for any non-negative number x.***
***Examples :***
3 is not Bleak as it can be represented
as 2 + countSetBits(2).
4 is Bleak as it cannot be represented
as sum of a number x and countSetBits(x)
for any number x.
***Input:***\
The first line of input contains an integer T denoting the number of test cases. Then T test cases follow. Each test case consists of a single line. The first line of each test case contains a single integer N to be checked for Bleak.
***Output:***\
Print "1" or "0" (without quotes) depending on whether the number is Bleak or not.
***Constraints:***\
1 <= T <= 1000\
1 <= N <= 10000
***Example:***\
***Input:***\
3\
4\
167\
3
***Output:***\
1\
0\
0
```
for _ in range(int(input())):
n=int(input())
    for i in range(n+1):
if (i+bin(i).count('1'))==n:
print(0)
break
else:
print(1)
# Maximum XOR of any pair in an array (brute force over all pairs)
a=list(map(int,input().split()))
xor=0
for i in range(len(a)):
for j in range(i+1,len(a)):
if a[i]^a[j]>xor:
xor=a[i]^a[j]
print(xor)
```
# Generative models - variational auto-encoders
### Author: Philippe Esling ([email protected])
In this course we will cover
1. A [quick recap](#recap) on simple probability concepts (and in TensorFlow)
2. A formal introduction to [Variational Auto-Encoders](#vae) (VAEs)
3. An explanation of the [implementation](#implem) of VAEs
4. Some [modifications and tips to improve the reconstruction](#improve) of VAEs **(exercise)**
<a id="recap"> </a>
## Quick recap on probability
The field of probability aims to model random or uncertain events. Hence, a random variable $X$ denotes a quantity that is uncertain, such as the result of an experiment (flipping a coin) or the measurement of an uncertain property (measuring the temperature). If we observe several occurrences of the variable $\{\mathbf{x}_{i}\}_{i=1}$, it might take different values on each occasion, but some values may occur more often than others. This information is captured by the _probability distribution_ $p(\mathbf{x})$ of the random variable.
To understand these concepts graphically, we will rely on the `Tensorflow Probability` package.
```
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
### Probability distributions
#### Discrete distributions
Let $\mathbf{x}$ be a discrete random variable with range $R_{X}=\{x_1,\cdots,x_n\}$ (finite or countably infinite). The function
\begin{equation}
p_{X}(x_{i})=p(X=x_{i}), \forall i\in\{1,\cdots,n\}
\end{equation}
is called the probability mass function (PMF) of $X$.
Hence, the PMF defines the probabilities of all possible values for a random variable. The above notation allows us to express that the PMF is defined for the random variable $X$, so that $p_{X}(1)$ gives the probability that $X=1$. For discrete random variables, the PMF is also called the _probability distribution_. The PMF is a probability measure; therefore, it satisfies all the corresponding properties
- $0 \leq p_{X}(x_i) < 1, \forall x_i$
- $\sum_{x_i\in R_{X}} p_{X}(x_i) = 1$
- $\forall A \subset R_{X}, p(X \in A)=\sum_{x_a \in A}p_{X}(x_a)$
A very simple example of a discrete distribution is the `Bernoulli` distribution. With this distribution, we can model a coin flip: if we throw the coin a very large number of times, we expect to see, on average, an equal number of _heads_ and _tails_.
```
bernoulli = tfp.distributions.Bernoulli(probs=0.5)
samples = bernoulli.sample(10000)
sns.distplot(samples)
plt.title("Samples from a Bernoulli (coin toss)")
plt.show()
```
However, we can also _sample_ from the distribution to have individual values of a single throw. In that case, we obtain a series of separate events that _follow_ the distribution
```
vals = ['heads', 'tails']
samples = bernoulli.sample(10)
for s in samples:
print('Coin is tossed on ' + vals[s])
```
#### Continuous distributions
The same ideas apply to _continuous_ random variables, which can model for instance the height of human beings. If we try to guess the height of someone that we do not know, there is a higher probability that this person will be around 1m70, instead of 20cm or 3m. For the rest of this course, we will use the shorthand notation $p(\mathbf{x})$ for the distribution $p(\mathbf{x}=x_{i})$, which expresses for a real-valued random variable $\mathbf{x}$, evaluated at $x_{i}$, the probability that $\mathbf{x}$ takes the value $x_i$.
One notorious example of such distributions is the Gaussian (or Normal) distribution, which is defined as
\begin{equation}
p(x)=\mathcal{N}(\mu,\sigma)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}
\end{equation}
Similarly as before, we can observe the behavior of this distribution with the following code
```
normal = tfp.distributions.Normal(loc=0., scale=1.)
samples = normal.sample(10000)
sns.distplot(samples)
plt.title("Samples from a standard Normal")
plt.show()
```
### Comparing distributions (KL divergence)
$
\newcommand{\R}{\mathbb{R}}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\bx}{\bb{x}}
\newcommand{\by}{\bb{y}}
\newcommand{\bz}{\bb{z}}
\newcommand{\KL}[2]{\mathcal{D}_{\text{KL}}\left[#1 \| #2\right]}$
Originally defined in the field of information theory, the _Kullback-Leibler (KL) divergence_ (usually noted $\KL{p(\bx)}{q(\bx)}$) is a dissimilarity measure between two probability distributions $p(\bx)$ and $q(\bx)$. In the view of information theory, it can be understood as the cost in number of bits necessary for coding samples from $p(\bx)$ by using a code optimized for $q(\bx)$ rather than the code optimized for $p(\bx)$. In the view of probability theory, it represents the amount of information lost when we use $q(\bx)$ to approximate the true distribution $p(\bx)$.
Given two probability distributions $p(\bx)$ and $q(\bx)$, the Kullback-Leibler divergence of $q(\bx)$ _from_ $p(\bx)$ is defined to be
\begin{equation}
\KL{p(\bx)}{q(\bx)}=\int_{\R} p(\bx) \log \frac{p(\bx)}{q(\bx)}d\bx
\end{equation}
Note that this dissimilarity measure is _asymmetric_; therefore, we have
\begin{equation}
\KL{p(\bx)}{q(\bx)}\neq \KL{q(\bx)}{p(\bx)}
\end{equation}
This asymmetry also leads to an interesting behavior of the KL divergence depending on the order in which it is evaluated: the KL divergence can either be a _mode-seeking_ or a _mode-covering_ measure.
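To see this asymmetry numerically, here is a small sketch using `tfp` (the two Gaussians are arbitrary choices, used only for illustration):
```
# Sketch: the analytic KL divergence between two Gaussians is not symmetric
p = tfp.distributions.Normal(loc=0., scale=1.)
q = tfp.distributions.Normal(loc=1., scale=2.)
print(tfp.distributions.kl_divergence(p, q).numpy())  # D_KL[p || q]
print(tfp.distributions.kl_divergence(q, p).numpy())  # D_KL[q || p], a different value
```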
<a id="vae"></a>
## Variational auto-encoders
As we have seen in the previous AE course, VAEs are also a form of generative model. However, they are defined from a more sound probabilistic perspective: the goal is to find the underlying probability distribution of the data $p(\mathbf{x})$ based on a set of examples $\mathbf{x}\in\mathbb{R}^{d_{x}}$. To do so, we consider *latent variables* defined in a lower-dimensional space $\mathbf{z}\in\mathbb{R}^{d_{z}}$ ($d_{z} \ll d_{x}$) with the joint probability distribution $p(\mathbf{x}, \mathbf{z}) = p(\mathbf{x} \vert \mathbf{z})p(\mathbf{z})$. Unfortunately, for complex distributions the marginalization integral $p(\mathbf{x})=\int p(\mathbf{x} \vert \mathbf{z})p(\mathbf{z})d\mathbf{z}$ is intractable and cannot be found in closed form.
### Variational inference
The idea of *variational inference* (VI) allows to solve this problem through *optimization* by assuming a simpler approximate distribution $q_{\phi}(\mathbf{z}\vert\mathbf{x})\in\mathcal{Q}$ from a family $\mathcal{Q}$ of approximate densities. Hence, the goal is to minimize the difference between this approximation and the real distribution. Therefore, this turns into the optimization problem of minimizing the Kullback-Leibler (KL) divergence between the parametric approximation and the original density
$$
q_{\phi}^{*}(\mathbf{z}\vert \mathbf{x})=\text{argmin}_{q_{\phi}(\mathbf{z} \vert \mathbf{x})\in\mathcal{Q}} \mathcal{D}_{KL} \big[ q_{\phi}\left(\mathbf{z} \vert \mathbf{x}\right) \parallel p\left(\mathbf{z} \vert \mathbf{x}\right) \big]
\tag{2}
$$
By developing this KL divergence and re-arranging terms (the detailed development can be found in [3](#reference1)), we obtain
$$
\log{p(\mathbf{x})} - D_{KL} \big[ q_{\phi}(\mathbf{z} \vert \mathbf{x}) \parallel p(\mathbf{z} \vert \mathbf{x}) \big] =
\mathbb{E}_{\mathbf{z}} \big[ \log{p(\mathbf{x} \vert \mathbf{z})}\big] - D_{KL} \big[ q_{\phi}(\mathbf{z} \vert \mathbf{x}) \parallel p(\mathbf{z}) \big]
\tag{3}
$$
This formulation describes the quantity we want to maximize $\log p(\mathbf{x})$ minus the error we make by using an approximate $q$ instead of $p$. Therefore, we can optimize this alternative objective, called the *evidence lower bound* (ELBO)
$$
\begin{equation}
\mathcal{L}_{\theta, \phi} = \mathbb{E} \big[ \log{ p_\theta (\mathbf{x|z}) } \big] - \beta \cdot D_{KL} \big[ q_\phi(\mathbf{z|x}) \parallel p_\theta(\mathbf{z}) \big]
\end{equation}
\tag{4}
$$
We can see that this equation involves $q_{\phi}(\mathbf{z} \vert \mathbf{x})$ which *encodes* the data $\mathbf{x}$ into the latent representation $\mathbf{z}$ and a *decoder* $p(\mathbf{x} \vert \mathbf{z})$, which allows generating a data vector $\mathbf{x}$ given a latent configuration $\mathbf{z}$. Hence, this structure defines the *Variational Auto-Encoder* (VAE).
The VAE objective can be interpreted intuitively. The first term increases the likelihood of the data generated given a configuration of the latent, which amounts to minimize the *reconstruction error*. The second term represents the error made by using a simpler posterior distribution $q_{\phi}(\mathbf{z} \vert \mathbf{x})$ compared to the true prior $p_{\theta}(\mathbf{z})$. Therefore, this *regularizes* the choice of approximation $q$ so that it remains close to the true posterior distribution [3].
### Reparametrization trick
Now, while this formulation has some very interesting properties, it involves sampling operations, where we need to draw the latent point $\mathbf{z}$ from the distribution $q_{\phi}(\mathbf{z}\vert\mathbf{x})$. The simplest choice for this variational approximate posterior is a multivariate Gaussian with a diagonal covariance structure (which leads to independent Gaussians on every dimension, called the *mean-field* family) so that
$$
\text{log}q_\phi(\mathbf{z}\vert\mathbf{x}) = \text{log}\mathcal{N}(\mathbf{z};\mathbf{\mu}^{(i)},\mathbf{\sigma}^{(i)})
\tag{5}
$$
where the mean $\mathbf{\mu}^{(i)}$ and standard deviation $\mathbf{\sigma}^{(i)}$ of the approximate posterior are different for each input point and are produced by our encoder parametrized by its variational parameters $\phi$. Now the KL divergence between this distribution and a simple prior $\mathcal{N}(\mathbf{0}, \mathbf{I})$ can be very simply obtained with
$$
D_{KL} \big[ q_\phi(\mathbf{z|x}) \parallel \mathcal{N}(\mathbf{0}, \mathbf{I}) \big] = -\frac{1}{2}\sum_{j=1}^{D}\left(1+\text{log}((\sigma^{(i)}_j)^2)-(\mu^{(i)}_j)^2-(\sigma^{(i)}_j)^2\right)
\tag{6}
$$
While this looks convenient, we will still have to perform gradient descent through a sampling operation, which is non-differentiable. To solve this issue, we can use the *reparametrization trick*, which takes the sampling operation outside of the gradient flow by considering $\mathbf{z}^{(i)}=\mathbf{\mu}^{(i)}+\mathbf{\sigma}^{(i)}\odot\mathbf{\epsilon}^{(l)}$ with $\mathbf{\epsilon}^{(l)}\sim\mathcal{N}(\mathbf{0}, \mathbf{I})$
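A minimal sketch of this trick for a single diagonal Gaussian (the values of $\mathbf{\mu}$ and $\mathbf{\sigma}$ are illustrative only), together with a check of the closed-form KL against `tfp`'s analytic result:
```
import tensorflow as tf
import tensorflow_probability as tfp

# Sketch: reparametrized sampling z = mu + sigma * eps, with eps drawn from N(0, I)
mu = tf.constant([0.5, -1.0])
sigma = tf.constant([1.2, 0.3])
eps = tf.random.normal(shape=mu.shape)
z = mu + sigma * eps  # differentiable with respect to mu and sigma

# Check the closed-form KL (per dimension) against tfp's analytic result
kl_closed = -0.5 * (1. + tf.math.log(tf.square(sigma)) - tf.square(mu) - tf.square(sigma))
kl_tfp = tfp.distributions.kl_divergence(
    tfp.distributions.Normal(mu, sigma),
    tfp.distributions.Normal(tf.zeros_like(mu), tf.ones_like(sigma)))
print(kl_closed.numpy(), kl_tfp.numpy())  # the two should match
```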
<a id="implem"> </a>
## VAE implementation
As we have seen, VAEs can be simply implemented by decomposing the above series of operations into an `encoder` which represents the distribution $q_\phi(\mathbf{z}\vert\mathbf{x})$, from which we will sample some values $\tilde{\mathbf{z}}$ (using the reparametrization trick) and compute the Kullback-Leibler (KL) divergence. Then, we use these values as input to a `decoder` which represents the distribution $p_\theta(\mathbf{x}\vert\mathbf{z})$ so that we can produce a reconstruction $\tilde{\mathbf{x}}$ and compute the reconstruction error.
Therefore, we can define the VAE based on our previous implementation of the AE that we recall here
```
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
class AE(Model):
def __init__(self, encoder, decoder, encoding_dim):
super(AE, self).__init__()
self.encoding_dim = encoding_dim
self.encoder = encoder
self.decoder = decoder
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
```
In order to move to a probabilistic version, we need to add the latent space sampling mechanism, and change the behavior of our `call` function. This process is implemented in the following `VAE` class.
Note that we purposely rely on an implementation of the `encode` function where the `encoder` first produces an intermediate representation of size `encoding_dims`. Then, this representation goes through two separate layers for encoding $\mathbf{\mu}$ and $\mathbf{\sigma}$. This provides a clearer implementation, with the added bonus that we can ensure that $\mathbf{\sigma} > 0$ (through the `softplus` activation).
```
class VAE(AE):
def __init__(self, encoder, decoder, encoding_dims, latent_dims):
super(VAE, self).__init__(encoder, decoder, encoding_dims)
self.latent_dims = latent_dims
        self.mu = layers.Dense(self.latent_dims)  # no activation: the mean can take any real value
self.sigma = layers.Dense(self.latent_dims, activation='softplus')
def encode(self, x):
x = self.encoder(x)
mu = self.mu(x)
sigma = self.sigma(x)
return mu, sigma
def decode(self, z):
return self.decoder(z)
def call(self, x):
# Encode the inputs
z_params = self.encode(x)
# Obtain latent samples and latent loss
z_tilde, kl_div = self.latent(x, z_params)
# Decode the samples
x_tilde = self.decode(z_tilde)
return x_tilde, kl_div
def latent(self, x, z_params):
n_batch = x.shape[0]
# Retrieve mean and var
mu, sigma = z_params
# Re-parametrize
q = tfp.distributions.Normal(np.zeros(mu.shape[1]), np.ones(sigma.shape[1]))
z = (sigma * tf.cast(q.sample(n_batch), 'float32')) + mu
        # Compute KL divergence in closed form (sigma is the standard deviation here)
        kl_div = -0.5 * tf.reduce_sum(1 + 2. * tf.math.log(sigma) - tf.square(mu) - tf.square(sigma))
kl_div = kl_div / n_batch
return z, kl_div
```
Now the interesting aspect of VAEs is that we can define any parametric function as `encoder` and `decoder`, as long as we can optimize them. Here, we will rely on simple feed-forward neural networks, but these can be largely more complex (with limitations that we will discuss later in the tutorial).
```
def construct_encoder_decoder(nin, n_latent = 16, n_hidden = 512, n_classes = 1):
# Encoder network
encoder = tf.keras.Sequential([
layers.Flatten(),
layers.Dense(n_hidden, activation='relu'),
layers.Dense(n_hidden, activation='relu'),
layers.Dense(n_hidden, activation='relu'),
])
# Decoder network
decoder = tf.keras.Sequential([
layers.Dense(n_hidden, activation='relu'),
layers.Dense(n_hidden, activation='relu'),
layers.Dense(nin * n_classes, activation='sigmoid'),
layers.Reshape((28, 28))
])
return encoder, decoder
```
### Evaluating the error
In the definition of the `VAE` class, we directly included the computation of the $D_{KL}$ term to regularize our latent space. However, remember that the complete loss of equation (4) also contains a *reconstruction loss* which compares our reconstructed output to the original data.
While there are several options to compare the error between two elements, there are usually two preferred choices among the generative literature depending on how we consider our problem
1. If we consider each dimension (pixel) to be a binary unit (following a Bernoulli distribution), we can rely on the `binary cross entropy` between the two distributions
2. If we turn our problem to a set of classifications, where each dimension can belong to a given set of *intensity classes*, then we can compute the `multinomial loss` between the two distributions
In the following, we define both error functions and regroup them in the `reconstruction_loss` call (depending on the `num_classes` considered). However, as the `multinomial loss` requires a large computational overhead, and for the sake of simplicity, we will train all our first models by relying on the `binary cross entropy`
```
optimizer = tf.keras.optimizers.Adam(1e-4)
def compute_loss(model, x):
    x_tilde, kl_div = model(x)
    # x_tilde has already been passed through a sigmoid, so use the binary cross-entropy directly
    eps = 1e-7
    cross_ent = -(x * tf.math.log(x_tilde + eps) + (1. - x) * tf.math.log(1. - x_tilde + eps))
    logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2])
    # Negative ELBO: -(E[log p(x|z)] - KL)
    return -tf.reduce_mean(logpx_z - kl_div)
@tf.function
def train_step(model, x, optimizer):
"""Executes one training step and returns the loss."""
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```
### Optimizing a VAE on a real dataset
For this tutorial, we are going to take a quick shot at a real-life problem by trying to train our VAEs on the `FashionMNIST` dataset. This dataset can be loaded natively in Keras through the `tensorflow.keras.datasets` module as follows
```
# Load (and eventually download) the dataset
(x_train, _), (x_test, _) = fashion_mnist.load_data()
# Normalize the dataset in the [0, 1] range
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
```
The `FashionMNIST` dataset is composed of simple 28x28 black and white images of different items of clothing (such as shoes, bags, pants and shirts). We define a simple function here to display one batch of the test set (note that we keep a fixed batch from the test set in order to evaluate the different variations that we will try in this tutorial).
```
def plot_batch(batch, nslices=8):
# Create one big image for plot
img = np.zeros(((batch.shape[1] + 1) * nslices, (batch.shape[2] + 1) * nslices))
for b in range(batch.shape[0]):
row = int(b / nslices); col = int(b % nslices)
r_p = row * batch.shape[1] + row; c_p = col * batch.shape[2] + col
img[r_p:(r_p+batch.shape[1]),c_p:(c_p+batch.shape[2])] = batch[b]
im = plt.imshow(img, cmap='Greys', interpolation='nearest'),
return im
# Select a random set of fixed data
fixed_batch = x_test[:64]
print(x_test.shape)
plt.figure(figsize=(10, 10))
plot_batch(fixed_batch);
```
Now based on our proposed implementation, the optimization aspects are defined in a very usual way
```
# Using Bernoulli or Multinomial loss
num_classes = 1
# Number of hidden and latent
n_hidden = 512
n_latent = 2
# Compute input dimensionality
nin = fixed_batch.shape[1] * fixed_batch.shape[2]
# Construct encoder and decoder
encoder, decoder = construct_encoder_decoder(nin, n_hidden = n_hidden, n_latent = n_latent, n_classes = num_classes)
# Build the VAE model
model = VAE(encoder, decoder, n_hidden, n_latent)
```
Now all that is left to do is train the model. We define here a training procedure that we will reuse along the future implementations and variations of VAEs and flows. Note that it is set to run for only a small number of `epochs` and, most importantly, is only intended to consider a subsample of the full dataset at each epoch. This is just so that you can test the different models very quickly on any CPU or laptop.
```
def generate_and_save_images(model, epoch, test_sample):
predictions, _ = model(test_sample)
fig = plt.figure(figsize=(4, 4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i + 1)
plt.imshow(predictions[i, :, :], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
epochs=50
test_sample = x_test[0:16, :, :]
for epoch in range(1, epochs + 1):
for train_x in x_train:
train_step(model, tf.expand_dims(train_x, axis=0), optimizer)
loss = tf.keras.metrics.Mean()
for test_x in x_test:
loss(compute_loss(model, tf.expand_dims(test_x, axis=0)))
elbo = -loss.result()
print('Epoch: {}, Test set ELBO: {}'.format(epoch, elbo))
generate_and_save_images(model, epoch, test_sample)
```
### Evaluating generative models
In order to evaluate our upcoming generative models, we will rely on the computation of the Negative Log-Likelihood. This code for the following `evaluate_nll_bpd` is inspired by the [Sylvester flow repository](https://github.com/riannevdberg/sylvester-flows)
```
from scipy.special import logsumexp
def evaluate_nll_bpd(data_loader, model, batch = 500, R = 5):
# Set of likelihood tests
likelihood_test = []
# Go through dataset
for batch_idx, (x, _) in enumerate(data_loader):
for j in range(x.shape[0]):
a = []
      for r in range(0, R):
        cur_x = x[j].unsqueeze(0)
        # Repeat it as a batch (use a new name so the outer batch x is not overwritten)
        x_rep = cur_x.expand(batch, *cur_x.size()[1:]).contiguous()
        x_rep = x_rep.view(batch, -1)
        x_tilde, kl_div = model(x_rep)
        rec = reconstruction_loss(x_tilde, x_rep, average=False)
        a_tmp = (rec + kl_div)
        a.append(- a_tmp.cpu().data.numpy())
# calculate max
a = np.asarray(a)
a = np.reshape(a, (a.shape[0] * a.shape[1], 1))
likelihood_x = logsumexp(a)
likelihood_test.append(likelihood_x - np.log(len(a)))
likelihood_test = np.array(likelihood_test)
nll = - np.mean(likelihood_test)
# Compute the bits per dim (but irrelevant for binary data)
bpd = nll / (np.prod(nin) * np.log(2.))
return nll, bpd
```
Now we can evaluate our VAE model more formally as follows.
```
# Plot final loss
plt.figure()
plt.plot(losses_kld[:, 0].numpy());
# Evaluate log-likelihood and bits per dim
nll, _ = evaluate_nll_bpd(test_loader, model)
print('Negative Log-Likelihood : ' + str(nll))
```
### Limitations of VAEs - (**exercise**)
Although VAEs are extremely powerful tools, they still have some limitations. Here we list the three most important and well-known limitations (all of them are still debated and topics of active research).
1. **Blurry reconstructions.** As can be seen directly in the results of the previous vanilla VAE implementation, the reconstructions appear to be blurry. The precise origin of this phenomenon is still debated, but the proposed explanations are
1. The use of the KL regularization
2. High variance regions of the latent space
3. The reconstruction criterion (expectation)
4. The use of simplistic latent distributions
2. **Posterior collapse.** The previous *blurry reconstructions* issue can be mitigated by using a more powerful decoder. However, relying on a decoder with a large capacity causes the phenomenon of *posterior collapse* where the latent space becomes useless. A nice intuitive explanation can be found [here](https://ermongroup.github.io/blog/a-tutorial-on-mmd-variational-autoencoders/)
3. **Simplistic Gaussian approximation**. In the derivation of the VAE objective, recall that the KL divergence term needs to be computed analytically. This forces us to rely on quite simplistic distribution families, and the Gaussian family might be too simplistic to model real-world data.
In the present tutorial, we show how normalizing flows can be used to largely solve the third limitation, while also addressing the first two problems. Indeed, we will see that normalizing flows lead to sharper reconstructions and also help prevent posterior collapse.
<a id="improve"></a>
## Improving the quality of VAEs
As we discussed in the previous section, several known issues have been reported when using the vanilla VAE implementation. We listed some of the major issues as being
1. **Blurry reconstructions.**
2. **Posterior collapse.**
3. **Simplistic Gaussian approximation**.
Here, we discuss some recent developments that were proposed in the VAE literature and simple adjustments that can be made to (at least partly) alleviate these issues. However, note that some more advanced proposals such as PixelVAE [5](#reference1) and VQ-VAE [6](#reference1) can lead to larger improvements in quality.
### Reducing the blurriness of reconstructions
In this tutorial, we relied on extremely simple decoder functions to show how we could easily define VAEs and normalizing flows together. However, the capacity of the decoder directly influences the quality of the final reconstruction. Therefore, we can naively address this issue by using deeper networks and, since we are dealing with images, convolutional layers.
First, you need to construct a more complex encoder and decoder.
```
def construct_encoder_decoder_complex(nin, n_latent = 16, n_hidden = 512, n_params = 0, n_classes = 1):
# Encoder network
encoder = ...
# Decoder network
decoder = ...
return encoder, decoder
```
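One possible starting point is sketched below, assuming the same interface as the earlier `construct_encoder_decoder` helper (the encoder maps the flattened `nin`-dimensional input to `n_hidden` features, and the decoder maps an `n_latent`-dimensional code to `nin * n_classes` output values) and assuming 28x28 inputs. The layer sizes are illustrative only, not a definitive implementation.
```
import tensorflow as tf
from tensorflow.keras import layers

def construct_encoder_decoder_complex(nin, n_latent = 16, n_hidden = 512, n_params = 0, n_classes = 1):
    # Convolutional encoder: flattened (batch, nin) input -> (batch, n_hidden) features
    encoder = tf.keras.Sequential([
        layers.Reshape((28, 28, 1)),   # assumes nin == 28 * 28
        layers.Conv2D(32, 3, strides=2, padding='same', activation='relu'),
        layers.Conv2D(64, 3, strides=2, padding='same', activation='relu'),
        layers.Flatten(),
        layers.Dense(n_hidden, activation='relu'),
    ])
    # Convolutional decoder: (batch, n_latent) code -> (batch, nin * n_classes) outputs
    decoder = tf.keras.Sequential([
        layers.Dense(7 * 7 * 64, activation='relu'),
        layers.Reshape((7, 7, 64)),
        layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu'),
        layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),
        layers.Conv2DTranspose(n_classes, 3, padding='same'),
        layers.Flatten(),
    ])
    return encoder, decoder
```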
### Preventing posterior collapse with Wasserstein-VAE-MMD (InfoVAE)
As we discussed earlier, the reason behind posterior collapse mostly relates to the KL divergence criterion (a nice intuitive explanation can be found [here](https://ermongroup.github.io/blog/a-tutorial-on-mmd-variational-autoencoders/)). This can be mitigated by relying on a different criterion, such as regularizing the latent distribution by using the *Maximum Mean Discrepancy* (MMD) instead of the KL divergence. This model was independently proposed as the *InfoVAE* and later also as the *Wasserstein-VAE*.
Here we provide a simple implementation of the `InfoVAEMMD` class based on our previous implementations.
```
def compute_kernel(x, y):
return ...
def compute_mmd(x, y):
return ...
class InfoVAEMMD(VAE):
def __init__(self, encoder, decoder):
super(InfoVAEMMD, self).__init__(encoder, decoder)
def latent(self, x, z_params):
return ...
```
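One possible (purely illustrative) completion is sketched below, following the Gaussian-kernel MMD of the InfoVAE formulation. The `latent` override assumes that `z_params` contains the mean and log-variance produced by the encoder head, which may differ from your own `VAE` implementation.
```
import tensorflow as tf

def compute_kernel(x, y):
    # Gaussian (RBF) kernel between every pair of rows in x and y
    x_size, y_size = tf.shape(x)[0], tf.shape(y)[0]
    dim = tf.shape(x)[1]
    tiled_x = tf.tile(tf.expand_dims(x, 1), [1, y_size, 1])
    tiled_y = tf.tile(tf.expand_dims(y, 0), [x_size, 1, 1])
    return tf.exp(-tf.reduce_mean(tf.square(tiled_x - tiled_y), axis=2) / tf.cast(dim, tf.float32))

def compute_mmd(x, y):
    # MMD estimate between the two sample sets x and y
    x_kernel = compute_kernel(x, x)
    y_kernel = compute_kernel(y, y)
    xy_kernel = compute_kernel(x, y)
    return tf.reduce_mean(x_kernel) + tf.reduce_mean(y_kernel) - 2 * tf.reduce_mean(xy_kernel)

class InfoVAEMMD(VAE):
    def __init__(self, encoder, decoder):
        super(InfoVAEMMD, self).__init__(encoder, decoder)
    def latent(self, x, z_params):
        # Assumes z_params = (mu, log_var); adapt to your VAE's parametrization
        mu, log_var = z_params
        z = mu + tf.exp(0.5 * log_var) * tf.random.normal(tf.shape(mu))
        # Regularize by matching samples of q(z) to samples from the prior via MMD
        mmd = compute_mmd(tf.random.normal(tf.shape(z)), z)
        return z, mmd
```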
### Putting it all together
Here we combine all these ideas (except for the MMD, which is not adequate as the flow definition already regularizes the latent space without the KL divergence) to perform a more advanced optimization on this dataset. Hence, we will rely on the complex encoder and decoder with gated convolutions, the multinomial loss and the normalizing flows in order to improve the overall quality of our reconstructions.
```
# Size of latent space
n_latent = 16
# Number of hidden units
n_hidden = 256
# Rely on Bernoulli or multinomial
num_classes = 128
# Construct encoder and decoder
encoder, decoder = ...
# Create VAE or (InfoVAEMMD - WAE) model
model_flow_p = ...
# Create optimizer algorithm
optimizer = ...
# Add learning rate scheduler
scheduler = ...
# Launch our optimization
losses_flow_param = ...
```
*NB*: It seems that the multinomial version has a hard time converging. Although I only let this run for 200 epochs and only on a subsample of 5000 examples, it might simply need more time, but this might also come from a mistake somewhere in my code ... If you spot something odd please let me know :)
### References
<a id="reference1"></a>
[1] Rezende, Danilo Jimenez, and Shakir Mohamed. "Variational inference with normalizing flows." _arXiv preprint arXiv:1505.05770_ (2015). [link](http://arxiv.org/pdf/1505.05770)
[2] Kingma, Diederik P., Tim Salimans, and Max Welling. "Improving Variational Inference with Inverse Autoregressive Flow." _arXiv preprint arXiv:1606.04934_ (2016). [link](https://arxiv.org/abs/1606.04934)
[3] Kingma, D. P., & Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. (2013). [link](https://arxiv.org/pdf/1312.6114)
[4] Rezende, D. J., Mohamed, S., & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082. (2014). [link](https://arxiv.org/pdf/1401.4082)
[5] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., & Courville, A. (2016). Pixelvae: A latent variable model for natural images. arXiv preprint arXiv:1611.05013. [link](https://arxiv.org/pdf/1611.05013)
[6] Van den Oord, A., & Vinyals, O. (2017). Neural discrete representation learning. In NIPS 2017 (pp. 6306-6315). [link](http://papers.nips.cc/paper/7210-neural-discrete-representation-learning.pdf)
### Inspirations and resources
https://blog.evjang.com/2018/01/nf1.html
https://github.com/ex4sperans/variational-inference-with-normalizing-flows
https://akosiorek.github.io/ml/2018/04/03/norm_flows.html
https://github.com/abdulfatir/normalizing-flows
https://github.com/riannevdberg/sylvester-flows
```
!wget https://datahack-prod.s3.amazonaws.com/train_file/train_LZdllcl.csv -O train.csv
!wget https://datahack-prod.s3.amazonaws.com/test_file/test_2umaH9m.csv -O test.csv
!wget https://datahack-prod.s3.amazonaws.com/sample_submission/sample_submission_M0L0uXE.csv -O sample_submission.csv
# Import the required packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Read the train and test data
train=pd.read_csv("train.csv")
train.drop('employee_id',inplace=True,axis = 1)
test=pd.read_csv("test.csv")
# Check the variables in train data
train.columns
# Print datatype of each variable
train.dtypes
# Dimension of the train dataset
train.shape
# Print the head of train dataset
train.head()
# Unique values in each variable of train dataset
train.nunique()
```
### Univariate Analysis
#### Target Variable
```
train['is_promoted'].value_counts(normalize=True)
# Around 91% of employees were not promoted (only ~9% were)
# Unbalanced dataset
```
#### Categorical Independent Variables
```
plt.figure(1)
plt.subplot(221)
train['department'].value_counts(normalize=True).plot.bar(figsize=(20,10), title= 'Department')
plt.subplot(222)
train['awards_won?'].value_counts(normalize=True).plot.bar(title= 'Awards won')
plt.subplot(223)
train['education'].value_counts(normalize=True).plot.bar(title= 'Education')
plt.subplot(224)
train['gender'].value_counts(normalize=True).plot.bar(title= 'Gender')
plt.show()
# The bar plots above show the distribution of department, awards won,
# education level and gender across employees
train['KPIs_met >80%'].value_counts(normalize=True).plot.bar(title= 'KPI met greater than 80')
plt.figure(1)
plt.subplot(221)
train['region'].value_counts(normalize=True).plot.bar(figsize=(20,10), title= 'Region')
plt.subplot(222)
train['recruitment_channel'].value_counts(normalize=True).plot.bar(title='Recruitment Channels')
plt.subplot(223)
train['no_of_trainings'].value_counts(normalize=True).plot.bar(title= 'No of Trainings')
plt.subplot(224)
train['previous_year_rating'].value_counts(normalize=True).plot.bar(title= 'Previous year ratings')
plt.show()
# The bar plots above show the distribution of region, recruitment channel,
# number of trainings and previous year rating across employees
```
#### Numerical Independent Variables
```
sns.distplot(train['age']);
# Distribution of employee age shown above
sns.distplot(train['length_of_service']);
sns.distplot(train['avg_training_score']);
```
### Bivariate Analysis
```
# Correlation between numerical variables
matrix = train.corr()
f, ax = plt.subplots(figsize=(9, 6))
sns.heatmap(matrix, vmax=.8, square=True, cmap="BuPu");
# Not much correlation between the variables
# department vs is_promoted
plt.figure(figsize=(12,4))
sns.barplot(train['department'], train['is_promoted'])
plt.figure(figsize=(20,8))
# region vs is_promoted
sns.barplot(train['region'], train['is_promoted'])
# recruitment_channel vs is_promoted
sns.barplot(train['recruitment_channel'], train['is_promoted'])
# no_of_trainings vs is_promoted
sns.barplot(train['no_of_trainings'], train['is_promoted'])
# previous_year_rating vs is_promoted
sns.barplot(train['previous_year_rating'], train['is_promoted'])
# education vs is_promoted
plt.figure(figsize=(12,4))
sns.barplot(train['education'], train['is_promoted'])
plt.figure(figsize=(20,8))
# length_of_service vs is_promoted
sns.barplot(train['length_of_service'], train['is_promoted'])
# KPIs_met >80% vs is_promoted
sns.barplot(train['KPIs_met >80%'], train['is_promoted'])
# awards_won? vs is_promoted
sns.barplot(train['awards_won?'], train['is_promoted'])
# The bar heights show the mean promotion rate within each category;
# employees who met >80% of KPIs or won awards appear to be promoted more often
```
### Missing Values Treatment
```
# Check the number of missing values in each variable
train.isnull().sum()
# The education and previous_year_rating variables have missing values
test = pd.read_csv('test.csv')
test.drop('employee_id',inplace=True,axis = 1)
test.head()
test['education'].fillna('other',inplace=True)
test['previous_year_rating'].fillna(99,inplace=True)
train['education'].fillna('other',inplace=True)
train['previous_year_rating'].fillna(99,inplace=True)
```
### Gradient Boosting Classifier
```
train.head()
# Save target variable in separate dataset
X = train.drop('is_promoted',axis=1)
y = train.is_promoted
test.head()
# Apply dummies to the dataset
X=pd.get_dummies(X)
test=pd.get_dummies(test)
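# Note (added sketch): pd.get_dummies is applied to train and test separately, so a
# category present in only one of them would produce mismatched dummy columns.
# Aligning the two frames guards against that, filling any absent column with 0.
X, test = X.align(test, join='left', axis=1, fill_value=0)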
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import metrics  # additional sklearn functions
from sklearn.model_selection import cross_val_score, GridSearchCV  # cross-validation and grid search
# same model-fitting helper as used for xgboost tuning
def modelfit(alg, dtrain, predictors, performCV=True, printFeatureImportance=True, cv_folds=5):
#Fit the algorithm on the data
alg.fit(dtrain[predictors],y)
#Predict training set:
dtrain_predictions = alg.predict(dtrain[predictors])
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
#Perform cross-validation:
if performCV:
        cv_score = cross_val_score(alg, dtrain[predictors], y, cv=cv_folds, scoring='f1')
#Print model report:
print("\nModel Report")
print("F1 Score :",metrics.f1_score(y, dtrain_predictions))
if performCV:
print("CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" % (np.mean(cv_score),np.std(cv_score),np.min(cv_score),np.max(cv_score)))
#Print Feature Importance:
if printFeatureImportance:
feat_imp = pd.Series(alg.feature_importances_, predictors).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
#Choose all predictors except target & IDcols
predictors = [x for x in X.columns]
gbm0 = GradientBoostingClassifier(random_state=42,verbose = 1)
modelfit(gbm0,X, predictors)
param_test1 = {'n_estimators':np.arange(180,400,20)}
gsearch1 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1,verbose = 1, min_samples_split=500,min_samples_leaf=50,max_depth=5,max_features='sqrt',subsample=0.8,random_state=10),
param_grid = param_test1, scoring='f1',n_jobs=-1, cv=3,verbose=1)
gsearch1.fit(X,y)
gsearch1.cv_results_, gsearch1.best_params_, gsearch1.best_score_
#tuning max depth and min samples split
param_test2 = {'max_depth':np.arange(5,10,2),'min_samples_split':np.arange(500,1001,100)}
gsearch2 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1,verbose = 1, n_estimators=600, max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test2, scoring='f1',n_jobs=-1, cv=3,verbose =1)
gsearch2.fit(X,y)
gsearch2.cv_results_, gsearch2.best_params_, gsearch2.best_score_
#Tuning min_samples_leaf after updating the latest hyperparameter values i.e max_depth and min_samples_split
param_test3 = {'min_samples_leaf':np.arange(50,100,10)}
gsearch3 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=600,min_samples_split=600,max_depth=7,max_features='sqrt',verbose = 1, subsample=0.8, random_state=10),
param_grid = param_test3, scoring='f1',n_jobs=-1, cv=3,verbose = 1)
gsearch3.fit(X,y)
gsearch3.cv_results_, gsearch3.best_params_, gsearch3.best_score_
param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, verbose = 1 , n_estimators=600,max_depth=7,min_samples_split=600, min_samples_leaf=60, subsample=0.8, random_state=10,max_features=7),
param_grid = param_test5, scoring='f1',n_jobs=-1, cv=3,verbose = 1)
gsearch5.fit(X,y)
gsearch5.cv_results_, gsearch5.best_params_, gsearch5.best_score_
gbm_tuned_1 = GradientBoostingClassifier(learning_rate=0.1, n_estimators=600,max_depth=7, min_samples_split=600,min_samples_leaf=60, subsample=0.8, random_state=10, max_features=7,verbose=1 )
modelfit(gbm_tuned_1,X,predictors)
pred = gbm_tuned_1.predict(test)
# Read the submission file
submission=pd.read_csv("sample_submission.csv")
submission.head()
# Fill the is_promoted variable with the predictions
submission['is_promoted']=pred
submission['is_promoted'] = submission['is_promoted'].astype(np.int64)
submission.head()
submission['is_promoted'].value_counts()
# Converting the submission file to csv format
submission.to_csv('gbm_submission.csv', index=False)
```
score on leaderboard - 0.71145
<a href="https://colab.research.google.com/github/unicamp-dl/IA025_2022S1/blob/main/ex07/Guilherme_Pereira/Aula_7_Guilherme_Pereira.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
nome = 'Guilherme Pereira'
print(f'Meu nome é {nome}')
```
# Exercise: Language Model (Bengio 2003) - MLP + Embeddings
In this exercise we will train a simple neural network to predict the next word of a text, given the previous words as input. This task is called "Language Modeling".
This dataset is already reasonably large, and you will most likely need to run your experiments on a GPU.
Some useful advice:
- **ATTENTION:** the dataset is quite large. Do not try to print it.
- While debugging, make your dataset very small, so that debugging is faster and does not require a GPU. Only switch the GPU on once your training loop is already working.
- Do not leave this exercise for the last minute. It is labor-intensive.
```
# we will use the transformers library to get access to the BERT tokenizer.
!pip install transformers
```
## Importing the packages
```
import collections
import itertools
import functools
import math
import random
import torch
import torch.nn as nn
import numpy as np
from torch.utils.data import DataLoader
from tqdm import tqdm_notebook
# Check which GPU we are using
!nvidia-smi
if torch.cuda.is_available():
dev = "cuda:0"
else:
dev = "cpu"
device = torch.device(dev)
print('Using {}'.format(device))
```
## Implementing MyDataset
```
from typing import List
def tokenize(text: str, tokenizer):
return tokenizer(text, return_tensors=None, add_special_tokens=False).input_ids
class MyDataset():
def __init__(self, texts: List[str], tokenizer, context_size: int):
        # Write your code here
self.tokens, self.target = [], []
for text in texts:
ids = tokenize(text, tokenizer)
for i in range(len(ids)-context_size):
self.tokens.append(ids[i:i + context_size])
self.target.append(ids[i + context_size])
self.tokens = torch.tensor(self.tokens)
self.target = torch.tensor(self.target)
    def __len__(self):
        # Write your code here
        return len(self.target)
    def __getitem__(self, idx):
        # Write your code here
        return self.tokens[idx], self.target[idx]
```
## Test whether your MyDataset implementation is correct
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
dummy_texts = ['Eu gosto de correr', 'Ela gosta muito de comer pizza']
dummy_dataset = MyDataset(texts=dummy_texts, tokenizer=tokenizer, context_size=3)
dummy_loader = DataLoader(dummy_dataset, batch_size=6, shuffle=False)
assert len(dummy_dataset) == 5
print('passou no assert de tamanho do dataset')
first_batch_input, first_batch_target = next(iter(dummy_loader))
correct_first_batch_input = torch.LongTensor(
[[ 3396, 10303, 125],
[ 1660, 5971, 785],
[ 5971, 785, 125],
[ 785, 125, 1847],
[ 125, 1847, 13779]])
correct_first_batch_target = torch.LongTensor([13239, 125, 1847, 13779, 15616])
assert torch.equal(first_batch_input, correct_first_batch_input)
print('Passou no assert de input')
assert torch.equal(first_batch_target, correct_first_batch_target)
print('Passou no assert de target')
```
# Loading the dataset
We will use a small sample of the [BrWaC](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) dataset to train and evaluate our language model.
```
!wget -nc https://storage.googleapis.com/unicamp-dl/ia025a_2022s1/aula7/sample_brwac.txt
# Load datasets
context_size = 9
valid_examples = 100
test_examples = 100
texts = open('sample_brwac.txt').readlines()
# print('Truncating for debugging purposes.')
# texts = texts[:500]
training_texts = texts[:-(valid_examples + test_examples)]
valid_texts = texts[-(valid_examples + test_examples):-test_examples]
test_texts = texts[-test_examples:]
training_dataset = MyDataset(texts=training_texts, tokenizer=tokenizer, context_size=context_size)
valid_dataset = MyDataset(texts=valid_texts, tokenizer=tokenizer, context_size=context_size)
test_dataset = MyDataset(texts=test_texts, tokenizer=tokenizer, context_size=context_size)
print(f'training examples: {len(training_dataset)}')
print(f'valid examples: {len(valid_dataset)}')
print(f'test examples: {len(test_dataset)}')
class LanguageModel(torch.nn.Module):
def __init__(self, vocab_size, context_size, embedding_dim, hidden_size):
"""
        Implements the Neural Language Model proposed by Bengio et al.
Args:
vocab_size (int): Size of the input vocabulary.
context_size (int): Size of the sequence to consider as context for prediction.
embedding_dim (int): Dimension of the embedding layer for each word in the context.
hidden_size (int): Size of the hidden layer.
"""
        # Write your code here.
super(LanguageModel, self).__init__()
self.context_size = context_size
self.embeddings_dim = embedding_dim
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.hidden_layer1 = nn.Linear(self.context_size*self.embeddings_dim, hidden_size*4)
self.hidden_layer2 = nn.Linear(hidden_size*4, hidden_size*2)
self.hidden_layer3 = nn.Linear(hidden_size*2, hidden_size)
self.output_layer = nn.Linear(hidden_size, vocab_size, bias=False)
self.relu = nn.ReLU()
def forward(self, inputs):
"""
Args:
inputs is a LongTensor of shape (batch_size, context_size)
"""
        # Write your code here.
out = self.embeddings(inputs).view(-1, self.context_size*self.embeddings_dim)
out = self.relu(self.hidden_layer1(out))
out = self.relu(self.hidden_layer2(out))
out = self.relu(self.hidden_layer3(out))
return self.output_layer(out)
```
## Test the model with an example
```
model = LanguageModel(
vocab_size=tokenizer.vocab_size,
context_size=context_size,
embedding_dim=64,
hidden_size=128,
).to(device)
sample_train, _ = next(iter(DataLoader(training_dataset)))
sample_train_gpu = sample_train.to(device)
model(sample_train_gpu).shape
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'Number of model parameters: {num_params}')
```
## Perplexity assert
```
random.seed(123)
np.random.seed(123)
torch.manual_seed(123)
def perplexity(logits, target):
"""
Computes the perplexity.
Args:
logits: a FloatTensor of shape (batch_size, vocab_size)
target: a LongTensor of shape (batch_size,)
Returns:
A float corresponding to the perplexity.
"""
    # Write your code here.
return torch.exp(nn.functional.cross_entropy(logits,target))
n_examples = 1000
sample_train, target_token_ids = next(iter(DataLoader(training_dataset, batch_size=n_examples)))
sample_train_gpu = sample_train.to(device)
target_token_ids = target_token_ids.to(device)
logits = model(sample_train_gpu)
my_perplexity = perplexity(logits=logits, target=target_token_ids)
print(f'my perplexity: {int(my_perplexity)}')
print(f'correct initial perplexity: {tokenizer.vocab_size}')
assert math.isclose(my_perplexity, tokenizer.vocab_size, abs_tol=2000)
print('Passou o no assert da perplexidade')
```
## Training and Validation Loop
```
max_examples = 200_000_000
eval_every_steps = 5000
lr = 3.5e-5
batch_size = 1024
model = LanguageModel(
vocab_size=tokenizer.vocab_size,
context_size=context_size,
embedding_dim=128,
hidden_size=256,
).to(device)
train_loader = DataLoader(training_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
validation_loader = DataLoader(valid_dataset, batch_size=batch_size)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
def train_step(input, target):
model.train()
model.zero_grad()
logits = model(input.to(device))
loss = nn.functional.cross_entropy(logits, target.to(device))
loss.backward()
optimizer.step()
return loss.item()
def validation_step(input, target):
model.eval()
logits = model(input)
loss = nn.functional.cross_entropy(logits, target)
return loss.item()
train_losses = []
n_examples = 0
step = 0
ver = 0
while n_examples < max_examples:
for input, target in train_loader:
loss = train_step(input.to(device), target.to(device))
train_losses.append(loss)
if step % eval_every_steps == 0:
train_ppl = np.exp(np.average(train_losses))
with torch.no_grad():
valid_ppl = np.exp(np.average([
validation_step(input.to(device), target.to(device))
for input, target in validation_loader]))
print(f'{step} steps; {n_examples} examples so far; train ppl: {train_ppl:.2f}, valid ppl: {valid_ppl:.2f}')
train_losses = []
n_examples += len(input) # Increment of batch size
step += 1
if n_examples >= max_examples:
break
```
## Final evaluation on the test dataset
Bonus: the model with the lowest perplexity on the test dataset will receive an extra 0.5 points on the final grade.
```
test_loader = DataLoader(test_dataset, batch_size=64)
with torch.no_grad():
test_ppl = np.exp(np.average([
validation_step(input.to(device), target.to(device))
for input, target in test_loader
]))
print(f'test perplexity: {test_ppl}')
```
## Test your model with a sentence
Pick a sentence generated by the model that you find interesting.
```
prompt = 'Eu estou sozinho, sinto muita falta da minha namorada'
max_output_tokens = 10
for _ in range(max_output_tokens):
input_ids = tokenize(text=prompt, tokenizer=tokenizer)
    input_ids_truncated = input_ids[-context_size:]  # We use only the last <context_size> tokens as input to the model.
    logits = model(torch.LongTensor([input_ids_truncated]).to(device))
    # With argmax, the model's output at each step is the token with the highest probability.
    # This is called greedy decoding.
    predicted_id = torch.argmax(logits).item()
    input_ids += [predicted_id]  # We append the token chosen at this step to the input.
prompt = tokenizer.decode(input_ids)
print(prompt)
```
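As a side note, greedy decoding always picks the single most likely token, which tends to make the generated text repetitive. A small sketch of an alternative, temperature sampling, is shown below; it reuses the `logits`, `input_ids` and `tokenizer` objects from the loop above.
```
# Sketch: sample from the softmax distribution with a temperature instead of taking argmax
temperature = 0.8
probs = torch.softmax(logits.squeeze(0) / temperature, dim=-1)
sampled_id = torch.multinomial(probs, num_samples=1).item()
print(tokenizer.decode(input_ids[:-1] + [sampled_id]))
```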
# COMP90051 Workshop 3
## Logistic regression
***
In this workshop we'll be implementing L2-regularised logistic regression using `scipy` and `numpy`.
Our key objectives are:
* to become familiar with the optimisation problem that sits behind L2-regularised logistic regression;
* to apply polynomial basis expansion and recognise when it's useful; and
* to experiment with the effect of L2 regularisation.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
### 1. Binary classification data
Let's begin by generating some binary classification data.
To make it easy for us to visualise the results, we'll stick to a two-dimensional feature space.
```
from sklearn.datasets import make_circles
X, Y = make_circles(n_samples=300, noise=0.1, factor=0.7, random_state=90051)
plt.plot(X[Y==0,0], X[Y==0,1], 'o', label = "y=0")
plt.plot(X[Y==1,0], X[Y==1,1], 's', label = "y=1")
plt.legend()
plt.xlabel("$x_0$")
plt.ylabel("$x_1$")
plt.show()
```
**Question:** What's interesting about this data? Do you think logistic regression will perform well?
**Answer:** *This question is answered in section 3.*
In preparation for fitting and evaluating a logistic regression model, we randomly partition the data into train/test sets. We use the `train_test_split` function from `sklearn`.
```
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=90051)
print("Training set has {} instances. Test set has {} instances.".format(X_train.shape[0], X_test.shape[0]))
```
### 2. Logistic regression objective function
Recall from lectures, that logistic regression models the distribution of the binary class $y$ *conditional* on the feature vector $\mathbf{x}$ as
$$
y | \mathbf{x} \sim \mathrm{Bernoulli}[\sigma(\mathbf{w}^T \mathbf{x} + b)]
$$
where $\mathbf{w}$ is the weight vector, $b$ is the bias term and $\sigma(z) = 1/(1 + e^{-z})$ is the logistic function.
To simplify the notation, we'll collect the model parameters $\mathbf{w}$ and $b$ in a single vector $\mathbf{v} = [b, \mathbf{w}]$.
Fitting this model amounts to choosing $\mathbf{v}$ that minimises the sum of cross-entropies over the instances ($i = 1,\ldots,n$) in the training set
$$
f_\mathrm{cross-ent}(\mathbf{v}; \mathbf{X}, \mathbf{Y}) = - \sum_{i = 1}^{n} \left\{ y_i \log \sigma(\mathbf{w}^T \mathbf{x}_i + b) + (1 - y_i) \log (1 - \sigma(\mathbf{w}^T \mathbf{x}_i + b)) \right\}
$$
Often a regularisation term of the form $f_\mathrm{reg}(\mathbf{w}; \lambda) = \frac{1}{2} \lambda \mathbf{w}^T \mathbf{w}$ is added to the objective to penalize large weights (this can help to prevent overfitting). Note that $\lambda \geq 0$ controls the strength of the regularisation term.
Putting this together, our goal is to minimise the following objective function with respect to $\mathbf{w}$ and $b$:
$$
f(\mathbf{v}; \mathbf{X}, \mathbf{Y}, \lambda) = f_\mathrm{reg}(\mathbf{w}; \lambda) + f_\mathrm{cross-ent}(\mathbf{v}; \mathbf{X}, \mathbf{Y})
$$
**Question:** Why aren't we regularising the entire parameter vector $\mathbf{v}$? Notice that only $\mathbf{w}$ is included in $f_\mathrm{reg}$—in other words $b$ is excluded from regularisation.
**Answer:** *If we were to replace $\mathbf{w}$ with $\mathbf{v}$ in the regularisation term, we'd be penalising large $b$. This is not a good idea, because a large bias may be required for some data sets—and restricting the bias doesn't help with generalisation.*
We're going to find a solution to this minimisation problem using the BFGS algorithm (named after the inventors Broyden, Fletcher, Goldfarb and Shanno). BFGS is a "hill-climbing" algorithm like gradient descent; however, it additionally makes use of second-order derivative information (by approximating the Hessian). It converges in fewer iterations than gradient descent (its convergence rate is *superlinear* whereas gradient descent is only *linear*).
We'll use an implementation of BFGS provided in `scipy` called `fmin_bfgs`. The algorithm requires two functions as input: (i) a function that evaluates the objective $f(\mathbf{v}; \ldots)$ and (ii) a function that evaluates the gradient $\nabla_{\mathbf{v}} f(\mathbf{v}; \ldots)$.
Let's start by writing a function to compute $f(\mathbf{v}; \ldots)$.
```
from scipy.special import expit # this is the logistic function
# v: parameter vector
# X: feature matrix
# Y: class labels
# Lambda: regularisation constant
def obj_fn(v, X, Y, Lambda):
prob_1 = expit(np.dot(X,v[1::]) + v[0])
reg_term = 0.5 * Lambda * np.dot(v[1::],v[1::]) # fill in
cross_entropy_term = - np.dot(Y, np.log(prob_1)) - np.dot(1. - Y, np.log(1. - prob_1))
return reg_term + cross_entropy_term # fill in
```
Now for the gradient, we use the following result (if you're familiar with vector calculus, you may wish to derive this yourself):
$$
\nabla_{\mathbf{v}} f(\mathbf{v}; \ldots) = \left[\frac{\partial f(\mathbf{w}, b;\ldots)}{\partial b}, \nabla_{\mathbf{w}} f(\mathbf{w}, b; \ldots) \right] = \left[\sum_{i = 1}^{n} \sigma(\mathbf{w}^T \mathbf{x}_i + b) - y_i, \lambda \mathbf{w} + \sum_{i = 1}^{n} (\sigma(\mathbf{w}^T \mathbf{x}_i + b) - y_i)\mathbf{x}_i\right]
$$
The function below implements $\nabla_{\mathbf{v}} f(\mathbf{v}; \ldots)$.
```
# v: parameter vector
# X: feature matrix
# Y: class labels
# Lambda: regularisation constant
def grad_obj_fn(v, X, Y, Lambda):
prob_1 = expit(np.dot(X, v[1::]) + v[0])
grad_b = np.sum(prob_1 - Y)
grad_w = Lambda * v[1::] + np.dot(prob_1 - Y, X)
return np.insert(grad_w, 0, grad_b)
```
### 3. Solving the minimization problem using BFGS
Now that we've implemented functions to compute the objective and the gradient, we can plug them into `fmin_bfgs`.
Specifically, we define a function `my_logistic_regression` which calls `fmin_bfgs` and returns the optimal weight vector.
```
from scipy.optimize import fmin_bfgs
# X: feature matrix
# Y: class labels
# Lambda: regularisation constant
# v_initial: initial guess for parameter vector
def my_logistic_regression(X, Y, Lambda, v_initial, disp=True):
# Function for displaying progress
def display(v):
print('v is', v, 'objective is', obj_fn(v, X, Y, Lambda))
return fmin_bfgs(f=obj_fn, fprime=grad_obj_fn,
x0=v_initial, args=(X, Y, Lambda), disp=disp,
callback=display)
```
Let's try it out!
```
Lambda = 1
v_initial = np.zeros(X_train.shape[1] + 1) # fill in a vector of zeros of appropriate length
v_opt = my_logistic_regression(X_train, Y_train, Lambda, v_initial)
# Function to plot the data points and decision boundary
def plot_results(X, Y, v, trans_func = None):
# Scatter plot in feature space
plt.plot(X[Y==0,0], X[Y==0,1], 'o', label = "y=0")
plt.plot(X[Y==1,0], X[Y==1,1], 's', label = "y=1")
# Compute axis limits
x0_lower = X[:,0].min() - 0.1
x0_upper = X[:,0].max() + 0.1
x1_lower = X[:,1].min() - 0.1
x1_upper = X[:,1].max() + 0.1
# Generate grid over feature space
x0, x1 = np.mgrid[x0_lower:x0_upper:.01, x1_lower:x1_upper:.01]
grid = np.c_[x0.ravel(), x1.ravel()]
if (trans_func is not None):
grid = trans_func(grid) # apply transformation to features
arg = (np.dot(grid, v[1::]) + v[0]).reshape(x0.shape)
# Plot decision boundary (where w^T x + b == 0)
plt.contour(x0, x1, arg, levels=[0], cmap="Greys", vmin=-0.2, vmax=0.2)
plt.legend()
plt.show()
plot_results(X, Y, v_opt)
```
**Question:** Is the solution what you expected? Is it a good fit for the data?
**Answer:** *It's not a good fit because logistic regression is a linear classifier, and the data is not linearly separable.*
**Question:** What's the accuracy of this model? Fill in the code below assuming the following decision function
$$
\hat{y} = \begin{cases}
1, &\mathrm{if} \ p(y = 1|\mathbf{x}) \geq \tfrac{1}{2}, \\
0, &\mathrm{otherwise}.
\end{cases}
$$
```
from sklearn.metrics import accuracy_score
Y_test_pred = ((np.dot(X_test, v_opt[1::]) + v_opt[0]) >= 0)*1 # fill in
accuracy_score(Y_test, Y_test_pred)
```
### 4. Adding polynomial features
We've seen that ordinary logistic regression does poorly on this data set, because the data is not linearly separable in the $x_0,x_1$ feature space.
We can get around this problem using basis expansion. In this case, we'll augment the feature space by adding polynomial features of degree 2. In other words, we replace the original feature matrix $\mathbf{X}$ by a transformed feature matrix $\mathbf{\Phi}$ which contains additional columns corresponding to $x_0^2$, $x_0 x_1$ and $x_1^2$. This is done using the function `add_quadratic_features` defined below.
**Note:** There's a built-in function in `sklearn` for adding polynomial features located at `sklearn.preprocessing.PolynomialFeatures`.
```
# X: original feature matrix
def add_quadratic_features(X):
return np.c_[X, X[:,0]**2, X[:,0]*X[:,1], X[:,1]**2]
Phi_train = add_quadratic_features(X_train)
Phi_test = add_quadratic_features(X_test)
```
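Following up on the note above, here's a quick sketch (for reference only) of how the built-in transformer could produce the equivalent quadratic features; the `get_feature_names_out` call assumes a recent version of `sklearn`.
```
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(degree=2, include_bias=False)
Phi_train_sk = poly.fit_transform(X_train)
Phi_test_sk = poly.transform(X_test)
# In recent scikit-learn versions this lists the generated columns,
# e.g. ['x0', 'x1', 'x0^2', 'x0 x1', 'x1^2']
print(poly.get_feature_names_out())
```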
Let's apply our custom logistic regression function again on the augmented feature space.
```
Lambda = 1
v_initial = np.zeros(Phi_train.shape[1] + 1) # fill in a vector of zeros of appropriate length
v_opt = my_logistic_regression(Phi_train, Y_train, Lambda, v_initial)
plot_results(X, Y, v_opt, trans_func=add_quadratic_features)
```
This time we should get a better result for the accuracy on the test set.
```
from sklearn.metrics import accuracy_score
Y_test_pred = ((np.dot(Phi_test, v_opt[1::]) + v_opt[0]) >= 0)*1 # fill in
accuracy_score(Y_test, Y_test_pred)
```
### 5. Effect of regularisation
So far, we've fixed the regularisation constant so that $\lambda = 1$. (Note it's possible to choose an "optimal" value for $\lambda$ by applying cross-validation.)
**Question:** What do you think will happen if we switch the regularisation off? Try setting $\lambda$ to a small value (say $10^{-3}$) and check whether the accuracy of the model is affected.
**Answer:** *Generally speaking, we risk overfitting if the regularisation constant is too small (or switched off entirely). You should observe that the accuracy on the test set reduces slightly with $\lambda = 10^{-3}$ vs. $\lambda = 1$.*
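One possible way to run this experiment, reusing the helper functions defined above (the exact numbers will depend on the random train/test split):
```
# Re-fit with a much weaker regulariser and compare the test accuracy
Lambda_small = 1e-3
v_initial = np.zeros(Phi_train.shape[1] + 1)
v_opt_small = my_logistic_regression(Phi_train, Y_train, Lambda_small, v_initial, disp=False)
Y_test_pred_small = ((np.dot(Phi_test, v_opt_small[1::]) + v_opt_small[0]) >= 0)*1
print("Test accuracy with Lambda = 1e-3:", accuracy_score(Y_test, Y_test_pred_small))
```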
### 6. Logistic regression using sklearn
Now that you have some insight into the optimisation problem behind logistic regression, you should feel confident in using the built-in implementation in `sklearn` (or other packages).
Note that the `sklearn` implementation handles floating point underflow/overflow more carefully than we have done, and uses faster numerical optimisation algorithms.
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(C=1)
clf.fit(Phi_train, Y_train)
from sklearn.metrics import accuracy_score
Y_test_pred = clf.predict(Phi_test)
accuracy_score(Y_test, Y_test_pred)
```
# Accessing the Trigger
In ATLAS all access to event trigger decision is via the Trigger Decision Tool (TDT). There is quite a bit of information attached to the trigger, and its layout is quite complex - for that reason one should use the TDT to access the data. It is not really possible for a human to navigate the data structures quickly!
```
import matplotlib.pyplot as plt
from config import ds_zee as ds
from func_adl_servicex_xaodr21 import tdt_chain_fired, tmt_match_object
```
## Looking for events that fired a chain
Lets look at $Z \rightarrow ee$ Monte Carlo for a single electron trigger in the event.
```
n_electrons = (ds.Select(lambda e:
{
"n_ele": e.Electrons().Where(lambda e: abs(e.eta()) < 2.5).Count(),
"fired": tdt_chain_fired("HLT_e60_lhmedium_nod0"),
})
.AsAwkwardArray()
.value()
)
plt.hist(n_electrons.n_ele, bins=4, range=(0, 4), label='All Events')
plt.hist(n_electrons.n_ele[n_electrons.fired], bins=4, range=(0, 4), label='Fired Events')
plt.xlabel('Number of Electrons')
plt.ylabel('Number of Events')
plt.title('Electron Trigger and Number of Electrons in the Event')
_ = plt.legend()
```
## Trigger Matching
Next, let's find the electrons that match the trigger that fired above. We'll do this by looking only at events where the trigger has fired, and then asking each electron whether it matches the trigger object within a given $\Delta R$.
```
matched_electrons = (
ds.Where(lambda e: tdt_chain_fired("HLT_e60_lhmedium_nod0"))
.SelectMany(lambda e: e.Electrons())
.Select(
lambda e: {
"pt": e.pt() / 1001.0,
"eta": e.eta(),
"is_trig": tmt_match_object("HLT_e60_lhmedium_nod0", e, 0.7),
}
)
.AsAwkwardArray()
.value()
)
```
To understand the `tmt_match_object` arguments, you'll need to look up its definition on the ATLAS twiki pages linked below.
```
plt.hist(matched_electrons.pt, bins=100, range=(0, 100), label='All Electrons')
trigger_electrons = matched_electrons[matched_electrons.is_trig]
plt.hist(trigger_electrons.pt, bins=100, range=(0, 100), label='Trigger Electrons')
plt.xlabel('Electron $p_T$ [GeV]')
plt.ylabel('Number of Electrons')
_ = plt.legend()
```
## Further Information
* Tutorial on [trigger for analysis](https://indico.cern.ch/event/860971/contributions/3626403/attachments/1973400/3283452/200122_TriggerTutorial.pdf).
* Trigger Group's [Trigger Analysis Tool](https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerAnalysisTools) twiki page (with a [page devoted to the TDT](https://twiki.cern.ch/twiki/bin/view/Atlas/TrigDecisionTool)).
* [Lowest un-prescaled triggers](https://twiki.cern.ch/twiki/bin/view/Atlas/LowestUnprescaled) per data-taking period twiki.
# Optimization of a Dissipative Quantum Gate
```
# NBVAL_IGNORE_OUTPUT
%load_ext watermark
import sys
import os
import qutip
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
import copy
from functools import partial
from itertools import product
%watermark -v --iversions
```
$\newcommand{tr}[0]{\operatorname{tr}}
\newcommand{diag}[0]{\operatorname{diag}}
\newcommand{abs}[0]{\operatorname{abs}}
\newcommand{pop}[0]{\operatorname{pop}}
\newcommand{aux}[0]{\text{aux}}
\newcommand{int}[0]{\text{int}}
\newcommand{opt}[0]{\text{opt}}
\newcommand{tgt}[0]{\text{tgt}}
\newcommand{init}[0]{\text{init}}
\newcommand{lab}[0]{\text{lab}}
\newcommand{rwa}[0]{\text{rwa}}
\newcommand{bra}[1]{\langle#1\vert}
\newcommand{ket}[1]{\vert#1\rangle}
\newcommand{Bra}[1]{\left\langle#1\right\vert}
\newcommand{Ket}[1]{\left\vert#1\right\rangle}
\newcommand{Braket}[2]{\left\langle #1\vphantom{#2}\mid{#2}\vphantom{#1}\right\rangle}
\newcommand{ketbra}[2]{\vert#1\rangle\!\langle#2\vert}
\newcommand{op}[1]{\hat{#1}}
\newcommand{Op}[1]{\hat{#1}}
\newcommand{dd}[0]{\,\text{d}}
\newcommand{Liouville}[0]{\mathcal{L}}
\newcommand{DynMap}[0]{\mathcal{E}}
\newcommand{identity}[0]{\mathbf{1}}
\newcommand{Norm}[1]{\lVert#1\rVert}
\newcommand{Abs}[1]{\left\vert#1\right\vert}
\newcommand{avg}[1]{\langle#1\rangle}
\newcommand{Avg}[1]{\left\langle#1\right\rangle}
\newcommand{AbsSq}[1]{\left\vert#1\right\vert^2}
\newcommand{Re}[0]{\operatorname{Re}}
\newcommand{Im}[0]{\operatorname{Im}}$
This example illustrates the optimization for a quantum gate in an open quantum system, where the dynamics is governed by the Liouville-von Neumann equation. A naive extension of a gate optimization to Liouville space would seem to imply that it is necessary to optimize over the full basis of Liouville space (16 matrices, for a two-qubit gate). However, [Goerz et al., New J. Phys. 16, 055012 (2014)][1] showed that this is not necessary: a set of 3 density matrices is sufficient to track the optimization.
This example reproduces the "Example II" from that paper, considering the optimization towards a $\sqrt{\text{iSWAP}}$ two-qubit gate on a system of two transmons with a shared transmission line resonator.
[1]: https://michaelgoerz.net/research/Goerz_NJP2014.pdf
**Note**: This notebook uses some parallelization features (`parallel_map`/`multiprocessing`). Unfortunately, on Windows (and macOS with Python >= 3.8), `multiprocessing` does not work correctly for functions defined in a Jupyter notebook (due to the [spawn method](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) being used on Windows, instead of Unix-`fork`, see also https://stackoverflow.com/questions/45719956). We can use the third-party [loky](https://loky.readthedocs.io/) library to fix this, but this significantly increases the overhead of multi-process parallelization. The use of parallelization here is for illustration only and makes no guarantee of actually improving the runtime of the optimization.
```
if sys.platform != 'linux':
krotov.parallelization.set_parallelization(use_loky=True)
from krotov.parallelization import parallel_map
```
## The two-transmon system
We consider the Hamiltonian from Eq (17) in the paper, in the rotating wave approximation, together with spontaneous decay and dephasing of each qubit. Altogether, we define the Liouvillian as follows:
```
def two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
):
from qutip import tensor, identity, destroy
b1 = tensor(identity(n_qubit), destroy(n_qubit))
b2 = tensor(destroy(n_qubit), identity(n_qubit))
H0 = (
(ω1 - ωd - δ1 / 2) * b1.dag() * b1
+ (δ1 / 2) * b1.dag() * b1 * b1.dag() * b1
+ (ω2 - ωd - δ2 / 2) * b2.dag() * b2
+ (δ2 / 2) * b2.dag() * b2 * b2.dag() * b2
+ J * (b1.dag() * b2 + b1 * b2.dag())
)
H1_re = 0.5 * (b1 + b1.dag() + b2 + b2.dag()) # 0.5 is due to RWA
H1_im = 0.5j * (b1.dag() - b1 + b2.dag() - b2)
H = [H0, [H1_re, Omega], [H1_im, ZeroPulse]]
A1 = np.sqrt(1 / q1T1) * b1 # decay of qubit 1
A2 = np.sqrt(1 / q2T1) * b2 # decay of qubit 2
A3 = np.sqrt(1 / q1T2) * b1.dag() * b1 # dephasing of qubit 1
A4 = np.sqrt(1 / q2T2) * b2.dag() * b2 # dephasing of qubit 2
L = krotov.objectives.liouvillian(H, c_ops=[A1, A2, A3, A4])
return L
```
We will use internal units GHz and ns. Values in GHz contain an implicit factor 2π, and MHz and μs are converted to GHz and ns, respectively:
```
GHz = 2 * np.pi
MHz = 1e-3 * GHz
ns = 1
μs = 1000 * ns
```
This implicit factor $2 \pi$ is because frequencies ($\nu$) convert to energies as $E = h \nu$, but our propagation routines assume a unit $\hbar = 1$ for energies. Thus, the factor $h / \hbar = 2 \pi$.
We will use the same parameters as those given in Table 2 of the paper:
```
ω1 = 4.3796 * GHz # qubit frequency 1
ω2 = 4.6137 * GHz # qubit frequency 2
ωd = 4.4985 * GHz # drive frequency
δ1 = -239.3 * MHz # anharmonicity 1
δ2 = -242.8 * MHz # anharmonicity 2
J = -2.3 * MHz # effective qubit-qubit coupling
q1T1 = 38.0 * μs # decay time for qubit 1
q2T1 = 32.0 * μs # decay time for qubit 2
q1T2 = 29.5 * μs # dephasing time for qubit 1
q2T2 = 16.0 * μs # dephasing time for qubit 2
T = 400 * ns # gate duration
tlist = np.linspace(0, T, 2000)
```
While in the original paper, each transmon was cut off at 6 levels, here we truncate at 5 levels. This makes the propagation faster, while potentially introducing a slightly larger truncation error.
```
n_qubit = 5 # number of transmon levels to consider
```
In the Liouvillian, note the control being split up into a separate real and imaginary part. As a guess control we use a real-valued constant pulse with an amplitude of 35 MHz, acting over 400 ns, with a 20 ns switch-on and switch-off ramp (see plot below)
```
def Omega(t, args):
E0 = 35.0 * MHz
return E0 * krotov.shapes.flattop(t, 0, T, t_rise=(20 * ns), func='sinsq')
```
The imaginary part starts out as zero:
```
def ZeroPulse(t, args):
return 0.0
```
We can now instantiate the Liouvillian:
```
L = two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
)
```
The guess pulse looks as follows:
```
def plot_pulse(pulse, tlist, xlimit=None):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, None) for t in tlist])
ax.plot(tlist, pulse/MHz)
ax.set_xlabel('time (ns)')
ax.set_ylabel('pulse amplitude (MHz)')
if xlimit is not None:
ax.set_xlim(xlimit)
plt.show(fig)
plot_pulse(L[1][1], tlist)
```
## Optimization objectives
Our target gate is $\Op{O} = \sqrt{\text{iSWAP}}$:
```
SQRTISWAP = qutip.Qobj(np.array(
[[1, 0, 0, 0],
[0, 1 / np.sqrt(2), 1j / np.sqrt(2), 0],
[0, 1j / np.sqrt(2), 1 / np.sqrt(2), 0],
[0, 0, 0, 1]]),
dims=[[2, 2], [2, 2]]
)
```
The key idea explored in the paper is that a set of three density matrices is sufficient to track the optimization
$$
\begin{align}
\Op{\rho}_1
&= \sum_{i=1}^{d} \frac{2 (d-i+1)}{d (d+1)} \ketbra{i}{i} \\
\Op{\rho}_2
&= \sum_{i,j=1}^{d} \frac{1}{d} \ketbra{i}{j} \\
\Op{\rho}_3
&= \sum_{i=1}^{d} \frac{1}{d} \ketbra{i}{i}
\end{align}
$$
In our case, $d=4$ for a two qubit-gate, and the $\ket{i}$, $\ket{j}$ are the canonical basis states $\ket{00}$, $\ket{01}$, $\ket{10}$, $\ket{11}$
```
ket00 = qutip.ket((0, 0), dim=(n_qubit, n_qubit))
ket01 = qutip.ket((0, 1), dim=(n_qubit, n_qubit))
ket10 = qutip.ket((1, 0), dim=(n_qubit, n_qubit))
ket11 = qutip.ket((1, 1), dim=(n_qubit, n_qubit))
basis = [ket00, ket01, ket10, ket11]
```
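As an aside, to make the formulas above concrete, the three density matrices could be constructed explicitly from the logical basis as sketched below. This is purely illustrative; the `krotov.gate_objectives` call further down builds them internally when `liouville_states_set='3states'` is passed.
```
# Explicit construction of rho_1, rho_2, rho_3 from the formulas above (d = 4)
d = len(basis)
rho_1 = sum(
    (2 * (d - i) / (d * (d + 1))) * qutip.ket2dm(basis[i]) for i in range(d)
)
rho_2 = sum(
    (1 / d) * basis[i] * basis[j].dag() for i in range(d) for j in range(d)
)
rho_3 = sum((1 / d) * qutip.ket2dm(basis[i]) for i in range(d))
# Purities used for the weights below: tr(rho_1^2) = 0.3, tr(rho_2^2) = 1.0, tr(rho_3^2) = 0.25
print([(rho * rho).tr() for rho in (rho_1, rho_2, rho_3)])
```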
The three density matrices play different roles in the optimization, and, as shown in the paper, convergence may improve significantly by weighing the states relatively to each other. For this example, we place a strong emphasis on the optimization $\Op{\rho}_1 \rightarrow \Op{O}^\dagger \Op{\rho}_1 \Op{O}$, by a factor of 20. This reflects that the hardest part of the optimization is identifying the basis in which the gate is diagonal. We will be using the real-part functional ($J_{T,\text{re}}$) to evaluate the success of $\Op{\rho}_i \rightarrow \Op{O}\Op{\rho}_i\Op{O}^\dagger$. Because $\Op{\rho}_1$ and $\Op{\rho}_3$ are mixed states, the Hilbert-Schmidt overlap will take values smaller than one in the optimal case. To compensate, we divide the weights by the purity of the respective states.
```
weights = np.array([20, 1, 1], dtype=np.float64)
weights *= len(weights) / np.sum(weights) # manual normalization
weights /= np.array([0.3, 1.0, 0.25]) # purities
```
The `krotov.gate_objectives` routine can initialize the density matrices $\Op{\rho}_1$, $\Op{\rho}_2$, $\Op{\rho}_3$ automatically, via the parameter `liouville_states_set`. Alternatively, we could also use the `'full'` basis of 16 matrices or the extended set of $d+1 = 5$ pure-state density matrices.
```
objectives = krotov.gate_objectives(
basis,
SQRTISWAP,
L,
liouville_states_set='3states',
weights=weights,
normalize_weights=False,
)
objectives
```
The use of `normalize_weights=False` is because we have included the purities in the weights, as discussed above.
## Dynamics under the Guess Pulse
For numerical efficiency, both for the analysis of the guess/optimized controls and for the optimization itself, we will use a stateful density matrix propagator.
A true physical measure for the success of the optimization is the "average gate fidelity". Evaluating the fidelity requires to simulate the dynamics of the full basis of Liouville space:
```
full_liouville_basis = [psi * phi.dag() for (psi, phi) in product(basis, basis)]
```
We propagate these under the guess control:
```
def propagate_guess(initial_state):
return objectives[0].mesolve(
tlist,
rho0=initial_state,
).states[-1]
full_states_T = parallel_map(
propagate_guess, values=full_liouville_basis,
)
print("F_avg = %.3f" % krotov.functionals.F_avg(full_states_T, basis, SQRTISWAP))
```
Note that we use $F_{T,\text{re}}$, not $F_{\text{avg}}$ to steer the optimization, as the Krotov boundary condition $\frac{\partial F_{\text{avg}}}{\partial \rho^\dagger}$ would be non-trivial.
Before doing the optimization, we can look at the population dynamics under the guess pulse. For this purpose we propagate the pure-state density matrices corresponding to the canonical logical basis in Hilbert space, and obtain the expectation values for the projection onto these same states:
```
rho00, rho01, rho10, rho11 = [qutip.ket2dm(psi) for psi in basis]
def propagate_guess_for_expvals(initial_state):
return objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
def plot_population_dynamics(dyn00, dyn01, dyn10, dyn11):
fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(16, 8))
axs = np.ndarray.flatten(axs)
labels = ['00', '01', '10', '11']
dyns = [dyn00, dyn01, dyn10, dyn11]
for (ax, dyn, title) in zip(axs, dyns, labels):
for (i, label) in enumerate(labels):
ax.plot(dyn.times, dyn.expect[i], label=label)
ax.legend()
ax.set_title(title)
plt.show(fig)
plot_population_dynamics(
*parallel_map(
propagate_guess_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
```
## Optimization
We now define the optimization parameters for the controls, the Krotov step size $\lambda_a$ and the update-shape that will ensure that the pulse switch-on and switch-off stays intact.
```
pulse_options = {
L[i][1]: dict(
lambda_a=1.0,
update_shape=partial(
krotov.shapes.flattop, t_start=0, t_stop=T, t_rise=(20 * ns))
)
for i in [1, 2]
}
```
Then we run the optimization. In the original example this requires on the order of 2000 iterations; here we only perform a handful of iterations (and continue from a pre-computed result below) to keep the runtime short.
```
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=3,
)
```
(this takes a while)...
```
dumpfile = "./3states_opt_result.dump"
if os.path.isfile(dumpfile):
opt_result = krotov.result.Result.load(dumpfile, objectives)
else:
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=5,
continue_from=opt_result
)
opt_result.dump(dumpfile)
opt_result
```
## Optimization result
```
optimized_control = opt_result.optimized_controls[0] + 1j * opt_result.optimized_controls[1]
plot_pulse(np.abs(optimized_control), tlist)
def propagate_opt(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
).states[-1]
opt_full_states_T = parallel_map(
propagate_opt, values=full_liouville_basis,
)
print("F_avg = %.3f" % krotov.functionals.F_avg(opt_full_states_T, basis, SQRTISWAP))
def propagate_opt_for_expvals(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
```
Plotting the population dynamics, we see the expected behavior for the $\sqrt{\text{iSWAP}}$ gate.
```
plot_population_dynamics(
*parallel_map(
propagate_opt_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
def plot_convergence(result):
fig, ax = plt.subplots()
ax.semilogy(result.iters, result.info_vals)
ax.set_xlabel('OCT iteration')
ax.set_ylabel(r'optimization error $J_{T, re}$')
plt.show(fig)
plot_convergence(opt_result)
```
# Working with Scikit-learn
This notebook shows how PySINDy objects interface with some useful tools from [Scikit-learn](https://scikit-learn.org/stable/).
## Setup
```
import numpy as np
from scipy.integrate import odeint
import pysindy as ps
```
Let's generate some training data from the [Lorenz system](https://en.wikipedia.org/wiki/Lorenz_system) with which to experiment.
```
def lorenz(z, t):
return [
10 * (z[1] - z[0]),
z[0] * (28 - z[2]) - z[1],
z[0] * z[1] - (8 / 3) * z[2]
]
# Generate training data
dt = .002
t_train = np.arange(0, 10, dt)
x0_train = [-8, 8, 27]
x_train = odeint(lorenz, x0_train, t_train)
# Evolve the Lorenz equations in time using a different initial condition
t_test = np.arange(0, 15, dt)
x0_test = np.array([8, 7, 15])
x_test = odeint(lorenz, x0_test, t_test)
```
## Cross-validation
PySINDy supports Scikit-learn-type cross-validation with a few caveats.
1. We must use **uniform timesteps** using the `t_default` parameter. This is because the `fit` and `score` methods of `SINDy` differ from those used in Scikit-learn in the sense that they both have an optional `t` parameter. Setting `t_default` is a workaround.
2. We have to be careful about the way we split up testing and training data during cross-validation. Because the `SINDy` object needs to differentiate the data, we need the training and test data to consist of sequential intervals of time. If we randomly sample the data, then the computed derivatives will be horribly inaccurate. Luckily, Scikit-learn has a `TimeSeriesSplit` object for such situations. If we really want to randomly sample the data during cross-validation, there is a way to do so. However, it's more complicated.
Note that we need to prepend `optimizer__`, `feature_library__`, or `differentiation_method__` to the parameter names.
### Cross-validation with TimeSeriesSplit
```
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import TimeSeriesSplit
model = ps.SINDy(t_default=dt)
param_grid = {
"optimizer__threshold": [0.001, 0.01, 0.1],
"optimizer__alpha": [0.01, 0.05, 0.1],
"feature_library": [ps.PolynomialLibrary(), ps.FourierLibrary()],
"differentiation_method__order": [1, 2]
}
search = GridSearchCV(
model,
param_grid,
cv=TimeSeriesSplit(n_splits=5)
)
search.fit(x_train)
print("Best parameters:", search.best_params_)
search.best_estimator_.print()
```
### Cross-validation without TimeSeriesSplit
If we want to use another cross-validation splitter, we'll need to (a) define a wrapper class which uses the argument "y" instead of "x_dot" and (b) precompute the derivatives. Note that (b) means that we will not be able to perform cross-validation on the parameters of the differentiation method.
```
from sklearn.metrics import r2_score
class SINDyCV(ps.SINDy):
def __init__(
self,
optimizer=None,
feature_library=None,
differentiation_method=None,
feature_names=None,
t_default=1,
discrete_time=False,
n_jobs=1
):
super(SINDyCV, self).__init__(
optimizer=optimizer,
feature_library=feature_library,
differentiation_method=differentiation_method,
feature_names=feature_names,
t_default=t_default,
discrete_time=discrete_time,
n_jobs=n_jobs
)
def fit(self, x, y, **kwargs):
return super(SINDyCV, self).fit(x, x_dot=y, **kwargs)
def score(
self,
x,
y,
t=None,
u=None,
multiple_trajectories=False,
metric=r2_score,
**metric_kws
):
return super(SINDyCV, self).score(
x,
x_dot=y,
t=t,
u=u,
multiple_trajectories=multiple_trajectories,
metric=metric,
**metric_kws
)
from sklearn.model_selection import ShuffleSplit
model = SINDyCV()
x_dot = model.differentiate(x_train, t=t_train)
param_grid = {
"optimizer__threshold": [0.002, 0.01, 0.1],
"optimizer__alpha": [0.01, 0.05, 0.1],
"feature_library__degree": [1, 2, 3],
}
search = GridSearchCV(
model,
param_grid,
cv=ShuffleSplit(n_splits=3, test_size=0.25)
)
search.fit(x_train, y=x_dot)
print("Best parameters:", search.best_params_)
search.best_estimator_.print()
```
## Sparse optimizers
Any of Scikit-learn's [linear models](https://scikit-learn.org/stable/modules/linear_model.html) can be used for the `optimizer` parameter of a `SINDy` object, though we only recommend using those designed for sparse regression.
In the examples below we set `fit_intercept` to `False` since the default feature library (polynomials of degree up to two) already includes constant functions.
```
from sklearn.linear_model import ElasticNet
model = ps.SINDy(optimizer=ElasticNet(l1_ratio=0.9, fit_intercept=False), t_default=dt)
model.fit(x_train)
model.print()
from sklearn.linear_model import OrthogonalMatchingPursuit
model = ps.SINDy(
optimizer=OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False),
t_default=dt
)
model.fit(x_train)
model.print()
```
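As a further illustration, here is a minimal sketch using Scikit-learn's `Lasso` as the optimizer, assuming the same `x_train` and `dt` defined above; the `alpha` and `max_iter` values are illustrative only, not tuned:
```
from sklearn.linear_model import Lasso

# Plug a Scikit-learn sparse regressor directly into SINDy
model = ps.SINDy(
    optimizer=Lasso(alpha=0.1, max_iter=5000, fit_intercept=False),
    t_default=dt
)
model.fit(x_train)
model.print()
```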
# Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Hist, Pmf, Suite, Beta
import thinkplot
```
## Unreliable observation
Suppose that instead of observing coin tosses directly, you measure the outcome using an instrument that is not always correct. Specifically, suppose each measurement is reported correctly with probability `y`; otherwise an actual heads is reported as tails, or an actual tails as heads.
Write a class that estimates the bias of a coin given a series of outcomes and the value of `y`.
How does the spread of the posterior distribution depend on `y`?
```
# Solution
# Here's a class that models an unreliable coin
class UnreliableCoin(Suite):
def __init__(self, prior, y):
"""
prior: seq or map
y: probability of accurate measurement
"""
super().__init__(prior)
self.y = y
def Likelihood(self, data, hypo):
"""
data: outcome of unreliable measurement, either 'H' or 'T'
hypo: probability of heads, 0-100
"""
x = hypo / 100
y = self.y
if data == 'H':
return x*y + (1-x)*(1-y)
else:
return x*(1-y) + (1-x)*y
# Solution
# Now let's initialize an UnreliableCoin with `y=0.9`:
prior = range(0, 101)
suite = UnreliableCoin(prior, y=0.9)
thinkplot.Pdf(suite)
# Solution
# And update with 3 heads and 7 tails.
for outcome in 'HHHTTTTTTT':
suite.Update(outcome)
thinkplot.Pdf(suite)
# Solution
# Now let's try it out with different values of `y`:
def plot_posterior(y, data):
prior = range(0, 101)
suite = UnreliableCoin(prior, y=y)
for outcome in data:
suite.Update(outcome)
thinkplot.Pdf(suite, label='y=%g' % y)
# Solution
# The posterior distribution gets wider as the measurement gets less reliable.
data = 'HHHTTTTTTT'
plot_posterior(1, data)
plot_posterior(0.8, data)
plot_posterior(0.6, data)
thinkplot.decorate(xlabel='Probability of heads (x)',
ylabel='PMF')
# Solution
# At `y=0.5`, the measurement provides no information, so the posterior equals the prior:
plot_posterior(0.5, data)
thinkplot.decorate(xlabel='Probability of heads (x)',
ylabel='PMF')
# Solution
# As the coin gets less reliable (below `y=0.5`) the distribution gets narrower again.
# In fact, a measurement with `y=0` is just as good as one with `y=1`,
# provided that we know what `y` is.
plot_posterior(0.4, data)
plot_posterior(0.2, data)
plot_posterior(0.0, data)
thinkplot.decorate(xlabel='Probability of heads (x)',
ylabel='PMF')
```
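To answer the question about spread more quantitatively, we could compute the posterior standard deviation for several values of `y`. This is a small sketch that reuses the `UnreliableCoin` class and the `data` string defined above, and assumes the `Suite`/`Pmf` classes expose the usual `Std` method:
```
# Posterior standard deviation as a function of measurement reliability y
def posterior_std(y, data):
    suite = UnreliableCoin(range(0, 101), y=y)
    for outcome in data:
        suite.Update(outcome)
    return suite.Std()

for y in [1.0, 0.8, 0.6, 0.5, 0.4, 0.2, 0.0]:
    print(y, posterior_std(y, data))
```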
## Classify Radio Signals from Space using Keras
In this experiment, we attempt to classify radio signals from space.
The dataset has been provided by SETI; details can be found here:
https://github.com/setiQuest/ML4SETI/blob/master/tutorials/Step_1_Get_Data.ipynb
## Import necessary libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
import seaborn as sns
import tensorflow as tf
%matplotlib inline
# Mount google drive to get data
from google.colab import drive
drive.mount('/content/drive')
!ls -l '/content/drive/My Drive/datasets/seti'
```
## Load data
```
# Load dataset from CSV
train_images = pd.read_csv('/content/drive/My Drive/datasets/seti/train/images.csv', header=None)
train_labels = pd.read_csv('/content/drive/My Drive/datasets/seti/train/labels.csv', header=None)
val_images = pd.read_csv('/content/drive/My Drive/datasets/seti/validation/images.csv', header=None)
val_labels = pd.read_csv('/content/drive/My Drive/datasets/seti/validation/labels.csv', header=None)
train_images.head()
train_labels.head()
# Check shape of train_images, train_labels, val_images and val_labels
print("train_images shape:", train_images.shape)
print("train_labels shape:", train_labels.shape)
print("val_images shape:", val_images.shape)
print("val_labels shape:", val_labels.shape)
# Reshape the image sets
# Get the values as numpy array
x_train = train_images.values.reshape(3200, 64, 128, 1)
x_val = val_images.values.reshape(800, 64, 128, 1)
y_train = train_labels.values
y_val = val_labels.values
```
## Plot 2D spectrogram data
```
plt.figure(figsize=(15,15))
for i in range(1,4):
plt.subplot(1,3,i)
img = np.squeeze(x_train[np.random.randint(x_train.shape[0])])
plt.imshow(img, cmap='gray')
```
## Preprocess data
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen_train = ImageDataGenerator(horizontal_flip=True)
datagen_train.fit(x_train)
datagen_val = ImageDataGenerator(horizontal_flip=True)
datagen_val.fit(x_val)
```
## Build model
```
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from tensorflow.keras.layers import BatchNormalization, Dropout, Activation
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
# Initialize model
model = Sequential()
# 1st CNN block
model.add(Conv2D(32, (5,5), padding='same', input_shape=(64,128,1)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# 2nd CNN block
model.add(Conv2D(64, (5,5), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# Flatten CNN output to feed to FC layer
model.add(Flatten())
# Fully connected layer
model.add(Dense(1024))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.4))
# Softmax layer
model.add(Dense(4, activation='softmax'))
```
## Compile the model
```
# Schedule learning rate decay
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
0.005,
decay_steps=5,
decay_rate=0.9,
staircase=True)
model.compile(optimizer=Adam(lr_schedule), loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```
## Train the model
```
batch_size = 32
history = model.fit(
datagen_train.flow(x_train, y_train, batch_size=batch_size, shuffle=True),
steps_per_epoch=len(x_train)//batch_size,
validation_data = datagen_val.flow(x_val, y_val, batch_size=batch_size, shuffle=True),
validation_steps = len(x_val)//batch_size,
epochs=10,
)
```
## Evaluation
```
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['training', 'validation'])
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['training', 'validation'])
plt.show()
model.evaluate(x_val, y_val)
y_true = np.argmax(y_val, 1)
y_pred = np.argmax(model.predict(x_val), 1)
print(metrics.classification_report(y_true, y_pred))
print("Classification accuracy: %.2f" % metrics.accuracy_score(y_true, y_pred))
plt.figure(figsize=(8,8))
labels = ["squiggle", "narrowband", "noise", "narrowbanddrd"]
ax = plt.subplot()
sns.heatmap(metrics.confusion_matrix(y_true, y_pred, normalize='true'), annot=True, ax=ax, cmap=plt.cm.Blues)
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels)
```
## Conclusions
The winning submission used a ResNet-based architecture (WRN) on the primary (full) dataset and achieved a classification accuracy of 94.99%.
Reference: https://github.com/sgrvinod/Wide-Residual-Nets-for-SETI
Here we have used a simple CNN-based model. The model did not learn much after the first 2 epochs (accuracy is around 74% after 10 epochs).
Reasons:
* The signals in the dataset have a noise component added to them.
* Even though the dataset used here is simpler than the other datasets provided by SETI, it is still challenging to extract features with a simple model like ours, so this is essentially an underfitting problem.
Possible improvements:
* Add additional CNN blocks and vary the filter sizes (e.g. 7x7, 5x5) to learn more features.
* Add additional fully connected layers.
* We have used the Adam optimizer here, which can have convergence issues; we could switch to SGD and see what happens (a sketch is shown below).
* Use a different architecture altogether.
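As a quick sketch of the optimizer swap suggested above (the learning rate and momentum values are illustrative, not tuned), the same model could be recompiled with SGD and retrained:
```
from tensorflow.keras.optimizers import SGD

# Recompile the same architecture with SGD + Nesterov momentum instead of Adam
model.compile(
    optimizer=SGD(learning_rate=0.005, momentum=0.9, nesterov=True),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
```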
## Linear Regression with PyTorch
#### Part 2 of "PyTorch: Zero to GANs"
*This post is the second in a series of tutorials on building deep learning models with PyTorch, an open source neural networks library developed and maintained by Facebook. Check out the full series:*
1. [PyTorch Basics: Tensors & Gradients](https://jovian.ml/aakashns/01-pytorch-basics)
2. [Linear Regression & Gradient Descent](https://jovian.ml/aakashns/02-linear-regression)
3. [Image Classification using Logistic Regression](https://jovian.ml/aakashns/03-logistic-regression)
4. [Training Deep Neural Networks on a GPU](https://jovian.ml/aakashns/04-feedforward-nn)
5. [Image Classification using Convolutional Neural Networks](https://jovian.ml/aakashns/05-cifar10-cnn)
6. [Data Augmentation, Regularization and ResNets](https://jovian.ml/aakashns/05b-cifar10-resnet)
7. [Generating Images using Generative Adversarial Networks](https://jovian.ml/aakashns/06-mnist-gan)
Continuing where the [previous tutorial](https://jvn.io/aakashns/3143ceb92b4f4cbbb4f30e203580b77b) left off, we'll discuss one of the foundational algorithms of machine learning in this post: *Linear regression*. We'll create a model that predicts crop yields for apples and oranges (*target variables*) by looking at the average temperature, rainfall and humidity (*input variables or features*) in a region. Here's the training data:

In a linear regression model, each target variable is estimated to be a weighted sum of the input variables, offset by some constant known as a bias:
```
yield_apple = w11 * temp + w12 * rainfall + w13 * humidity + b1
yield_orange = w21 * temp + w22 * rainfall + w23 * humidity + b2
```
Visually, it means that the yield of apples is a linear or planar function of temperature, rainfall and humidity:

The *learning* part of linear regression is to figure out a set of weights `w11, w12,... w23, b1 & b2` by looking at the training data, to make accurate predictions for new data (i.e. to predict the yields for apples and oranges in a new region using the average temperature, rainfall and humidity). This is done by adjusting the weights slightly many times to make better predictions, using an optimization technique called *gradient descent*.
## System setup
This tutorial takes a code-first approach towards learning PyTorch, and you should try to follow along by running and experimenting with the code yourself. The easiest way to start executing this notebook is to click the **"Run"** button at the top of this page, and select **"Run on Binder"**. This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks.
**NOTE**: *If you're running this notebook on Binder, please skip ahead to the next section.*
### Running on your computer locally
You can clone this notebook hosted on [Jovian.ml](https://www.jovian.ml), install the required dependencies, and start Jupyter by running the following commands on the terminal:
```bash
pip install jovian --upgrade # Install the jovian library
jovian clone aakashns/02-linear-regression # Download notebook & dependencies
cd 02-linear-regression # Enter the created directory
jovian install # Install the dependencies
conda activate 02-linear-regression # Activate virtual environment
jupyter notebook # Start Jupyter
```
On older versions of conda, you might need to run `source activate 02-linear-regression` to activate the environment. For a more detailed explanation of the above steps, check out the *System setup* section in the [previous notebook](https://jovian.ml/aakashns/01-pytorch-basics).
We begin by importing Numpy and PyTorch:
```
# Uncomment the command below if Numpy or PyTorch is not installed
# !conda install numpy pytorch cpuonly -c pytorch -y
import numpy as np
import torch
```
## Training data
The training data can be represented using 2 matrices: `inputs` and `targets`, each with one row per observation, and one column per variable.
```
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70],
[81, 101],
[119, 133],
[22, 37],
[103, 119]], dtype='float32')
```
We've separated the input and target variables, because we'll operate on them separately. Also, we've created numpy arrays, because this is typically how you would work with training data: read some CSV files as numpy arrays, do some processing, and then convert them to PyTorch tensors as follows:
```
# Convert inputs and targets to tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
print(inputs)
print(targets)
```
## Linear regression model from scratch
The weights and biases (`w11, w12,... w23, b1 & b2`) can also be represented as matrices, initialized as random values. The first row of `w` and the first element of `b` are used to predict the first target variable i.e. yield of apples, and similarly the second for oranges.
```
# Weights and biases
w = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)
print(w)
print(b)
```
`torch.randn` creates a tensor with the given shape, with elements picked randomly from a [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) with mean 0 and standard deviation 1.
Our *model* is simply a function that performs a matrix multiplication of the `inputs` and the weights `w` (transposed) and adds the bias `b` (replicated for each observation).

We can define the model as follows:
```
def model(x):
return x @ w.t() + b
```
`@` represents matrix multiplication in PyTorch, and the `.t` method returns the transpose of a tensor.
The matrix obtained by passing the input data into the model is a set of predictions for the target variables.
```
# Generate predictions
preds = model(inputs)
print(preds)
```
Let's compare the predictions of our model with the actual targets.
```
# Compare with targets
print(targets)
```
You can see that there's a huge difference between the predictions of our model, and the actual values of the target variables. Obviously, this is because we've initialized our model with random weights and biases, and we can't expect it to *just work*.
## Loss function
Before we improve our model, we need a way to evaluate how well our model is performing. We can compare the model's predictions with the actual targets, using the following method:
* Calculate the difference between the two matrices (`preds` and `targets`).
* Square all elements of the difference matrix to remove negative values.
* Calculate the average of the elements in the resulting matrix.
The result is a single number, known as the **mean squared error** (MSE).
```
# MSE loss
def mse(t1, t2):
diff = t1 - t2
return torch.sum(diff * diff) / diff.numel()
```
`torch.sum` returns the sum of all the elements in a tensor, and the `.numel` method returns the number of elements in a tensor. Let's compute the mean squared error for the current predictions of our model.
```
# Compute loss
loss = mse(preds, targets)
print(loss)
```
Here’s how we can interpret the result: *On average, each element in the prediction differs from the actual target by about 145 (square root of the loss 20834)*. And that’s pretty bad, considering the numbers we are trying to predict are themselves in the range 50–200. Also, the result is called the *loss*, because it indicates how bad the model is at predicting the target variables. The lower the loss, the better the model.
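To see that number directly, we can take the square root of the loss, which gives the average prediction error in the same units as the targets:
```
# Root of the mean squared error (RMSE)
print(torch.sqrt(loss))
```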
## Compute gradients
With PyTorch, we can automatically compute the gradient or derivative of the loss w.r.t. to the weights and biases, because they have `requires_grad` set to `True`.
```
# Compute gradients
loss.backward()
```
The gradients are stored in the `.grad` property of the respective tensors. Note that the derivative of the loss w.r.t. the weights matrix is itself a matrix, with the same dimensions.
```
# Gradients for weights
print(w)
print(w.grad)
```
The loss is a [quadratic function](https://en.wikipedia.org/wiki/Quadratic_function) of our weights and biases, and our objective is to find the set of weights where the loss is the lowest. If we plot a graph of the loss w.r.t any individual weight or bias element, it will look like the figure shown below. A key insight from calculus is that the gradient indicates the rate of change of the loss, or the [slope](https://en.wikipedia.org/wiki/Slope) of the loss function w.r.t. the weights and biases.
If a gradient element is **positive**:
* **increasing** the element's value slightly will **increase** the loss.
* **decreasing** the element's value slightly will **decrease** the loss

If a gradient element is **negative**:
* **increasing** the element's value slightly will **decrease** the loss.
* **decreasing** the element's value slightly will **increase** the loss.

The increase or decrease in loss by changing a weight element is proportional to the value of the gradient of the loss w.r.t. that element. This forms the basis for the optimization algorithm that we'll use to improve our model.
Before we proceed, we reset the gradients to zero by calling the `.zero_()` method. We need to do this because PyTorch accumulates gradients, i.e. the next time we call `.backward` on the loss, the new gradient values will get added to the existing gradient values, which may lead to unexpected results.
```
w.grad.zero_()
b.grad.zero_()
print(w.grad)
print(b.grad)
```
## Adjust weights and biases using gradient descent
We'll reduce the loss and improve our model using the gradient descent optimization algorithm, which has the following steps:
1. Generate predictions
2. Calculate the loss
3. Compute gradients w.r.t the weights and biases
4. Adjust the weights by subtracting a small quantity proportional to the gradient
5. Reset the gradients to zero
Let's implement the above step by step.
```
# Generate predictions
preds = model(inputs)
print(preds)
```
Note that the predictions are the same as before, since we haven't made any changes to our model. The same holds true for the loss and gradients.
```
# Calculate the loss
loss = mse(preds, targets)
print(loss)
# Compute gradients
loss.backward()
print(w.grad)
print(b.grad)
```
Finally, we update the weights and biases using the gradients computed above.
```
# Adjust weights & reset gradients
with torch.no_grad():
w -= w.grad * 1e-5
b -= b.grad * 1e-5
w.grad.zero_()
b.grad.zero_()
```
A few things to note above:
* We use `torch.no_grad` to indicate to PyTorch that we shouldn't track, calculate or modify gradients while updating the weights and biases.
* We multiply the gradients with a really small number (`10^-5` in this case), to ensure that we don't modify the weights by a really large amount, since we only want to take a small step in the downhill direction of the gradient. This number is called the *learning rate* of the algorithm.
* After we have updated the weights, we reset the gradients back to zero, to avoid affecting any future computations.
Let's take a look at the new weights and biases.
```
print(w)
print(b)
```
With the new weights and biases, the model should have lower loss.
```
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
```
We have already achieved a significant reduction in the loss, simply by adjusting the weights and biases slightly using gradient descent.
## Train for multiple epochs
To reduce the loss further, we can repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch. Let's train the model for 100 epochs.
```
# Train for 100 epochs
for i in range(100):
preds = model(inputs)
loss = mse(preds, targets)
loss.backward()
with torch.no_grad():
w -= w.grad * 1e-5
b -= b.grad * 1e-5
w.grad.zero_()
b.grad.zero_()
```
Once again, let's verify that the loss is now lower:
```
# Calculate loss
preds = model(inputs)
loss = mse(preds, targets)
print(loss)
```
As you can see, the loss is now much lower than what we started out with. Let's look at the model's predictions and compare them with the targets.
```
# Predictions
preds
# Targets
targets
```
The predictions are now quite close to the target variables, and we can get even better results by training for a few more epochs.
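For example, repeating the same loop for another 100 epochs (with the same learning rate) should reduce the loss further:
```
# Train for another 100 epochs and check the loss again
for i in range(100):
    preds = model(inputs)
    loss = mse(preds, targets)
    loss.backward()
    with torch.no_grad():
        w -= w.grad * 1e-5
        b -= b.grad * 1e-5
        w.grad.zero_()
        b.grad.zero_()
print(mse(model(inputs), targets))
```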
At this point, we can save our notebook and upload it to [Jovian.ml](https://www.jovian.ml) for future reference and sharing.
```
!pip install jovian --upgrade -q
import jovian
jovian.commit()
```
`jovian.commit` uploads the notebook to [Jovian.ml](https://www.jovian.ml), captures the Python environment and creates a sharable link for the notebook. You can use this link to share your work and let anyone reproduce it easily with the `jovian clone` command. Jovian also includes a powerful commenting interface, so you (and others) can discuss & comment on specific parts of your notebook:

## Linear regression using PyTorch built-ins
The model and training process above were implemented using basic matrix operations. But since this is such a common pattern, PyTorch has several built-in functions and classes to make it easy to create and train models.
Let's begin by importing the `torch.nn` package from PyTorch, which contains utility classes for building neural networks.
```
import torch.nn as nn
```
As before, we represent the inputs and targets as matrices.
```
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58],
[102, 43, 37], [69, 96, 70], [73, 67, 43],
[91, 88, 64], [87, 134, 58], [102, 43, 37],
[69, 96, 70], [73, 67, 43], [91, 88, 64],
[87, 134, 58], [102, 43, 37], [69, 96, 70]],
dtype='float32')
# Targets (apples, oranges)
targets = np.array([[56, 70], [81, 101], [119, 133],
[22, 37], [103, 119], [56, 70],
[81, 101], [119, 133], [22, 37],
[103, 119], [56, 70], [81, 101],
[119, 133], [22, 37], [103, 119]],
dtype='float32')
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
inputs
```
We are using 15 training examples this time, to illustrate how to work with large datasets in small batches.
## Dataset and DataLoader
We'll create a `TensorDataset`, which allows access to rows from `inputs` and `targets` as tuples, and provides standard APIs for working with many different types of datasets in PyTorch.
```
from torch.utils.data import TensorDataset
# Define dataset
train_ds = TensorDataset(inputs, targets)
train_ds[0:3]
```
The `TensorDataset` allows us to access a small section of the training data using the array indexing notation (`[0:3]` in the above code). It returns a tuple (or pair), in which the first element contains the input variables for the selected rows, and the second contains the targets.
We'll also create a `DataLoader`, which can split the data into batches of a predefined size while training. It also provides other utilities like shuffling and random sampling of the data.
```
from torch.utils.data import DataLoader
# Define data loader
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)
```
The data loader is typically used in a `for-in` loop. Let's look at an example.
```
for xb, yb in train_dl:
print(xb)
print(yb)
break
```
In each iteration, the data loader returns one batch of data, with the given batch size. If `shuffle` is set to `True`, it shuffles the training data before creating batches. Shuffling helps randomize the input to the optimization algorithm, which can lead to faster reduction in the loss.
## nn.Linear
Instead of initializing the weights & biases manually, we can define the model using the `nn.Linear` class from PyTorch, which does it automatically.
```
# Define model
model = nn.Linear(3, 2)
print(model.weight)
print(model.bias)
```
PyTorch models also have a helpful `.parameters` method, which returns a list containing all the weights and bias matrices present in the model. For our linear regression model, we have one weight matrix and one bias matrix.
```
# Parameters
list(model.parameters())
```
We can use the model to generate predictions in the exact same way as before:
```
# Generate predictions
preds = model(inputs)
preds
```
## Loss Function
Instead of defining a loss function manually, we can use the built-in loss function `mse_loss`.
```
# Import nn.functional
import torch.nn.functional as F
```
The `nn.functional` package contains many useful loss functions and several other utilities.
```
# Define loss function
loss_fn = F.mse_loss
```
Let's compute the loss for the current predictions of our model.
```
loss = loss_fn(model(inputs), targets)
print(loss)
```
## Optimizer
Instead of manually manipulating the model's weights & biases using gradients, we can use the optimizer `optim.SGD`. SGD stands for `stochastic gradient descent`. It is called `stochastic` because samples are selected in batches (often with random shuffling) instead of as a single group.
```
# Define optimizer
opt = torch.optim.SGD(model.parameters(), lr=1e-5)
```
Note that `model.parameters()` is passed as an argument to `optim.SGD`, so that the optimizer knows which matrices should be modified during the update step. Also, we can specify a learning rate which controls the amount by which the parameters are modified.
## Train the model
We are now ready to train the model. We'll follow the exact same process to implement gradient descent:
1. Generate predictions
2. Calculate the loss
3. Compute gradients w.r.t the weights and biases
4. Adjust the weights by subtracting a small quantity proportional to the gradient
5. Reset the gradients to zero
The only change is that we'll work with batches of data, instead of processing the entire training data in every iteration. Let's define a utility function `fit` which trains the model for a given number of epochs.
```
# Utility function to train the model
def fit(num_epochs, model, loss_fn, opt, train_dl):
# Repeat for given number of epochs
for epoch in range(num_epochs):
# Train with batches of data
for xb,yb in train_dl:
# 1. Generate predictions
pred = model(xb)
# 2. Calculate loss
loss = loss_fn(pred, yb)
# 3. Compute gradients
loss.backward()
# 4. Update parameters using gradients
opt.step()
# 5. Reset the gradients to zero
opt.zero_grad()
# Print the progress
if (epoch+1) % 10 == 0:
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
```
Some things to note above:
* We use the data loader defined earlier to get batches of data for every iteration.
* Instead of updating parameters (weights and biases) manually, we use `opt.step` to perform the update, and `opt.zero_grad` to reset the gradients to zero.
* We've also added a log statement which prints the loss from the last batch of data for every 10th epoch, to track the progress of training. `loss.item` returns the actual value stored in the loss tensor.
Let's train the model for 100 epochs.
```
fit(100, model, loss_fn, opt, train_dl)
```
Let's generate predictions using our model and verify that they're close to our targets.
```
# Generate predictions
preds = model(inputs)
preds
# Compare with targets
targets
```
Indeed, the predictions are quite close to our targets, and now we have a fairly good model to predict crop yields for apples and oranges by looking at the average temperature, rainfall and humidity in a region.
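For instance, to predict the yields for a new region we can pass a fresh input tensor through the trained model; the temperature, rainfall and humidity values below are made up purely for illustration:
```
# Predict apple and orange yields for a hypothetical region
# (temp=75, rainfall=63, humidity=44)
new_region = torch.tensor([[75., 63., 44.]])
model(new_region)
```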
## Commit and update the notebook
As a final step, we can record a new version of the notebook using the `jovian` library.
```
import jovian
jovian.commit()
```
Note that running `jovian.commit` a second time records a new version of your existing notebook. With Jovian.ml, you can avoid creating copies of your Jupyter notebooks and keep versions organized. Jovian also provides a visual diff ([example](https://jovian.ml/aakashns/keras-mnist-jovian/diff?base=8&remote=2)) so you can inspect what has changed between different versions:

## Further Reading
We've covered a lot of ground in this tutorial, including *linear regression* and the *gradient descent* optimization algorithm. Here are a few resources if you'd like to dig deeper into these topics:
* For a more detailed explanation of derivatives and gradient descent, see [these notes from a Udacity course](https://storage.googleapis.com/supplemental_media/udacityu/315142919/Gradient%20Descent.pdf).
* For an animated visualization of how linear regression works, [see this post](https://hackernoon.com/visualizing-linear-regression-with-pytorch-9261f49edb09).
* For a more mathematical treatment of matrix calculus, linear regression and gradient descent, you should check out [Andrew Ng's excellent course notes](https://github.com/Cleo-Stanford-CS/CS229_Notes/blob/master/lectures/cs229-notes1.pdf) from CS229 at Stanford University.
* To practice and test your skills, you can participate in the [Boston Housing Price Prediction](https://www.kaggle.com/c/boston-housing) competition on Kaggle, a website that hosts data science competitions.
With this, we complete our discussion of linear regression in PyTorch, and we’re ready to move on to the next topic: *Logistic regression*.
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/01_MNIST_TPU_Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
<table><tr><td><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/keras-tensorflow-tpu300px.png" width="300" alt="Keras+Tensorflow+Cloud TPU"></td></tr></table>
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Dataset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
<h3><a href="https://cloud.google.com/gpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/gpu-hexagon.png" width="50"></a> Train on GPU or TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
1. Select a GPU or TPU backend (Runtime > Change runtime type)
1. Runtime > Run All (Watch out: the "Colab-only auth" cell requires user input)
<h3><a href="https://cloud.google.com/ml-engine/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/mlengine-hexagon.png" width="50"></a> Deploy to ML Engine</h3>
1. At the bottom of this notebook you can deploy your trained model to ML Engine for a serverless, autoscaled, REST API experience. You will need a GCP project and a GCS bucket for this last part.
TPUs are located in Google Cloud; for optimal performance, they read data directly from Google Cloud Storage (GCS).
### Parameters
```
BATCH_SIZE = 128 # On TPU, this will be the per-core batch size. A Cloud TPU has 8 cores so the global TPU batch size is 1024
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
```
### Imports
```
import os, re, math, json, shutil, pprint
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
```
### Colab-only auth for this notebook and the TPU
```
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
```
### tf.data.Dataset: parse files and prepare training and validation datasets
Please read the [best practices for building](https://www.tensorflow.org/guide/performance/datasets) input pipelines with tf.data.Dataset
```
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# For TPU, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
```
### Let's have a look at the data
```
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
```
### Keras model: 3 convolutional layers, 2 dense layers
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: [Tensorflow and deep learning without a PhD](https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd/#featured-code-sample)
```
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs (with a batch size of 32)
l = tf.keras.layers
model = tf.keras.Sequential(
[
l.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
l.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm
l.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before "relu"
l.Activation('relu'), # activation after batch norm
l.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2),
l.BatchNormalization(scale=False, center=True),
l.Activation('relu'),
l.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),
l.BatchNormalization(scale=False, center=True),
l.Activation('relu'),
l.Flatten(),
l.Dense(200, use_bias=False),
l.BatchNormalization(scale=False, center=True),
l.Activation('relu'),
l.Dropout(0.5), # Dropout on dense layer only
l.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
# print model layers
model.summary()
# set up learning rate decay
lr_decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 0.0001 + 0.02 * math.pow(0.5, 1+epoch), verbose=True)
```
### Train and validate the model
```
EPOCHS = 10
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
tpu = None
trained_model = model
# Counting steps and batches on TPU: the tpu.keras_to_tpu_model API regards the batch size of the input dataset
# as the per-core batch size. The effective batch size is 8x more because Cloud TPUs have 8 cores. It increments
# the step by +8 every time a global batch (8 per-core batches) is processed. Therefore batch size and steps_per_epoch
# settings can stay as they are for TPU training. The training will just go faster.
# Warning: this might change in the final version of the Keras/TPU API.
try: # TPU detection
tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # Picks up a connected TPU on Google's Colab, ML Engine, Kubernetes and Deep Learning VMs accessed through the 'ctpu up' utility
#tpu = tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME') # If auto-detection does not work, you can pass the name of the TPU explicitly (tip: on a VM created with "ctpu up" the TPU has the same name as the VM)
except ValueError:
print('Training on GPU/CPU')
if tpu: # TPU training
strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu)
trained_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy=strategy)
# Work in progress: reading directly from dataset object not yet implemented
# for Keras/TPU. Keras/TPU needs a function that returns a dataset.
history = trained_model.fit(training_input_fn, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_input_fn, validation_steps=1, callbacks=[lr_decay])
else: # GPU/CPU training
history = trained_model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[lr_decay])
```
### Visualize training and validation curves
```
print(history.history.keys())
display_training_curves(history.history['acc'], history.history['val_acc'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
```
### Visualize predictions
```
# recognize digits from local fonts
probabilities = trained_model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = trained_model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
```
## Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
### Configuration
```
PROJECT = "" #@param {type:"string"}
BUCKET = "gs://" #@param {type:"string", default:"jddj"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "colabmnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', BUCKET), 'For this part, you need a GCS bucket. Head to http://console.cloud.google.com/storage and create one.'
```
### Export the model for serving from ML Engine
```
class ServingInput(tf.keras.layers.Layer):
# the important detail in this boilerplate code is "trainable=False"
def __init__(self, name, dtype, batch_input_shape=None):
super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)
def get_config(self):
return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }
def call(self, inputs):
# When the deployed model is called through its REST API,
# the JSON payload is parsed automatically, transformed into
# a tensor and passed to this input layer. You can perform
# additional transformations, such as decoding JPEGs for example,
# before sending the data to your model. However, you can only
# use tf.xxxx operations.
return inputs
# little wrinkle: must copy the model from TPU to CPU manually. This is a temporary workaround.
tf_logging.set_verbosity(tf_logging.INFO)
restored_model = model
restored_model.set_weights(trained_model.get_weights()) # this copies the weights from the TPU; it does nothing on GPU
tf_logging.set_verbosity(tf_logging.WARN)
# add the serving input layer
serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, 28*28)))
serving_model.add(restored_model)
export_path = tf.contrib.saved_model.save_keras_model(serving_model, os.path.join(BUCKET, 'keras_export')) # export the model to your bucket
export_path = export_path.decode('utf-8')
print("Model exported to: ", export_path)
```
### Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
```
# Create the model
if NEW_MODEL:
!gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
```
### Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
```
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because the ServingInput layer was named "serving". Keras appends "_input"
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
probabilities = np.stack([json.loads(p) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
predictions = np.argmax(probabilities, axis=1)
display_top_unrecognized(digits, predictions, labels, N, 100//N)
```
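For reference, the same endpoint can also be called without gcloud. The sketch below uses the `requests` library and assumes the legacy ML Engine REST URL format and that a local, authenticated gcloud installation can supply an access token; adapt it to your own auth setup if needed:
```
import subprocess, json, requests

# Access token from the local gcloud installation (assumption: gcloud is installed and authenticated)
token = subprocess.check_output(["gcloud", "auth", "print-access-token"]).decode().strip()

# Assumed URL format for ML Engine online prediction
url = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict".format(
    PROJECT, MODEL_NAME, MODEL_VERSION)

# Online prediction expects {"instances": [...]}, one object per digit
payload = {"instances": [{"serving_input": digit.tolist()} for digit in digits[:5]]}
response = requests.post(url, json=payload, headers={"Authorization": "Bearer " + token})
print(json.dumps(response.json(), indent=2)[:1000])
```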
## License
---
author: Martin Gorner<br>
twitter: @martin_gorner
---
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
This is not an official Google product but sample code provided for an educational purpose
---
**Universidad de Costa Rica** | Escuela de Ingeniería Eléctrica
*IE0405 - Modelos Probabilísticos de Señales y Sistemas*
### `PyX` - A series of Python tutorials for data analysis
# `Py5` - *Curve fitting*
> Models that describe a phenomenon and their parameters can be obtained from a sample of data. Because of the large number of available probability models, it is often necessary to compare the goodness of fit of many of them.
*Fabián Abarca Calderón* \
*Jonathan Rojas Sibaja*
---
## Model fitting
Model fitting is widely used to obtain a mathematical model that characterizes the behavior of a given system based on the experimental data obtained. This model should also predict other experimental measurements obtained when the experiment is reproduced.
### Maximum likelihood estimation (MLE)
(This is lower priority.) Maximum likelihood estimation (**MLE**) is...
---
## 5.1 - With the `numpy` module
To start, the `polyfit()` function of the `numpy` library can be used to fit experimental data to polynomials of any order. This function returns the parameters of the line for a linear model of the form:
$$
f(x) = mx + b
$$
This is for the case of a polynomial of degree 1. An example using this method is the following:
```
from numpy import *
import matplotlib.pyplot as plt
# Experimental data
x = array([ 0., 1., 2., 3., 4.])
y = array([ 10.2 , 12.1, 15.5 , 18.3, 20.6 ])
# Fit to a straight line (degree-1 polynomial)
p = polyfit(x, y, 1)
# Once the parameters of the fitted line are known,
# they can be used to plot the fitted line.
y_ajuste = p[0]*x + p[1]
# Plot the experimental data
p_datos, = plt.plot(x, y, 'b.')
# Plot the fitted line
p_ajuste, = plt.plot(x, y_ajuste, 'r-')
plt.title('Least-squares linear fit')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.legend(('Experimental data', 'Linear fit'), loc="upper left")
```
For other types of regression, the degree of the polynomial must be increased. For example, the case of a polynomial regression is shown below:
```
import numpy
import matplotlib.pyplot as plt
# First, create the vectors that define the data points
x = [1,2,3,5,6,7,8,9,10,12,13,14,15,16,18,19,21,22]
y = [100,90,80,60,60,55,60,65,70,70,75,76,78,79,90,99,99,100]
# This method lets us create a polynomial model
mimodelo = numpy.poly1d(numpy.polyfit(x, y, 3))
# This determines how the line will be displayed: it starts at 1
# and ends at 22
milinea = numpy.linspace(1,22,100)
# Finally, plot the data and the polynomial
# regression curve
plt.scatter(x,y)
plt.plot(milinea, mimodelo(milinea))
plt.show()
```
Once the best-fit curve has been drawn, the value at a given point can be obtained by evaluating the curve at that point. For example, to obtain the value corresponding to x = 17:
```
valor = mimodelo(17)
print(valor)
```
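To quantify how well the polynomial tracks the data, the coefficient of determination R² can be computed by hand from the residuals. This is a quick sketch that reuses the `x`, `y` and `mimodelo` objects defined above:
```
# Coefficient of determination R^2 computed from the residuals
y_pred = mimodelo(numpy.array(x))
ss_res = numpy.sum((numpy.array(y) - y_pred) ** 2)         # residual sum of squares
ss_tot = numpy.sum((numpy.array(y) - numpy.mean(y)) ** 2)  # total sum of squares
print(1 - ss_res / ss_tot)
```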
---
## 5.2 - Con el módulo `stats`
In this case there are several commands that can be used to create different distributions from given data. For example, starting from the histogram data of a PDF, the curve of that normal distribution can be created using the `scipy.stats.rv_histogram` command, and the CDF of the data can also be plotted:
```
import scipy.stats
import numpy as np
import matplotlib.pyplot as plt
data = scipy.stats.norm.rvs(size=100000, loc=0, scale=1.5, random_state=123)
hist = np.histogram(data, bins=100)
hist_dist = scipy.stats.rv_histogram(hist)
X = np.linspace(-5.0, 5.0, 100)
plt.title("Datos aleatorios")
plt.hist(data, density=True, bins=100)
plt.show()
X = np.linspace(-5.0, 5.0, 100)
plt.title("PDF de los datos")
plt.plot(X, hist_dist.pdf(X), label='PDF')
plt.show()
X = np.linspace(-5.0, 5.0, 100)
plt.title("CDF de los datos")
plt.plot(X, hist_dist.cdf(X), label='CDF')
plt.show()
```
Another package provided by the `scipy` library is `optimize`, which offers curve-fitting algorithms through the `curve_fit` function; with it, curves of nonlinear systems can be fitted using least squares. Below is an example of its use to find the best-fit curve for a series of experimental data:
```
import numpy
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
def _polynomial(x, *p):
"""Ajuste polinomial de grado arbitrario"""
poly = 0.
for i, n in enumerate(p):
poly += n * x**i
return poly
# Define the experimental data:
x = numpy.linspace(0., numpy.pi)
y = numpy.cos(x) + 0.05 * numpy.random.normal(size=len(x))
# p0 is the initial guess for the fit coefficients; its length
# sets the order of the polynomial you want to fit. Here all the
# initial guesses are set to 1.; you may have a better idea of
# what values to expect based on your data.
p0 = numpy.ones(6,)
coeff, var_matrix = curve_fit(_polynomial, x, y, p0=p0)
yfit = [_polynomial(xx, *tuple(coeff)) for xx in x]
plt.plot(x, y, label='Test data')
plt.plot(x, yfit, label='fitted data')
plt.show()
```
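Since `curve_fit` also returns the covariance matrix of the fitted coefficients, approximate one-standard-deviation uncertainties can be read off its diagonal. This short sketch reuses the `coeff` and `var_matrix` values obtained above:
```
# One-standard-deviation uncertainties of the fitted coefficients
perr = numpy.sqrt(numpy.diag(var_matrix))
for c, e in zip(coeff, perr):
    print("{:+.4f} +/- {:.4f}".format(c, e))
```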
---
## 5.3 - With the `fitter` library
If needed, the `fitter` package provides a simple class which identifies the distribution from which a sample of data was generated. It uses 80 Scipy distributions and allows plotting the results to verify that a given distribution is the one that best fits the data. In the following example we generate a sample of 1000 points with a gamma distribution and then use `fitter`, which checks the candidate Scipy distributions (restricted to three in the code below) and displays a summary of the distributions that best fit our data, based on the sum of squared errors. The summary can be checked visually in the plots that it produces by itself:
```
from scipy import stats
from fitter import Fitter
# Create the data
data = stats.gamma.rvs(2, loc=1.5, scale=2, size=1000)
# Define which distributions we want it to evaluate
f = Fitter(data, distributions=['gamma', 'rayleigh', 'uniform'])
f.fit()
f.summary()
```
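After `f.summary()`, the single best-fitting distribution and its estimated parameters can also be retrieved programmatically, assuming a version of `fitter` that provides the `get_best()` method:
```
# Best distribution according to the sum-of-squared-errors criterion
print(f.get_best(method='sumsquare_error'))
```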
Finally, an example that illustrates the combination of the `scipy.stats` package and `fitter` is the `HistFit` class, which plots both the data and the best-fit curves obtained by adding noise to the measurement and computing the fit 10 times (`Nfit=10`). In this case the data series corresponds to a normal distribution (created with the `scipy.stats` package); 10 best-fit curves were obtained for different noise realizations (with `error_rate=0.01`), and estimates of the mean, standard deviation and amplitude of the fitted distribution were also obtained.
```
from fitter import HistFit
from pylab import hist
import scipy.stats
# Create normally distributed data
data = [scipy.stats.norm.rvs(2, 3.4) for x in range(10000)]
# Plot a histogram of the values
Y, X, _ = hist(data, bins=30)
# Create the best-fit curves
hf = HistFit(X=X, Y=Y)
# Apply an error rate to simulate noise and compute 10 best-fit curves
hf.fit(error_rate=0.01, Nfit=10)
# Retrieve the estimated mean, standard deviation and amplitude of the fits
print(hf.mu, hf.sigma, hf.amplitude)
```
---
### More information
* [Web page](https://www.google.com/)
* Book or similar reference
* Tutorial [w3schools](https://www.w3schools.com/python/)
---
**Universidad de Costa Rica** | Facultad de Ingeniería | Escuela de Ingeniería Eléctrica
© 2021
---
# This task is not quite ready as we don't have an open source route for simulating geometry that requires imprinting and merging. However, this simulation can be carried out using Trelis.
# Heating Mesh Tally on CAD geometry made from Components
This constructs a reactor geometry from 3 Component objects, each made from points.
The Components made include a breeder blanket, a PF coil and a central column shield.
2D and 3D mesh tallies are then simulated to show nuclear heating, flux and tritium production across the model.
This section makes the 3D geometry for the entire reactor from a set of input parameters.
```
import paramak
my_reactor = paramak.BallReactor(
inner_bore_radial_thickness=50,
inboard_tf_leg_radial_thickness=55,
center_column_shield_radial_thickness=50,
divertor_radial_thickness=50,
inner_plasma_gap_radial_thickness=50,
plasma_radial_thickness=100,
outer_plasma_gap_radial_thickness=50,
firstwall_radial_thickness=1,
blanket_radial_thickness=100,
blanket_rear_wall_radial_thickness=10,
elongation=2,
triangularity=0.55,
number_of_tf_coils=16,
rotation_angle=180,
)
# TF and PF coils can be added with additional arguments.
# see the documentation for more details
# https://paramak.readthedocs.io/en/main/paramak.parametric_reactors.html
my_reactor.show()
```
The next section exports the reactor geometry to stp files and produces download links for them.
```
my_reactor.export_stp()
from IPython.display import FileLink
display(FileLink('blanket.stp'))
display(FileLink('pf_coil.stp'))
display(FileLink('center_column.stp'))
display(FileLink('Graveyard.stp'))
```
The next section defines the materials. This can be done using openmc.Materials or, as in this case, materials looked up by name from the neutronics material maker.
```
from neutronics_material_maker import Material
mat1 = Material.from_library(name='Li4SiO4')
mat2 = Material.from_library(name='copper')
mat3 = Material.from_library(name='WC')
```
This next step makes a simple point source.
```
import openmc
# initialises a new source object
source = openmc.Source()
# sets the location of the source to x=100 y=0 z=0
source.space = openmc.stats.Point((100, 0, 0))
# sets the direction to isotropic
source.angle = openmc.stats.Isotropic()
# sets the energy distribution to 100% 14MeV neutrons
source.energy = openmc.stats.Discrete([14e6], [1])
```
This next section combines the geometry with the materials and specifies a few mesh tallies.
```
import paramak_neutronics
neutronics_model = paramak_neutronics.NeutronicsModel(
geometry=my_reactor,
cell_tallies=['heating', 'flux', 'TBR', 'spectra'],
mesh_tally_2d=['heating', 'flux', '(n,Xt)'],
mesh_tally_3d=['heating', 'flux', '(n,Xt)'],
source=source,
simulation_batches=2,
simulation_particles_per_batch=10000,
materials={
'blanket_material': mat1,
'pf_coil_material': mat2,
'center_column_material': mat3,
}
)
# You will need to have Trelis installed to run this command
neutronics_model.simulate()
```
The next section produces download links for:
- vtk files that contain the 3D mesh results (open with Paraview)
- png images that show the results of the 2D mesh tallies
```
from IPython.display import FileLink
display(FileLink('heating_on_3D_mesh.vtk'))
display(FileLink('flux_on_3D_mesh.vtk'))
display(FileLink('tritium_production_on_3D_mesh.vtk'))
display(FileLink('flux_on_2D_mesh_xy.png'))
display(FileLink('flux_on_2D_mesh_xz.png'))
display(FileLink('flux_on_2D_mesh_yz.png'))
display(FileLink('heating_on_2D_mesh_xy.png'))
display(FileLink('heating_on_2D_mesh_xz.png'))
display(FileLink('heating_on_2D_mesh_yz.png'))
display(FileLink('tritium_production_on_2D_mesh_xy.png'))
display(FileLink('tritium_production_on_2D_mesh_xz.png'))
display(FileLink('tritium_production_on_2D_mesh_yz.png'))
```
# CSAILVision semantic segmention models
This is a semantic segmentation notebook using an [ADE20K](http://groups.csail.mit.edu/vision/datasets/ADE20K/) pretrained model from the open source project [CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch).
For other deep-learning Colab notebooks, visit [tugstugi/dl-colab-notebooks](https://github.com/tugstugi/dl-colab-notebooks).
## Clone repo and install dependencies
```
import os
from os.path import exists, join, basename, splitext
git_repo_url = 'https://github.com/CSAILVision/semantic-segmentation-pytorch.git'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
# clone and install dependencies
!git clone -q $git_repo_url
#!cd $project_name && pip install -q -r requirement.txt
import sys
sys.path.append(project_name)
import time
import matplotlib
import matplotlib.pylab as plt
plt.rcParams["axes.grid"] = False
```
## Download a pretrained model
According to [https://github.com/CSAILVision/semantic-segmentation-pytorch#performance](https://github.com/CSAILVision/semantic-segmentation-pytorch#performance), **UperNet101** was the best performing model. We will use it as the pretrained model:
```
ENCODER_NAME = 'resnet101'
DECODER_NAME = 'upernet'
PRETRAINED_ENCODER_MODEL_URL = 'http://sceneparsing.csail.mit.edu/model/pytorch/baseline-%s-%s/encoder_epoch_50.pth' % (ENCODER_NAME, DECODER_NAME)
PRETRAINED_DECODER_MODEL_URL = 'http://sceneparsing.csail.mit.edu/model/pytorch/baseline-%s-%s/decoder_epoch_50.pth' % (ENCODER_NAME, DECODER_NAME)
pretrained_encoder_file = basename(PRETRAINED_ENCODER_MODEL_URL)
if not exists(pretrained_encoder_file):
!wget -q $PRETRAINED_ENCODER_MODEL_URL
pretrained_decoder_file = basename(PRETRAINED_DECODER_MODEL_URL)
if not exists(pretrained_decoder_file):
!wget -q $PRETRAINED_DECODER_MODEL_URL
```
## Prepare model
Load the pretrained model:
```
from types import SimpleNamespace
import torch
from models import ModelBuilder, SegmentationModule
from dataset import TestDataset
from utils import colorEncode
from scipy.io import loadmat
# options
options = SimpleNamespace(fc_dim=2048,
num_class=150,
imgSize = [300, 400, 500, 600],
imgMaxSize=1000,
padding_constant=8,
segm_downsampling_rate=8)
# create model
builder = ModelBuilder()
net_encoder = builder.build_encoder(arch=ENCODER_NAME, weights=pretrained_encoder_file,
fc_dim=options.fc_dim)
net_decoder = builder.build_decoder(arch=DECODER_NAME, weights=pretrained_decoder_file,
fc_dim=options.fc_dim, num_class=options.num_class, use_softmax=True)
segmentation_module = SegmentationModule(net_encoder, net_decoder, torch.nn.NLLLoss(ignore_index=-1))
segmentation_module = segmentation_module.eval()
torch.set_grad_enabled(False)
if torch.cuda.is_available():
segmentation_module = segmentation_module.cuda()
# test on a given image
def test(test_image_name):
dataset_test = TestDataset([{'fpath_img': test_image_name}], options, max_sample=-1)
batch_data = dataset_test[0]
segSize = (batch_data['img_ori'].shape[0], batch_data['img_ori'].shape[1])
img_resized_list = batch_data['img_data']
scores = torch.zeros(1, options.num_class, segSize[0], segSize[1])
if torch.cuda.is_available():
scores = scores.cuda()
for img in img_resized_list:
feed_dict = batch_data.copy()
feed_dict['img_data'] = img
del feed_dict['img_ori']
del feed_dict['info']
if torch.cuda.is_available():
feed_dict = {k: o.cuda() for k, o in feed_dict.items()}
# forward pass
pred_tmp = segmentation_module(feed_dict, segSize=segSize)
scores = scores + pred_tmp / len(options.imgSize)
_, pred = torch.max(scores, dim=1)
return pred.squeeze(0).cpu().numpy()
```
## Evaluate on a test image
First, download a test image from the internet:
```
IMAGE_URL = 'https://raw.githubusercontent.com/tugstugi/dl-colab-notebooks/master/resources/lidl.jpg'
image_file = basename(IMAGE_URL)
!wget -q -O $image_file $IMAGE_URL
plt.figure(figsize=(10, 5))
plt.imshow(matplotlib.image.imread(image_file))
```
Now, test on the downloaded image:
```
t = time.time()
pred = test(image_file)
print("executed in %.3fs" % (time.time()-t))
pred_color = colorEncode(pred, loadmat(os.path.join(project_name, 'data/color150.mat'))['colors'])
plt.imshow(pred_color)
```
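As a quick sanity check (a minimal sketch, not part of the original notebook), we can look at how much of the image each predicted ADE20K class index covers; the human-readable class names can be looked up in the repository's `data` folder:
```
import numpy as np

# `pred` holds one ADE20K class index per pixel
classes, counts = np.unique(pred, return_counts=True)
for c, cnt in zip(classes, counts):
    print(f'class index {c}: {100 * cnt / pred.size:.1f}% of pixels')
```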
# Bayesian Camera Calibration
> Let's apply Bayesian analysis to calibrate a camera
- toc: true
- badges: true
- comments: true
- categories: [Bayesian, Computer Vision]
- image: images/2020-03-28-Bayesian-Camera-Calibration/header.jpg
```
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
plt.rcParams['figure.figsize'] = [10,10]
def x_rot(theta,x,y,z):
theta *= np.pi/180
x_rot = x
y_rot = np.cos(theta)*y - np.sin(theta)*z
z_rot = np.sin(theta)*y + np.cos(theta)*z
return(x_rot,y_rot,z_rot)
def y_rot(theta,x,y,z):
theta *= np.pi/180
x_rot = np.cos(theta)*x + np.sin(theta)*z
y_rot = y
z_rot = -np.sin(theta)*x + np.cos(theta)*z
return(x_rot,y_rot,z_rot)
def z_rot(theta,x,y,z):
theta *= np.pi/180
x_rot = np.cos(theta)*x - np.sin(theta)*y
y_rot = np.sin(theta)*x + np.cos(theta)*y
z_rot = z
return(x_rot,y_rot,z_rot)
points = np.loadtxt("data/2020-02-23-An-Adventure-In-Camera-Calibration/points.csv")
points_2d = points[:,0:2]
points_3d = points[:,2:5]
number_points = points.shape[0]
px = points_2d[:,0]
py = points_2d[:,1]
X_input = points_3d[:,0]
Y_input = points_3d[:,1]
Z_input = points_3d[:,2]
def rotate(theta_Z_est,theta_Y_est,theta_X_est, X_est, Y_est, Z_est):
X_est, Y_est, Z_est = z_rot(theta_Z_est, X_est, Y_est, Z_est)
X_est, Y_est, Z_est = y_rot(theta_Y_est, X_est, Y_est, Z_est)
X_est, Y_est, Z_est = x_rot(theta_X_est, X_est, Y_est, Z_est)
return(X_est, Y_est, Z_est)
# PyMC3 random variables must be created inside a model context
with pm.Model() as model:
    # Define priors
    X_translate_est = pm.Normal('X_translate', mu = -7, sigma = 1)
    Y_translate_est = pm.Normal('Y_translate', mu = -13, sigma = 1)
    Z_translate_est = pm.Normal('Z_translate', mu = 3, sigma = 1)
    focal_length_est = pm.Normal('focal_length',mu = 1000, sigma = 100)
    theta_Z_est = pm.Normal('theta_Z',mu = -45, sigma = 30)
    theta_Y_est = pm.Normal('theta_Y',mu = 0, sigma = 15)
    theta_X_est = pm.Normal('theta_X',mu = 90, sigma = 30)
    c_x_est = pm.Normal('c_x',mu = 1038.42, sigma = 100)
    c_y_est = pm.Normal('c_y',mu = 2666.56, sigma = 100)
    # fixed radial distortion coefficients and observation noise scale
    k1 = -0.351113
    k2 = 0.185768
    k3 = -0.032289
    error_scale = 2
    # translate then rotate the 3D points into the camera frame
    X_est = X_input + X_translate_est
    Y_est = Y_input + Y_translate_est
    Z_est = Z_input + Z_translate_est
    X_est, Y_est, Z_est = rotate(theta_Z_est, theta_Y_est, theta_X_est, X_est, Y_est, Z_est)
    # pinhole projection followed by radial distortion and camera intrinsics
    px_est = X_est / Z_est
    py_est = Y_est / Z_est
    r = np.sqrt(px_est**2 + py_est**2)
    px_est *= (1 + k1 * r + k2 * r**2 + k3 * r**3)
    py_est *= (1 + k1 * r + k2 * r**2 + k3 * r**3)
    px_est *= focal_length_est
    py_est *= focal_length_est
    px_est += c_x_est
    py_est += c_y_est
    # reprojection error between observed and predicted pixel coordinates
    delta = np.sqrt((px - px_est)**2 + (py - py_est)**2)
    # Define likelihood
    likelihood = pm.Normal('error', mu = delta, sigma = error_scale, observed=np.zeros(number_points))
    # Inference!
    trace = pm.sample(2_000, cores=4, tune=5000)
plt.figure(figsize=(7, 7))
pm.traceplot(trace[1000:])
plt.tight_layout();
pm.plot_posterior(trace);
pm.summary(trace)
```
<a href="https://colab.research.google.com/github/cstorm125/abtestoo/blob/master/notebooks/frequentist_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# A/B Testing from Scratch: Frequentist Approach
Frequentist A/B testing is one of the most used and abused statistical methods in the world. This article starts with a simple problem of comparing two online ads campaigns (or treatments, user interfaces or slot machines). It outlines several useful statistical concepts and how we exploit them to solve our problem. At the end, it acknowledges some common pitfalls we face when doing a frequentist A/B test and proposes some possible solutions for more robust A/B testing. Readers are encouraged to tinker with the widgets provided in order to explore the impacts of each parameter.
Thanks to [korakot](https://github.com/korakot) for notebook conversion to Colab.
```
# #depedencies for colab
# %%capture
# !pip install plotnine
import numpy as np
import pandas as pd
from typing import Collection, Tuple
#widgets removed; using colab forms instead
#from ipywidgets import interact, interactive, fixed, interact_manual
#import ipywidgets as widgets
# from IPython.display import display
#plots
import matplotlib.pyplot as plt
from plotnine import *
#stats
import scipy as sp
#suppress annoying warning prints
import warnings
warnings.filterwarnings('ignore')
```
## Start with A Problem
A typical situation marketers (research physicians, UX researchers, or gamblers) find themselves in is that they have two variations of ads (treatments, user interfaces, or slot machines) and want to find out which one has the better performance in the long run.
Practitioners know this as A/B testing and statisticians as **hypothesis testing**. Consider the following problem. We have been running an online ads campaign `A` for a period of time, but now we think a new ads variation might work better, so we run an experiment by dividing our audience in half: one sees the existing campaign `A` whereas the other sees a new campaign `B`. Our performance metric is conversion (sales) per click (ignore the [ads attribution problem](https://support.google.com/analytics/answer/1662518) for now). After the experiment has run for two months, we obtain the daily clicks and conversions of each campaign and determine which campaign has the better performance.
We simulate the aforementioned problem with both campaigns randomly getting about a thousand clicks per day. The secret we will pretend not to know is that the hypothetical campaign `B` has a slightly better conversion rate than `A` in the long run. With this synthetic data, we will explore some useful statistical concepts and exploit them for our frequentist A/B testing.
```
def gen_bernoulli_campaign(p1: float, p2: float,
lmh: Collection = [500, 1000, 1500],
timesteps: int = 60,
scaler: float = 300, seed: int = 1412) -> pd.DataFrame:
'''
:meth: generate fake impression-conversion campaign based on specified parameters
:param float p1: true conversion rate of group 1
:param float p2: true conversion rate of group 2
:param Collection lmh: low-, mid-, and high-points for the triangular distribution of clicks
    :param int timesteps: number of timesteps the campaigns run for
:param float scaler: scaler for Gaussian noise
:param int seed: seed for Gaussian noise
:return: dataframe containing campaign results
'''
np.random.seed(seed)
ns = np.random.triangular(*lmh, size=timesteps * 2).astype(int)
np.random.seed(seed)
es = np.random.randn(timesteps * 2) / scaler
n1 = ns[:timesteps]
c1 = ((p1 + es[:timesteps]) * n1).astype(int)
n2 = ns[timesteps:]
c2 = ((p2 + es[timesteps:]) * n2).astype(int)
result = pd.DataFrame({'timesteps': range(timesteps), 'impression_a': n1, 'conv_a': c1, 'impression_b': n2, 'conv_b': c2})
result = result[['timesteps', 'impression_a', 'impression_b', 'conv_a', 'conv_b']]
result['cumu_impression_a'] = result.impression_a.cumsum()
result['cumu_impression_b'] = result.impression_b.cumsum()
result['cumu_conv_a'] = result.conv_a.cumsum()
result['cumu_conv_b'] = result.conv_b.cumsum()
result['cumu_rate_a'] = result.cumu_conv_a / result.cumu_impression_a
result['cumu_rate_b'] = result.cumu_conv_b / result.cumu_impression_b
return result
conv_days = gen_bernoulli_campaign(p1 = 0.10,
p2 = 0.105,
timesteps = 60,
scaler=300,
seed = 1412) #god-mode
conv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks
conv_days.head()
rates_df = conv_days[['timesteps','cumu_rate_a','cumu_rate_b']].melt(id_vars='timesteps')
g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() +
xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))
g
#sum after 2 months
conv_df = pd.DataFrame({'campaign_id':['A','B'], 'clicks':[conv_days.click_a.sum(),conv_days.click_b.sum()],
'conv_cnt':[conv_days.conv_a.sum(),conv_days.conv_b.sum()]})
conv_df['conv_per'] = conv_df['conv_cnt'] / conv_df['clicks']
conv_df
```
## Random Variables and Probability Distributions
Take a step back and think about the numbers we consider in our daily routines, whether it is conversion rate of an ads campaign, the relative risk of a patient group, or sales and revenues of a shop during a given period of time. From our perspective, they have one thing in common: **we do not know exactly how they come to be**. In fact, we would not need an A/B test if we do. For instance, if we know for certain that conversion rate of an ads campaign will be `0.05 + 0.001 * number of letters in the ads`, we can tell exactly which ads to run: the one with the highest number of letters in it.
With our lack of knowledge, we do the next best thing and assume that our numbers are generated by some mathematical formula, calling them **random variables**. For instance, we might think of the probability of a click converting the same way as a coin-flip event, with the probability of converting as $p$ (say 0.1) and not converting as $1-p$ (thus 0.9). With this, we can simulate the event, aka a click conversion, as many times as we want:
```
def bernoulli(n,p):
flips = np.random.choice([0,1], size=n, p=[1-p,p])
flips_df = pd.DataFrame(flips)
flips_df.columns = ['conv_flag']
g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) +
         theme_minimal() + xlab('Conversion Flag') + ylab('Percentage of Occurrence') +
geom_hline(yintercept=p, colour='red') + ggtitle(f'Distribution after {n} Trials'))
g.draw()
print(f'Expectation: {p}\nVariance: {p*(1-p)}')
print(f'Sample Mean: {np.mean(flips)}\nSample Variance: {np.var(flips)}')
# ใช้ colab form แทน interact
#interact(bernoulli, n=widgets.IntSlider(min=1,max=500,step=1,value=20),
# p=widgets.FloatSlider(min=0.1,max=0.9))
#@title {run: "auto"}
n = 20 #@param {type:"slider", min:1, max:500, step:1}
p = 0.1 #@param {type:"slider", min:0.1, max:0.9, step:0.1}
bernoulli(n, p)
```
**Probability distribution** is represented with the values of a random variable we are interested in on the X-axis, and the chance of them appearing after a number of trials on the Y-axis. The distribution above is called the [Bernoulli Distribution](http://mathworld.wolfram.com/BernoulliDistribution.html), usually used to model hypothetical coin flips and online advertisements. [Other distributions](https://en.wikipedia.org/wiki/List_of_probability_distributions) are used in the same manner for other types of random variables. [Cloudera](https://www.cloudera.com/) provided a [quick review](https://blog.cloudera.com/blog/2015/12/common-probability-distributions-the-data-scientists-crib-sheet/) on a few of them you might find useful.
<img src='https://github.com/cstorm125/abtestoo/blob/master/images/distribution.png?raw=1' alt='Common Probability Distributions; Cloudera'/>
## Law of Large Numbers
There are two sets of indicators of a distribution that are especially relevant to our problem: one derived theoretically and another derived from the data we observed. The **Law of Large Numbers (LLN)** describes the relationship between them.
Theoretically, we can derive these values about any distribution:
* **Expectation** of a random variable $X_i$ is its long-run average derived from repeatedly sampling $X_i$ from the same distribution. Each distribution requires its own way to obtain the expectation. For our example, it is the weighted average of outcomes $X_i$ ($X_i=1$ converted; $X_i=0$ not converted) and their respective probabilities ($p$ converted; $1-p$ not converted):
\begin{align}
E[X_i] &= \mu = \sum_{i=1}^{k} p_i * X_i \\
&= (1-p)*0 + p*1 \\
&= p
\end{align}
where $k$ is number of patterns of outcomes
* **Variance** of a random variable $X_i$ represents the expectation of how much $X_i$ deviates from its expectation, for our example formulated as:
\begin{align}
Var(X_i) &= \sigma^2 = E[(X_i-E(X_i))^2] \\
&= E[X_i^2] - E[X_i]^2 \\
&= \{(1-p)*0^2 + p*1^2\} - p^2 \\
&= p(1-p)
\end{align}
Empirically, we can also calculate their counterparts with any amount of data we have on hand (a short numpy sketch follows the list below):
* **Sample Mean** is simply an average of all $X_i$ we currently have in our sample of size $n$:
\begin{align}
\bar{X} &= \frac{1}{n} \sum_{i=1}^{n} X_i
\end{align}
* **Sample Variance** is the variance based on deviation from sample mean; the $n-1$ is due to [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction#Source_of_bias) (See Appendix):
\begin{align}
s^2 &= \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2
\end{align}
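As a quick illustration (a minimal numpy sketch, not part of the original widgets), both quantities are one-liners; note that `ddof=1` is what applies Bessel's correction:
```
import numpy as np

np.random.seed(1412)
flips = np.random.choice([0, 1], size=1000, p=[0.9, 0.1])  # one sample group of Bernoulli(p=0.1) clicks
sample_mean = flips.mean()            # estimates the expectation p = 0.1
sample_variance = flips.var(ddof=1)   # ddof=1 divides by n-1 (Bessel's correction)
print(sample_mean, sample_variance)
```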
LLN posits that when we have a large enough number of samples $n$, the sample mean will converge to the expectation. This can be shown with a simple simulation:
```
def lln(n_max,p):
mean_flips = []
var_flips = []
ns = []
for n in range(1,n_max):
flips = np.random.choice([0,1], size=n, p=[1-p,p])
ns.append(n)
mean_flips.append(flips.mean())
var_flips.append(flips.var())
flips_df = pd.DataFrame({'n':ns,'mean_flips':mean_flips,'var_flips':var_flips}).melt(id_vars='n')
g = (ggplot(flips_df,aes(x='n',y='value',colour='variable')) + geom_line() +
facet_wrap('~variable', ncol=1, scales='free') + theme_minimal() +
ggtitle(f'Expectation={p:2f}; Variance={p*(1-p):2f}') + xlab('Number of Samples') +
ylab('Value'))
g.draw()
# interact(lln, n_max=widgets.IntSlider(min=2,max=10000,step=1,value=1000),
# p=widgets.FloatSlider(min=0.1,max=0.9))
#@title {run: "auto"}
n = 1000 #@param {type:"slider", min:2, max:10000, step:1}
p = 0.1 #@param {type:"slider", min:0.1, max:0.9, step:0.1}
lln(n, p)
```
Notice that even though LLN does not say that the sample variance will also converge to the variance as $n$ grows large enough, it is also the case. Mathematically, it can be derived as follows:
\begin{align}
s^2 &= \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2 \\
&= \frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)^2 \text{; as }n\rightarrow\infty\text{ }\bar{X}\rightarrow\mu\\
&=\frac{1}{n}(\sum_{i=1}^{n}{X_i}^2 - 2\mu\sum_{i=1}^{n}X_i + n\mu^2) \\
&=\frac{\sum_{i=1}^{n}{X_i}^2}{n} - \frac{2\mu\sum_{i=1}^{n}X_i}{n} + \mu^2 \\
&= \frac{\sum_{i=1}^{n}{X_i}^2}{n} - 2\mu\bar{X} + \mu^2\text{; as }\frac{\sum_{i=1}^{n}X_i}{n} = \bar{X}\\
&= \frac{\sum_{i=1}^{n}{X_i}^2}{n} - 2\mu^2 + \mu^2 = \frac{\sum_{i=1}^{n}{X_i}^2}{n} - \mu^2 \text{; as }n\rightarrow\infty\text{ }\bar{X}\rightarrow\mu\\
&= E[{X_i}^2] - E[X_i]^2 = Var(X_i) = \sigma^2
\end{align}
## Central Limit Theorem
Assuming some probability distribution for our random variable also lets us exploit another extremely powerful statistical concept: **Central Limit Theorem (CLT)**. To see CLT in action, let us simplify our problem a bit and say we are only trying to find out if a hypothetical ads campaign `C` has a conversion rate of more than 10% or not, assuming data collected so far say that `C` has 1,000 clicks and 107 conversions.
```
c_df = pd.DataFrame({'campaign_id':'C','clicks':1000,'conv_cnt':107,'conv_per':0.107},index=[0])
c_df
```
CLT goes as follows:
> If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\mu$ and variance $\sigma^2$ and $\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then when $n$ is large enough, $\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with expectation $\mu$ and variance $\frac{\sigma^2}{n}$
It is a mouthful to say and full of weird symbols, so let us break it down line by line.
**If $X_i$ is an independent and identically distributed (i.i.d.) random variable with expectation $\mu$ and variance $\sigma^2$** <br/>In our case, $X_i$ is whether click $i$ is converted ($X_i=1$) or not converted ($X_i=0$), with $\mu$ as some probability that represents how likely a click will convert on average. *Independent* means that the probability of each click converting depends only on itself and not other clicks. *Identically distributed* means that the true probability of each click converting is more or less the same. We need to rely on domain knowledge to verify these assumptions; for example, in online advertisement, we would expect, at least when working with a reputable ads network such as Criteo, that each click comes from independent users, as opposed to, say, a click farm where we would see a lot of clicks behaving the same way by design. Identical distribution is a little more difficult to assume since different demographics the ads are shown to will likely react differently, so they might not have the same expectation.
```
ind_df = pd.DataFrame({'iid':[False]*100+[True]*100,
'order': list(range(100)) + list(range(100)),
'conv_flag':[1]*50+ [0]*50+ list(np.random.choice([0,1], size=100))})
g = (ggplot(ind_df,aes(x='order',y='conv_flag',color='iid')) + geom_point() +
facet_wrap('~iid') + theme_minimal() + xlab('i-th Click') + ylab('Conversion') +
    ggtitle('Both plots have a conversion rate of 50% but only one is i.i.d.'))
g
```
**and $\bar{X_j}$ is the sample mean of $n$ samples of $X_i$ we drew as part of sample group $j$, then**<br/>
For campaign `C`, we can think of all the clicks we observed as one sample group, which exists in parallel with an infinite number of sample groups that we have not seen yet but can be drawn from the distribution by additional data collection. This way, we calculate the sample mean as total conversions divided by total number of clicks observed during the campaign.
<img src='https://github.com/cstorm125/abtestoo/blob/master/images/sample_group.png?raw=1' alt='Sample Group in Universe'>
**when $n$ is large enough, $\bar{X_j}$ will follow a [normal distribution](http://mathworld.wolfram.com/NormalDistribution.html) with expectation $\mu$ and variance $\frac{\sigma^2}{n}$**<br/>
Here's the kicker: regardless of what distribution each $X_i$ of sample group $j$ is drawn from, as long as you have a large enough number of samples $n$, the sample mean of that sample group $\bar{X_j}$ will converge to a normal distribution. Try increasing $n$ in the plot below and see what happens.
```
def clt(n, dist):
n_total = n * 10000
if dist == 'discrete uniform':
r = np.random.uniform(size=n_total)
elif dist =='bernoulli':
r = np.random.choice([0,1],size=n_total,p=[0.9,0.1])
elif dist =='poisson':
r = np.random.poisson(size=n_total)
else:
raise ValueError('Choose distributions that are available')
#generate base distribution plot
r_df = pd.DataFrame({'r':r})
g1 = (ggplot(r_df, aes(x='r')) + geom_histogram(bins=30) + theme_minimal() +
xlab('Values') + ylab('Number of Samples') +
ggtitle(f'{dist} distribution where sample groups are drawn from'))
g1.draw()
#generate sample mean distribution plot
normal_distribution = np.random.normal(loc=np.mean(r), scale=np.std(r) / np.sqrt(n), size=10000)
sm_df = pd.DataFrame({'sample_means':r.reshape(-1,n).mean(1),
'normal_distribution': normal_distribution}).melt()
g2 = (ggplot(sm_df, aes(x='value',fill='variable')) +
geom_histogram(bins=30,position='nudge',alpha=0.5) +
theme_minimal() + xlab('Sample Means') + ylab('Number of Sample Means') +
ggtitle(f'Distribution of 10,000 sample means with size {n}'))
g2.draw()
dists = ['bernoulli','discrete uniform','poisson']
# interact(clt, n=widgets.IntSlider(min=1,max=100,value=1),
# dist = widgets.Dropdown(
# options=dists,
# value='bernoulli')
# )
#@title {run: "auto"}
n = 30 #@param {type:"slider", min:1, max:100, step:1}
dist = 'bernoulli' #@param ["discrete uniform", "bernoulli", "poisson"] {type:"string"}
clt(n, dist)
```
The expectation and variance of the sample mean distribution can be derived as follows:
\begin{align}
E[\bar{X_j}] &= E[\frac{\sum_{i=1}^{n} X_i}{n}] \\
&= \frac{1}{n} \sum_{i=1}^{n} E[X_i] = \frac{1}{n} \sum_{i=1}^{n} \mu\\
&= \frac{n\mu}{n} = \mu \\
Var(\bar{X_j}) &= Var(\frac{\sum_{i=1}^{n} X_i}{n}) \\
&= \frac{1}{n^2} \sum_{i=1}^{n} Var(X_i) = \frac{1}{n^2} \sum_{i=1}^{n} \sigma^2\\
&= \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n} \\
\end{align}
The fact that we know this specific normal distribution of sample means has expectation $\mu$ and variance $\frac{\sigma^2}{n}$ is especially useful. Remember we want to find out whether campaign `C` **in general, not just in any sample group,** has better conversion rate than 10%. Below is that exact normal distribution based on information from our sample group (1,000 clicks) and the assumption that conversion rate is 10%:
\begin{align}
E[\bar{X_j}] &= \mu = p\\
&= 0.1 \text{; by our assumption}\\
Var(\bar{X_j}) &= \frac{\sigma^2}{n} = \frac{p*(1-p)}{n}\\
&= \frac{0.1 * (1-0.1)}{1000}\\
&= 0.00009\\
\end{align}
```
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1
mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
# mu = 0; variance = 1; sigma = (variance)**(0.5)
x = np.arange(0.05, 0.15, 1e-3)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y, 'crit':[False if i>x_bar else True for i in x]})
g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Sample mean distribution under our assumption'))
g
```
As long as we know the expectation (which we usually do as part of the assumption) and variance (which is more tricky) of the base distribution, we can use this normal distribution to model random variables from *any* distribution. That is, we can model *any* data as long as we can assume their expectation and variance.
## Think Like A ~~Detective~~ Frequentist
In a frequentist perspective, we treat a problem like a criminal prosecution. First, we assume the innocence of the defendant, often called the **null hypothesis** (in our case that the conversion rate is *less than or equal to* 10%). Then, we collect the evidence (all clicks and conversions from campaign `C`). After that, we review how *unlikely* it is that we have this evidence assuming the defendant is innocent (by looking at where our sample mean lands on the sample mean distribution). Most frequentist tests are simply saying:
>If we assume that [conversion rate]() of [ads campaign C]() has the long-run [conversion rate]() of less than or equal to [10%](), our results with sample mean [0.107]() or more extreme ones are so unlikely that they happen only [23%]() of the time, calculated by the area of the distribution with higher value than our sample mean.
Note that you can substitute the highlighted parts with any other numbers and statistics you are comparing; for instance, medical trials instead of ads campaigns and relative risks instead of conversion rates.
```
g = (ggplot(sm_df, aes(x='x', y='y', group='crit')) + geom_area(aes(fill='crit')) +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Sample mean distribution under our assumption') +
guides(fill=guide_legend(title="Conversion Rate < 0.1")))
g
```
Whether 23% is unlikely *beyond reasonable doubt* depends on how much we are willing to tolerate the false positive rate (the percentage of innocent people you are willing to execute). By convention, a lot of practitioners set this to 1-5% depending on their problems; for instance, an experiment in physics may use 1% or less because physical phenomena are highly reproducible, whereas social science may use 5% because human behavior is more variable. This is not to be confused with the **false discovery rate**, which is the probability of our positive predictions turning out to be wrong. The excellent book [Statistics Done Wrong](https://www.statisticsdonewrong.com/p-value.html) has given this topic an extensive coverage that you definitely should check out (Reinhart, 2015).
This degree of acceptable unlikeliness is called **alpha** and the probability we observe is called **p-value**. We must set alpha as part of the assumption before looking at the data (the law must first state how bad an action is for a person to be executed).
## Transforming A Distribution
In the previous example of `C`, we are only interested in whether the conversion rate is *more than* 10%, so we only look at the area to the right of our sample mean (thus called **one-tailed tests**). If we were testing whether the conversion rate is *equal to* 10% or not, we would be interested in both sides (thus called **two-tailed tests**). However, this is not straightforward since we have to know the equivalent position of our sample mean on the left-hand side of the distribution.
One way to remedy this is to convert the sample mean distribution to a distribution that is symmetrical around zero and has a fixed variance so the value on one side is equivalent to minus that value of the other side. **Standard normal distribution** is the normal distribution with expectation $\mu=0$ and variance $\sigma^2=1$. We convert any normal distribution to a standard normal distribution by:
1. Shift its expectation to zero. This can be done by substracting all values of a distribution by its expectation:
\begin{align}
E[\bar{X_j}-\mu] &= E[\bar{X_j}]-\mu \\
&= \mu-\mu \\
&= 0 \\
\end{align}
2. Scale its variance to 1. This can be done by dividing all values by square root of its variance called **standard deviation**:
\begin{align}
Var(\frac{\bar{X_j}}{\sqrt{\sigma^2/n}}) &= \frac{1}{\sigma^2/n}Var(\bar{X_j})\\
&= \frac{\sigma^2/n}{\sigma^2/n}\\
&=1
\end{align}
Try shifting and scaling the distribution below with different $m$ and $v$.
```
def shift_normal(m,v):
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1
mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
x = np.arange(0.05, 0.15, 1e-3)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y})
#normalize process
sm_df['x'] = (sm_df.x - m) / np.sqrt(v)
sm_df['y'] = np.array([sp.stats.norm.pdf(i, loc=mu-m, scale=sigma/np.sqrt(v)) for i in sm_df.x])
print(f'Expectation of sample mean: {mu-m}; Variance of sample mean: {variance/v}')
g = (ggplot(sm_df, aes(x='x', y='y')) + geom_area() +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Shifted Normal Distribution of Sample Mean'))
g.draw()
# interact(shift_normal,
# m=widgets.FloatSlider(min=-1e-1,max=1e-1,value=1e-1,step=1e-2),
# v=widgets.FloatSlider(min=9e-5,max=9e-3,value=9e-5,step=1e-4, readout_format='.5f'))
#@title {run: "auto"}
m = 0.1 #@param {type:"slider", min:-1e-1, max:1e-1, step:1e-2}
v = 9e-5 #@param {type:"slider", min:9e-5, max:9e-3, step:1e-4}
shift_normal(m,v)
```
By shifting and scaling, we can find out where `C`'s sample mean of 0.107 lands on the X-axis of a standard normal distribution:
\begin{align}
\bar{Z_j} &= \frac{\bar{X_j} - \mu}{\sigma / \sqrt{n}} \\
&= \frac{0.107 - 0.1}{0.3 / \sqrt{1000}} \approx 0.7378648\\
\end{align}
With $\bar{Z_j}$ and $-\bar{Z_j}$, we can calculate the probability of falsely rejecting the null hypothesis, or p-value, as the area in red, summing up to approximately 46%. This is most likely too high a false positive rate for anyone to be comfortable with (no one believes a pregnancy test that turns out positive for 46% of the people who are not pregnant), so we fail to reject the null hypothesis that the conversion rate of `C` is equal to 10%.
If someone asks a frequentist for an opinion, they would probably say that they cannot disprove `C` has conversion rate of 10% in the long run. If they were asked to choose an action, they would probably go with the course of action that assumes `C` has a conversion rate of 10%.
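As a quick numeric check of the figures above (a minimal sketch reusing the `np` and `sp` imports from the top of the notebook):
```
# standardize campaign C's sample mean under the null hypothesis p = 0.1
z = (0.107 - 0.1) / np.sqrt(0.1 * (1 - 0.1) / 1000)
one_tailed_p = 1 - sp.stats.norm.cdf(z)             # ~0.23, the 23% quoted earlier
two_tailed_p = 2 * (1 - sp.stats.norm.cdf(abs(z)))  # ~0.46, the area in red below
print(z, one_tailed_p, two_tailed_p)
```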
```
n = c_df.clicks[0]
x_bar = c_df.conv_per[0]
p = 0.1; mu = p; variance = p*(1-p)/n; sigma = (variance)**(0.5)
x_bar_norm = (x_bar - mu) / sigma
def standard_normal(x_bar_norm, legend_title):
x_bar_norm = abs(x_bar_norm)
x = np.arange(-3, 3, 1e-2)
y = np.array([sp.stats.norm.pdf(i, loc=0, scale=1) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y})
#normalize process
sm_df['crit'] = sm_df.x.map(lambda x: False if ((x<-x_bar_norm)|(x>x_bar_norm)) else True)
g = (ggplot(sm_df, aes(x='x', y='y',group='crit')) + geom_area(aes(fill='crit')) +
theme_minimal() + xlab('Sample Means') + ylab('Probability Density Function') +
ggtitle('Standard Normal Distribution of Sample Mean') +
guides(fill=guide_legend(title=legend_title)))
g.draw()
standard_normal(x_bar_norm, "Conversion Rate = 0.1")
```
## Z-test and More
With CLT and the standard normal distribution (sometimes called the **Z-distribution**), we now have all the tools for one of the most popular and useful statistical hypothesis tests, the **Z-test**. In fact, we have already done it with the hypothetical campaign `C`. But let us go back to our original problem of comparing the long-run conversion rates of `A` and `B`. Let our null hypothesis be that they are equal to each other and alpha be 0.05 (we are comfortable with a false positive rate of 5%).
```
conv_df
```
We already know how to compare a random variable to a fixed value, but now we have two random variables from two ads campaigns. We get around this by comparing **the difference of their sample means** $\bar{X_\Delta} = \bar{X_{A}} - \bar{X_{B}}$ to 0. This way, our null hypothesis states that there is no difference between the long-run conversion rates of these campaigns. Through another useful statistical concept, we also know that the variance of $\bar{X_\Delta}$ is the sum of the sample mean variances of $\bar{X_\text{A}}$ and $\bar{X_\text{B}}$ (Normal Sum Theorem; [Lemon, 2002](https://www.goodreads.com/book/show/3415974-an-introduction-to-stochastic-processes-in-physics)).
Thus, we can calculate the **test statistic** or, specifically for Z-test, **Z-value** as follows:
\begin{align}
\bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\frac{\sigma^2_\text{A}}{n_\text{A}} + \frac{\sigma^2_\text{B}}{n_\text{B}}}} \\
&= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}}
\end{align}
Since we are assuming that `A` and `B` have the same conversion rate, their variance is also assumed to be the same:
$$\sigma^2_{A} = \sigma^2_{B} = \sigma^2_\text{pooled} = p * (1-p)$$
where $p$ is the total conversions of both campaigns divided by their clicks (**pooled probability**).
In light of the Z-value calculated from our data, we find that the p-value for rejecting the null hypothesis that the conversion rates of `A` and `B` are equal to each other is less than 3%, lower than our acceptable false positive rate of 5%, so we reject the null hypothesis that they perform equally well. The result of the test is **statistically significant**; that is, it is unlikely enough for us given the null hypothesis.
```
def proportion_test(c1: int, c2: int,
n1: int, n2: int,
mode: str = 'one_sided') -> Tuple[float, float]:
'''
:meth: Z-test for difference in proportion
:param int c1: conversions for group 1
:param int c2: conversions for group 2
:param int n1: impressions for group 1
:param int n2: impressions for group 2
:param str mode: mode of test; `one_sided` or `two_sided`
:return: Z-score, p-value
'''
p = (c1 + c2) / (n1 + n2)
p1 = c1 / n1
p2 = c2 / n2
z = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
if mode == 'two_sided':
p = 2 * (1 - sp.stats.norm.cdf(abs(z)))
elif mode == 'one_sided':
p = 1 - sp.stats.norm.cdf(abs(z))
else:
raise ValueError('Available modes are `one_sided` and `two_sided`')
return z, p
z_value, p_value = proportion_test(c1=conv_df.conv_cnt[0], c2=conv_df.conv_cnt[1],
n1=conv_df.clicks[0], n2=conv_df.clicks[1], mode='two_sided')
print(f'Z-value: {z_value}; p-value: {p_value}')
standard_normal(z_value, "No Difference in Conversion Rates")
```
This rationale extends beyond comparing proportions such as conversion rates. For instance, we can also compare the revenues of two different stores, assuming they are i.i.d. However, in this case we do not know the variance of the base distribution $\sigma^2$, as it cannot be derived from our assumption (the variance of a Bernoulli distribution is $p*(1-p)$, but store revenues are not modelled after a coin flip). The test statistic is then created with the sample variance $s^2$ based on our sample group and follows a slightly modified version of the standard normal distribution (see [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test)). Your test statistics and sample mean distributions may change, but the bottom line of a frequentist A/B test is exploiting CLT and frequentist reasoning.
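For completeness, a minimal sketch of that unknown-variance case using scipy's two-sample t-test; the store revenues here are made-up numbers purely for illustration:
```
# hypothetical daily revenues of two stores; variances are unknown and estimated from the samples
np.random.seed(1412)
revenue_a = np.random.normal(loc=1000, scale=150, size=60)
revenue_b = np.random.normal(loc=1030, scale=180, size=60)
t_stat, p_value = sp.stats.ttest_ind(revenue_a, revenue_b, equal_var=False)  # Welch's t-test
print(t_stat, p_value)
```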
## Confidence Intervals
Notice that we can calculate the p-value from the Z-value and vice versa. This gives us another handy way to look at the problem; that is, we can calculate the interval into which the sample mean of `A` or `B` will fall with an arbitrary probability, say 95%. We call it a **confidence interval**. You can see that despite us rejecting the null hypothesis that their difference is zero, the confidence intervals of both campaigns can still overlap.
Try changing the number of conversions and clicks of each group as well as the alpha to see what changes in terms of the p-value of the Z-test and the confidence intervals. You will see that the sample mean distribution gets "wider" as we have fewer samples in a group. Intuitively, this makes sense because the fewer clicks you have collected, the less information you have about the true performance of an ads campaign and the less confident you are about where it should be. So when designing an A/B test, you should plan to have a similar number of samples in both sample groups in order to have similarly distributed sample means.
```
def proportion_plot(c1: int, c2: int,
n1: int, n2: int, alpha: float = 0.05,
mode: str = 'one_sided') -> None:
'''
:meth: plot Z-test for difference in proportion and confidence intervals for each campaign
:param int c1: conversions for group 1
:param int c2: conversions for group 2
:param int n1: impressions for group 1
:param int n2: impressions for group 2
:param float alpha: alpha
:param str mode: mode of test; `one_sided` or `two_sided`
:return: None
'''
p = (c1 + c2) / (n1 + n2)
p1 = c1 / n1
p2 = c2 / n2
se1 = np.sqrt(p1 * (1 - p1) / n1)
se2 = np.sqrt(p2 * (1 - p2) / n2)
z = sp.stats.norm.ppf(1 - alpha / 2)
x1 = np.arange(p1 - 3 * se1, p1 + 3 * se1, 1e-4)
x2 = np.arange(p2 - 3 * se2, p2 + 3 * se2, 1e-4)
y1 = np.array([sp.stats.norm.pdf(i, loc=p1, scale=np.sqrt(p1 * (1 - p1) / n1)) for i in x1])
y2 = np.array([sp.stats.norm.pdf(i, loc=p2, scale=np.sqrt(p2 * (1 - p2) / n2)) for i in x2])
sm_df = pd.DataFrame({'campaign_id': ['Campaign A'] * len(x1) + ['Campaign B'] * len(x2),
'x': np.concatenate([x1, x2]), 'y': np.concatenate([y1, y2])})
z_value, p_value = proportion_test(c1, c2, n1, n2, mode)
print(f'Z-value: {z_value}; p-value: {p_value}')
g = (ggplot(sm_df, aes(x='x', y='y', fill='campaign_id')) +
geom_area(alpha=0.5)
+ theme_minimal() + xlab('Sample Mean Distribution of Each Campaign')
+ ylab('Probability Density Function')
+ geom_vline(xintercept=[p1 + se1 * z, p1 - se1 * z], colour='red')
+ geom_vline(xintercept=[p2+se2*z, p2-se2*z], colour='blue')
+ ggtitle(f'Confident Intervals at alpha={alpha}'))
g.draw()
# interact(ci_plot,
# p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0],
# step=1e-3,readout_format='.5f'),
# p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1],
# step=1e-3,readout_format='.5f'),
# n1 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[0]),
# n2 = widgets.IntSlider(min=10,max=70000,value=conv_df.clicks[1]),
# alpha = widgets.FloatSlider(min=0,max=1,value=0.05))
conv_df.clicks[0], conv_df.clicks[1]
#@title {run: "auto"}
c1 = 5950 #@param {type:"slider", min:0, max:70000}
c2 = 6189 #@param {type:"slider", min:0, max:70000}
n1 = 59504 #@param {type:"slider", min:10, max:70000, step:10}
n2 = 58944 #@param {type:"slider", min:10, max:70000, step:10}
alpha = 0.05 #@param {type:"slider", min:0, max:1, step:1e-3}
proportion_plot(c1,c2,n1,n2,alpha)
```
## Any Hypothesis Test Is Statistically Significant with Enough Samples
Because we generated the data, we know that the conversion rate of campaign `A` (10%) is about 95% that of campaign `B` (10.5%). If we go with our gut feeling, most of us would say that they are practically the same; yet, our Z-test told us that they are different. The reason for this becomes apparent graphically when we decrease the number of clicks for both campaigns in the plot above. The Z-test stops being significant when both campaigns have about 50,000 clicks each, even though their conversion rates stay exactly the same. The culprit is our Z-value calculated as:
\begin{align}
\bar{Z_\Delta} &= \frac{\bar{X_\Delta}-\mu}{\sqrt{\sigma^2_\text{pooled} * (\frac{1}{n_\text{A}} + \frac{1}{n_\text{B}})}}
\end{align}
Notice the numbers of clicks $n_\text{A}$ and $n_\text{B}$ hiding in the denominator. Our test statistic $\bar{Z_\Delta}$ will keep growing as long as we collect more clicks. If both campaigns `A` and `B` have one million clicks each, a difference of as small as 0.1% will be detected as statistically significant. Try adjusting the probabilities $p1$ and $p2$ in the plot below and see if the area of statistical significance expands or contracts as the difference between the two numbers changes.
```
def significance_plot(p1,p2):
n1s = pd.DataFrame({'n1':[10**i for i in range(1,7)],'k':0})
n2s = pd.DataFrame({'n2':[10**i for i in range(1,7)],'k':0})
ns = pd.merge(n1s,n2s,how='outer').drop('k',1)
ns['p_value'] = ns.apply(lambda row: proportion_test(p1*row['n1'], p2*row['n2'],row['n1'],row['n2'])[1], 1)
g = (ggplot(ns,aes(x='factor(n1)',y='factor(n2)',fill='p_value')) + geom_tile(aes(width=.95, height=.95)) +
geom_text(aes(label='round(p_value,3)'), size=10)+ theme_minimal() +
xlab('Number of Samples in A') + ylab('Number of Samples in B') +
guides(fill=guide_legend(title="p-value")))
g.draw()
# interact(significance_plot,
# p1 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[0] / conv_df.clicks[0],
# step=1e-3,readout_format='.5f'),
# p2 = widgets.FloatSlider(min=0,max=1,value=conv_df.conv_cnt[1] / conv_df.clicks[1],
# step=1e-3,readout_format='.5f'))
#@title {run: "auto"}
p1 = 0.09898494218876042 #@param {type:"slider", min:0, max:1, step:1e-3}
p2 = 0.10367467426710097 #@param {type:"slider", min:0, max:1, step:1e-3}
significance_plot(p1,p2)
```
More practically, look at the cumulative conversion rates and z-values of `A` and `B` on a daily basis. Every day that we check the results based on cumulative clicks and conversions, we will come up with a different test statistic and p-value. The difference in conversion rates seems to stabilize after 20 days; however, notice that if you stop the test at day 25 or so, you would say it is NOT statistically significant, whereas if you wait a little longer, you will get the opposite result. The only thing that changes as time goes on is that we have more samples.
```
g = (ggplot(rates_df, aes(x='timesteps', y='value', color='variable')) + geom_line() + theme_minimal() +
xlab('Days of Experiment Run') + ylab('Cumulative Conversions / Cumulative Clicks'))
g
#test
conv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[0],1)
conv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[1],1)
#plot
g = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() +
xlab('Days of Campaign') + ylab('Z-value Calculated By Cumulative Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) +
annotate("text", label = "Above this line A is better than B", x = 20, y = 2, color = 'red') +
annotate("text", label = "Below this line B is better than A", x = 20, y = -2, color = 'green'))
g
```
## Minimum Detectable Effect, Power and Required Sample Size
We argue that this too-big-to-fail phenomenon among sample groups is especially dangerous in the context of today's "big data" society. Gone are the days where statistical tests are done among two control groups of 100 people each using paper survey forms. Now companies are performing A/B tests between ad variations that could have tens of thousands or more samples (impressions or clicks), and potentially all of them will be "statistically significant".
One way to remedy this is to do what frequentists do best: make more assumptions, more specifically **two** more.
First, if we want to find out whether `B` has a *better* conversion rate than `A`, we do not only make assumptions about the mean of the null hypothesis but also about **minimally by how much** it should differ, aka the mean of the alternative hypothesis. We can set the **minimum detectable effect** as the smallest possible difference that would make it worth investing the time and money in one campaign over the other; let us say that from experience we think it is 1%. We then ask:
> What is the minimum number of samples in a sample group (clicks in a campaign) we should have in order to reject the null hypothesis at a **significance level ($\alpha$)** and **power ($1-\beta$)** when the difference in sample means is [1%]()?
The **significance level ($\alpha$)** takes care of the false positive rate promise, for example to be lower than 5% (95% specificity), whereas **power ($1-\beta$)** indicates the desired recall, for example to be 80% (20% false negative rate).
```
def power_plot(mean_h0: float,
mean_h1: float,
critical: float) -> None:
'''
:meth: plot Z-test for difference in proportion with power and alpha highlighted
    :param float mean_h0: mean for null hypothesis
    :param float mean_h1: mean for alternative hypothesis
:param float critical: critical value selected
:return: None
'''
x = np.arange(-4,6,0.1)
dat = pd.DataFrame({'x':x,
'y1':sp.stats.norm.pdf(x,mean_h0,1),
'y2':sp.stats.norm.pdf(x,mean_h1,1)})
dat['x1'] = dat.x.map(lambda x: np.where(x>critical,x,None))
dat['x2'] = dat.x.map(lambda x: np.where(x>critical,x,None))
g = (
ggplot(dat, aes(x = 'x')) +
geom_line(aes(y = 'y1'), color='red', size = 1.2) +
geom_line(aes(y = 'y2'), color='blue',size = 1.2) +
geom_vline(xintercept=mean_h0,linetype='dashed',color='red')+
geom_vline(xintercept=mean_h1,linetype='dashed',color='blue')+
geom_area(aes(y = 'y1', x = 'x1'), fill='red') +
geom_area(aes(y = 'y2', x = 'x2'), fill = 'blue', alpha = 0.3) +
ylab('Probability Density Function') + xlab('Z value')+
        ggtitle(f'significance level = {sp.stats.norm.sf(critical,mean_h0,1):.2f}; power = {sp.stats.norm.sf(critical,mean_h1,1):.2f}')+
theme_minimal()
)
g.draw()
#@title {run: "auto"}
mean_h0 = 0 #@param {type:"slider", min:0, max:6, step:1e-3}
mean_h1 = 3.18 #@param {type:"slider", min:0, max:6, step:1e-3}
critical = 2 #@param {type:"slider", min:0, max:3, step:1e-1}
power_plot(mean_h0, mean_h1, critical)
```
Given a minimum detectable effect $\text{MDE}$, significance level $\alpha$ and power $1-\beta$, we can calculate the critical Z value $Z_{critical}$ that satisfies these conditions, where the required number of samples in each group is $n$ and $mn$ (where m is multiplier):
\begin{align}
Z_{critical} &= \mu_{H0} + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{critical} &= 0 + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{critical} &= \mu_{H1}-\mu_{H0} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{critical} &= \text{MDE} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
0 + Z_{\alpha} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})} &= \text{MDE} - Z_{\beta} * \sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}\\
Z_{\alpha} + Z_{\beta} &= \frac{\text{MDE}}{\sqrt{\sigma^2 * (\frac{1}{n} + \frac{1}{mn})}} \\
\frac{(m+1)\sigma^2}{mn} &= (\frac{\text{MDE}}{Z_{\alpha} + Z_{\beta}})^2 \\
n &= \frac{m+1}{m}(\frac{(Z_{\alpha} + Z_{\beta}) \sigma}{\text{MDE}})^2 \\
n &= 2(\frac{(Z_{\alpha} + Z_{\beta}) \sigma}{\text{MDE}})^2; m=1
\end{align}
Second, we make yet another crucial assumption about **the variance $\sigma^2$ we expect**. Remember we used to estimate the variance by using the pooled probability of our sample groups, but here we have not even started the experiments. In a conventional A/B testing scenario, we are testing whether an experimental variation is better than the existing one, so one choice is **using sample variance of a campaign you are currently running**; for instance, if `A` is our current ads and we want to know if we should change to `B`, then we will use conversion rate of `A` from past time period to calculate the variance, say 10%.
Let us go back in time before we even started our 2-month-long test between campaigns `A` and `B`. Now we assume not only an acceptable false positive rate alpha of 0.05 but also a minimum detectable effect of 1% and an expected variance of $\sigma^2 = 0.1 * (1-0.1) = 0.09$, and then we calculate the minimum number of samples we should collect for each campaign. You can see that, had we done that, we would not have been able to reject the null hypothesis and would have stuck with campaign `A` going forward.
The upside is that now we only have to run the test for about 5 days instead of 60 days assuming every day is the same for the campaigns (no peak traffic on weekends, for instance). The downside is that our null hypothesis gets much more specific with not only one but three assumptions:
* Long-run conversion rate of `B` is no better than `A`'s
* The difference that will matter to us is at least 1%
* The expected variance of the conversion rates is $\sigma^2 = 0.1 * (1-0.1) = 0.09$
This fits many A/B testing scenarios, since we might not want to change to a new variation that is better, but not better by enough that we are willing to invest our time and money in changing our current setup. Try adjusting $\text{MDE}$ and $\sigma$ in the plot below and see how the number of required samples changes.
```
def proportion_samples(mde: float, p: float, m: float = 1,
alpha: float = 0.05,
beta: float = 0.8,
mode: str = 'one_sided') -> float:
'''
:meth: get number of required sample based on minimum detectable difference (in absolute terms)
:param float mde: minimum detectable difference
:param float p: pooled probability of both groups
:param float m: multiplier of number of samples; groups are n and nm
:param float alpha: alpha
:param float beta: beta
:param str mode: mode of test; `one_sided` or `two_sided`
:return: estimated number of samples to get significance
'''
variance = p * (1 - p)
z_b = sp.stats.norm.ppf(beta)
if mode == 'two_sided':
z_a = sp.stats.norm.ppf(1 - alpha / 2)
elif mode == 'one_sided':
z_a = sp.stats.norm.ppf(1 - alpha)
else:
raise ValueError('Available modes are `one_sided` and `two_sided`')
return ((m + 1) / m) * variance * ((z_a+z_b) / mde)**2
def plot_proportion_samples(mde, p, m=1, alpha=0.05,beta=0.8, mode='one_sided'):
minimum_samples = proportion_samples(mde, p,m, alpha,beta, mode)
g = (ggplot(conv_days, aes(x='cumu_click_a',y='cumu_z_value',color='cumu_p_value')) + geom_line() +
theme_minimal() +
xlab('Number of Samples per Campaign') + ylab('Z-value Calculated By Cumulative Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.95),sp.stats.norm.ppf(0.05)], color=['red','green']) +
annotate("text", label = "Above this line A is better than B", x = 30000, y = 2, color = 'red') +
annotate("text", label = "Below this line B is better than A", x = 30000, y = -2, color = 'green') +
annotate("text", label = f'Minimum required samples at MDE {mde}={int(minimum_samples)}', x = 30000, y = 0,) +
geom_vline(xintercept=minimum_samples))
g.draw()
#@title {run: "auto"}
mde = 0.01 #@param {type:"slider", min:0.001, max:0.01, step:1e-3}
p = 0.1 #@param {type:"slider", min:0, max:1, step:1e-3}
m = 1 #@param {type:"slider", min:0, max:1, step:1e-1}
alpha = 0.05 #@param {type:"slider", min:0.01, max:0.1, step:1e-3}
mode = 'one_sided' #@param ['one_sided','two_sided'] {type:"string"}
plot_proportion_samples(mde, p, m, alpha, mode=mode)
```
## You Will Get A Statistically Significant Result If You Try Enough Times
The concept the p-value represents is the false positive rate of our test, that is, how unlikely it is to observe our sample groups given that they do not have different conversion rates in the long run. Let us re-simulate our campaigns `A` and `B` to have an equal expectation of 10%. If we apply our current method, we can be comfortably sure we will not get statistical significance (unless we have an extremely large number of samples).
```
conv_days = gen_bernoulli_campaign(p1 = 0.10,
p2 = 0.10,
timesteps = 60,
scaler=100,
seed = 1412) #god-mode
conv_days.columns = [i.replace('impression','click') for i in conv_days.columns] #function uses impressions but we use clicks
conv_days['cumu_z_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[0],1)
conv_days['cumu_p_value'] = conv_days.apply(lambda row: proportion_test(row['cumu_conv_a'],
row['cumu_conv_b'],row['cumu_click_a'],
row['cumu_click_b'], mode='two_sided')[1],1)
conv_days['z_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'],
row['conv_b'],row['click_a'],
row['click_b'], mode='two_sided')[0],1)
conv_days['p_value'] = conv_days.apply(lambda row: proportion_test(row['conv_a'],
row['conv_b'],row['click_a'],
row['click_b'], mode='two_sided')[1],1)
g = (ggplot(conv_days, aes(x='timesteps',y='cumu_z_value',color='cumu_p_value')) + geom_line() + theme_minimal() +
xlab('Days in Campaign') + ylab('Z-value Calculated By Cumulative Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']))
g
```
Another approach is, instead of doing the test only once, to **do it every day using clicks and conversions of that day alone**. We will have 60 tests, 3 of which give statistically significant results that `A` and `B` have different conversion rates in the long run. The fact that about 5% of the tests turn out positive, despite our knowing that none of them should, is not a coincidence. The Z-value threshold is calculated based on an alpha of 5%, which means that even if there is no difference, 5% of the time we perform this test with this specific set of assumptions we will still get a positive result ([obligatory relevant xkcd strip](https://xkcd.com/882/); Munroe, n.d.).
```
g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() +
xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(0.975),sp.stats.norm.ppf(0.025)], color=['red','red']) +
ggtitle(f'We Have {(conv_days.p_value<0.05).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.05).sum()/conv_days.shape[0]}%)'))
g
```
Not many people will test online ad campaigns based on daily data, but many researchers perform repeated experiments and, by necessity, repeated A/B tests as shown above. If you have a reason to believe that sample groups from different experiments have the same distribution, you might consider grouping them together and performing one large test as usual. Otherwise, you can adjust the assumption of how many false positives you can tolerate. One such approach, among [others](https://en.wikipedia.org/wiki/Multiple_comparisons_problem), is the [Bonferroni correction](http://mathworld.wolfram.com/BonferroniCorrection.html). It scales your alpha down by the number of tests you perform to make sure that your overall false positive rate stays at most your original alpha. In our case, if we scale our alpha as $\alpha_{\text{new}}=\frac{0.05}{60} \approx 0.0008$, we will have the following statistically non-significant results.
```
g = (ggplot(conv_days, aes(x='timesteps',y='z_value',color='p_value')) + geom_line() + theme_minimal() +
xlab('Each Day in Campaign') + ylab('Z-value Calculated By Daily Data') +
geom_hline(yintercept=[sp.stats.norm.ppf(1-0.0008/2),sp.stats.norm.ppf(0.0008/2)], color=['red','red']) +
     ggtitle(f'We Have {(conv_days.p_value<0.0008).sum()} False Positives Out of {conv_days.shape[0]} Days ({100*(conv_days.p_value<0.0008).sum()/conv_days.shape[0]}%)'))
g
```
## Best Practices
To the best of our knowledge, the most reasonable and practical way to perform a frequentist A/B test is to know your assumptions, including but not limited to:
* What distribution should your data be assumed to be drawn from? In many cases, we use Bernoulli distribution for proportions, Poisson distribution for counts and normal distribution for real numbers.
* Are you comparing your sample group to a fixed value or another sample group?
* Do you want to know if the expectation of the sample group is equal to, more than or less than its counterpart?
* What is the minimum detectable effect and how many samples should you collect? What is a reasonable variance to assume in order to calculate the required sample size?
* What is the highest false positive rate $\alpha$ that you can accept?
With these assumptions cleared, you can most likely construct a test statistic; then, with frequentist reasoning, you can determine whether the sample group you collected is unlikely enough under the null hypothesis that you would reject it.
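As a concrete illustration of this checklist, here is a minimal, self-contained sketch using only `scipy` and the pooled two-proportion z-test. The campaign numbers in step 2 are made up for illustration and are not the data from the simulations above.
```
import numpy as np
from scipy import stats

# Assumptions (illustrative values)
alpha, power, mde, p_baseline = 0.05, 0.8, 0.01, 0.10

# 1. Required sample size per group (one-sided test, equal group sizes)
variance = p_baseline * (1 - p_baseline)
z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
n_required = int(np.ceil(2 * variance * ((z_a + z_b) / mde) ** 2))
print(f'collect at least {n_required} samples per campaign')

# 2. After collecting the data: pooled two-proportion z-test (H1: p_b > p_a)
conv_a, n_a = 1050, 10500   # made-up observed conversions / clicks
conv_b, n_b = 1150, 10500
p_a, p_b = conv_a / n_a, conv_b / n_b
p_pooled = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 1 - stats.norm.cdf(z)
print(f'z = {z:.3f}, one-sided p-value = {p_value:.4f}')
print('reject H0' if p_value < alpha else 'fail to reject H0')
```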
## References
* Lemons, D. S. (2002). An introduction to stochastic processes in physics. Baltimore: Johns Hopkins University Press.
Normal Sum Theorem; p34
* Munroe, R. (n.d.). How To: Absurd Scientific Answers to Common Real-World Problems. Retrieved from https://xkcd.com/882/
* Reinhart, A. (2015, March 1). The p value and the base rate fallacy. Retrieved from https://www.statisticsdonewrong.com/p-value.html
* [whuber](https://stats.stackexchange.com/users/919/whuber) (2017). Can a probability distribution value exceeding 1 be OK?. Retrieved from https://stats.stackexchange.com/q/4223
## Appendix
### Bessel's Correction for Sample Variance
Random variables can be thought of as estimators of real quantities; for example, the sample variance is an estimate of the variance of the "true" distribution. An estimator is said to be **biased** when its expectation is not equal to the true value (not to be confused with the LLN, where the estimator itself approaches the true value as the number of samples grows).
We can repeat the experiment we did for the LLN with sample mean and true mean, but this time we compare how the biased version ($\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$) and the unbiased version ($\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$) of the sample variance approach the true variance as the number of sample groups grows. Clearly, we can see that the biased sample variance usually underestimates the true variance.
```
def var(x, dof=0):
n = x.shape[0]
mu = np.sum(x)/n
return np.sum((x - mu)**2) / (n-dof)
n_total = 10000 #total number of stuff
n_sample = 100 #number of samples per sample group
sg_range = range(1,100) #number of sample groups to take average of sample variances from
r = np.random.normal(loc=0,scale=1,size=n_total) #generate random variables based on Z distribution
pop_var = var(r) #true variance of the population
mean_s_bs = []
mean_s_us = []
for n_sg in sg_range:
s_bs = []
s_us =[]
for i in range(n_sg):
sg = np.random.choice(r,size=n_sample,replace=False)
s_bs.append(var(sg)) #biased sample variance
s_us.append(var(sg,1)) #unbiased sample variance
mean_s_bs.append(np.mean(s_bs))
mean_s_us.append(np.mean(s_us))
s_df = pd.DataFrame({'nb_var':sg_range,'biased_var':mean_s_bs,
'unbiased_var':mean_s_us}).melt(id_vars='nb_var')
g = (ggplot(s_df,aes(x='nb_var',y='value',color='variable',group='variable')) + geom_line() +
geom_hline(yintercept=pop_var) + theme_minimal() +
xlab('Number of Sample Groups') + ylab('Sample Mean of Sample Variance in Each Group'))
g
```
We derive exactly how much the bias is as follows:
$$B[s_{biased}^2] = E[s_{biased}^2] - \sigma^2 = E[s_{biased}^2 - \sigma^2]$$
where $B[s^2]$ is the bias of estimator (biased sample variance) $s_{biased}^2$ of variance $\sigma^2$. Then we can calculate the bias as:
\begin{align}
E[s_{biased}^2 - \sigma^2] &= E\left[\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2 - \frac{1}{n} \sum_{i=1}^n(X_i - \mu)^2\right] \\
&= \frac{1}{n}E\left[\left(\sum_{i=1}^n X_i^2 -2\bar{X}\sum_{i=1}^n X_i + n\bar{X}^2\right) - \left(\sum_{i=1}^n X_i^2 -2\mu\sum_{i=1}^n X_i + n\mu^2\right)\right] \\
&= E[\bar{X}^2 - \mu^2 - 2\bar{X}^2 + 2\mu\bar{X}] \\
&= -E[\bar{X}^2 -2\mu\bar{X} +\mu^2] \\
&= -E[(\bar{X} - \mu)^2] \\
&= -\frac{\sigma^2}{n} \text{; variance of the sample mean}\\
E[s_{biased}^2] &= \sigma^2 - \frac{\sigma^2}{n} \\
&= \left(1-\frac{1}{n}\right)\sigma^2
\end{align}
Therefore if we divide biased estimator $s_{biased}^2$ by $1-\frac{1}{n}$, we will get an unbiased estimator of variance $s_{unbiased}^2$,
\begin{align}
s_{unbiased}^2 &= \frac{s_{biased}^2}{1-\frac{1}{n}} \\
&= \frac{\frac{1}{n} \sum_{i=1}^n(X_i - \bar{X})^2}{1-\frac{1}{n}}\\
&= \frac{1}{n-1} \sum_{i=1}^n(X_i - \bar{X})^2
\end{align}
This is why the sample variance we usually use, $s^2$, has $n-1$ instead of $n$ in the denominator. Also, this is not to be confused with the variance of the sample mean, which is $\frac{\sigma^2}{n}$ when the variance of the base distribution is known or assumed and $\frac{s^2}{n}$ when it is not.
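A quick numerical sanity check of this result (a small sketch assuming only NumPy; the sample size 5 and 200,000 trials are arbitrary): averaging the two estimators over many small samples from a known distribution shows the `1/n` version undershooting the true variance by roughly the factor $(1-\frac{1}{n})$, while the `1/(n-1)` version is approximately unbiased.
```
import numpy as np

rng = np.random.default_rng(0)
true_var, n, trials = 4.0, 5, 200_000   # N(0, 2^2), small samples of size 5

samples = rng.normal(0, 2, size=(trials, n))
biased = samples.var(axis=1, ddof=0).mean()      # divide by n
unbiased = samples.var(axis=1, ddof=1).mean()    # divide by n-1 (Bessel's correction)

print(f'true variance         : {true_var:.3f}')
print(f'mean biased estimate  : {biased:.3f}  (~ (1 - 1/n) * sigma^2 = {true_var * (1 - 1/n):.3f})')
print(f'mean unbiased estimate: {unbiased:.3f}')
```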
### Mass vs Density
You might wonder why the sample mean distribution has a Y-axis that exceeds 1, even though it seemingly should represent the probability of each value of the sample mean. The short answer is that it does not represent a probability but rather a **probability density function**. The long answer is that there are two ways of representing probability distributions, depending on whether they describe **discrete** or **continuous** data. See also this excellent [answer on Stack Exchange](https://stats.stackexchange.com/questions/4220/can-a-probability-distribution-value-exceeding-1-be-ok) (whuber, 2017).
**Discrete probability distributions** contain values that are finite (for instance, $1, 2, 3, ...$) or countably infinite (for instance, $\frac{1}{2^i}$ where $i=1, 2, 3, ...$). They include, but are not limited to, the distributions we have used to demonstrate the CLT, namely the uniform, Bernoulli and Poisson distributions. In all these distributions, the Y-axis, now called the **probability mass function**, represents the exact probability of each value on the X-axis, such as in the Bernoulli distribution we have shown before:
```
flips = np.random.choice([0,1], size=n, p=[1-p,p])
flips_df = pd.DataFrame(flips)
flips_df.columns = ['conv_flag']
g = (ggplot(flips_df,aes(x='factor(conv_flag)')) + geom_bar(aes(y = '(..count..)/sum(..count..)')) +
theme_minimal() + xlab('Value') + ylab('Probability Mass Function') +
ggtitle(f'Bernoulli Distribution'))
g
```
**Continuous probability distributions** contain uncountably many values (for instance, all real numbers between 0 and 1). Since there are infinitely many values, the probability of each individual value is essentially zero (what are the chances of winning a lottery ticket with an infinite number of digits?). Therefore, instead of the exact probability of each value (the probability mass function), the Y-axis represents the **probability density function**. This can be thought of as the total probability within an immeasurably small interval around the value, divided by the width of that interval. Take the example of a normal distribution with expectation $\mu=0$ and variance $\sigma^2=0.01$. The probability density function at the value 0 is:
\begin{align}
f(x) &= \frac{1}{\sqrt{2\pi\sigma^2}} e^{\frac{-(x-\mu)^2}{2\sigma^2}}\\
&= \frac{1}{\sqrt{2\pi(0.01)}} e^{\frac{-(x-0)^2}{2(0.01)}} \text{; }\mu=0;\sigma^2=0.01 \\
&\approx 3.989 \text{; when } x=0
\end{align}
This of course does not mean that there is a 398.9% chance that we will draw the value 0; it is the density of the probability around that value. The actual probability of a small interval around 0 is approximately 3.989 times the (immeasurably small) width of that interval, which will always be between 0 and 1.
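To make this concrete, here is a small check with `scipy.stats` (the interval half-width of 0.001 is arbitrary, chosen only for illustration): the density at 0 is about 3.989, yet the probability of the small interval around 0 stays well below 1.
```
from scipy import stats

dist = stats.norm(loc=0, scale=0.1)          # variance = 0.01
print(dist.pdf(0))                           # ~3.989: a density, not a probability
half_width = 0.001
prob = dist.cdf(half_width) - dist.cdf(-half_width)
print(prob)                                  # ~3.989 * 0.002 ~ 0.008: a genuine probability
```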
Intuitively, we can think of these intervals as starting from relatively large widths such as 0.1 and gradually decreasing to smaller widths such as 0.005. As you can see from the plot below, the plot becomes more fine-grained and looks more "normal" as the intervals get smaller.
```
def prob_density(step,mu=0,sigma=0.1):
x = np.arange(-0.5, 0.5, step)
y = np.array([sp.stats.norm.pdf(i, loc=mu, scale=sigma) for i in x])
sm_df = pd.DataFrame({'x': x, 'y': y})
g = (ggplot(sm_df, aes(x='x', y='y')) + geom_bar(stat='identity') +
theme_minimal() + xlab('Value') + ylab('Probability Density Function') +
ggtitle(f'Normal Distribution with Expectation={mu} and Variance={sigma**2:2f}'))
g.draw()
# interact(prob_density, step=widgets.FloatSlider(min=5e-3,max=1e-1,value=1e-1,step=1e-3,readout_format='.3f'))
#@title {run: "auto"}
step = 0.1 #@param {type:"slider", min:5e-3, max:0.1, step:1e-3}
prob_density(step)
```
## CS536: Perceptrons
#### Done by - Vedant Choudhary, vc389
In the usual way, we need data that we can fit and analyze using perceptrons. Consider generating data points (X, Y) in the following way:
- For $i = 1,\ldots,k-1$, let $X_i \sim N(0, 1)$ (i.e., each $X_i$ is an i.i.d. standard normal)
- For $i = k$, generate $X_k$ in the following way: let $D \sim \text{Exp}(1)$, and for a parameter $\epsilon > 0$ take
$X_k = (\epsilon + D)$ with probability 1/2
$X_k = -(\epsilon + D)$ with probability 1/2
The effect of this is that while $X_1,\ldots,X_{k-1}$ are i.i.d. standard normals, $X_k$ is distributed symmetrically with a gap (of size $2\epsilon$) around $X_k = 0$. We can then classify each point according to the following:
$Y = 1$ if $X_k > 0$
$Y = -1$ if $X_k < 0$
We see that the class of each data point is determined entirely by the value of the $X_k$ feature.
#### 1. Show that there is a perceptron that correctly classifies this data. Is this perceptron unique? What is the ‘best’ perceptron for this data set, theoretically?
**Solution:** Since the data is linearly separable with a gap of $2\epsilon$ around $X_k = 0$, a separating perceptron exists; for example, the weight vector that puts all of its weight on $X_k$ with zero bias ($w = e_k$, $b = 0$) classifies every point correctly. Such a perceptron is not unique, however: any hyperplane passing through the gap also separates the data, and rescaling the weights does not change the decision boundary. Theoretically, the 'best' perceptron for this dataset is the one that relies entirely on the last feature $X_k$, since the target is determined by that feature alone and this choice maximizes the worst-case margin.
```
# Importing required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pprint
from tqdm import tqdm
%matplotlib inline
# Creating X (feature) vectors for the data.
# Note: a single draw of D (made in the global cell below) is shared by every sample,
# and since X_k = +/-(epsilon + D) is symmetric in its two arguments, the argument
# order of epsilon and D does not affect the result.
def create_data(k, m, epsilon, D):
    X_k_minus_1 = np.random.normal(0, 1, (m, k-1))
    X_k = []
    for i in range(m):
        temp = np.random.choice(2, 1, p=[0.5, 0.5])
        if temp == 1:
            X_k.append(epsilon + D)
        else:
            X_k.append(-(epsilon + D))
    X_k = np.asarray(X_k).reshape((1, m))
    return np.concatenate((X_k_minus_1, X_k.T), axis=1)
# Creating target column for the data
def create_y(X, m):
y = []
for i in range(m):
if X[i][-1] > 0:
y.append(1)
else:
y.append(-1)
return y
# Combining all the sub data points into a dataframe
def create_dataset(k, m, epsilon, D):
X = np.asarray(create_data(k, m, epsilon, D))
y = np.asarray(create_y(X, m)).reshape((m,1))
# print(X.shape,y.shape)
# Training data is an appended version of X and y arrays
data = pd.DataFrame(np.append(X, y, axis=1), columns=["X" + str(i) for i in range(1,k+1)]+['Y'])
return data
# Global Variables - k = 20, m = 100, epsilon = 1
k, m, epsilon = 20, 100, 1
D = float(np.random.exponential(1, 1))
train_data = create_dataset(k, m, epsilon, D)
train_data.head()
```
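As a quick check of the theoretical answer from problem 1 (a small sketch using only the `train_data` generated above): a perceptron whose weight vector puts all of its mass on the last feature, with zero bias, classifies every point correctly.
```
# Ideal perceptron from problem 1: w = e_k (all weight on X_k), b = 0
w_ideal = np.zeros(k)
w_ideal[-1] = 1.0
X_check = train_data.iloc[:, :-1].values
y_check = train_data.iloc[:, -1].values
preds = np.where(X_check @ w_ideal > 0, 1, -1)
print('accuracy of the ideal perceptron:', (preds == y_check).mean())  # expected: 1.0
```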
#### 2. We want to consider the problem of learning perceptrons from data sets. Generate a set of data of size m = 100 with k = 20, $\epsilon$ = 1
##### - Implement the perceptron learning algorithm. This data is separable, so the algorithm will terminate. How does the output perceptron compare to your theoretical answer in the previous problem?
```
# Class for Perceptron
class Perceptron():
def __init__(self):
pass
'''
Calculates the sign of the predicted value
Input: dot product (X.w + b)
Return: Predicted sign of f_x
'''
def sign_function(self, data_vec):
        # Note: this thresholds at 1 rather than 0; a conventional sign function would
        # use `val >= 0`. The threshold is kept as-is to match the results reported below.
        return np.array([1 if val >= 1 else -1 for val in data_vec])[:, np.newaxis]
'''
Perceptron learning algorithm according to the notes posted
Input: dataset
Return: final weights and biases, along with number of steps for convergence
and upper bound of theoretical convergence
'''
def pla(self, data):
X = np.asarray(data.iloc[:,:-1])
y = np.asarray(data.iloc[:,-1:])
num_samples, num_features = X.shape
# Initialize weight and bias parameters
self.w = np.zeros(shape=(num_features, 1))
self.bias = 0
count_till_solution = 0
f_x = [0]*num_samples
i = 0
theoretical_termination = []
while True:
mismatch = 0
for i in range(num_samples):
# Calculate the mapping function f(x)
f_x[i] = float(self.sign_function(np.dot(X[i].reshape((num_features, 1)).T, self.w) + self.bias))
# Compute weights if f_x != y
if float(f_x[i]) != float(y[i]):
mismatch += 1
self.w += np.dot(X[i].reshape((num_features, 1)), y[i].reshape((1,1)))
self.bias += y[i]
count_till_solution += 1
min_margin = 99999
for i in range(num_samples):
margin = abs(np.dot(self.w.T, X[i].reshape(-1,1))/(np.linalg.norm(self.w)))
if margin < min_margin:
min_margin = margin
theoretical_termination.append(int(1/(min_margin**2)))
f_x = np.asarray(f_x).reshape((num_samples, 1))
i += 1
if (np.array_equal(y, f_x)) or (mismatch >= 0.3*num_samples and count_till_solution >= 5000):
break
return self.w, self.bias, count_till_solution, max(theoretical_termination)
'''
Predicts the target value based on a data vector
Input - a single row of dataset or a single X vector
Return - predicted value
'''
def predict(self, instance_data):
instance_data = np.asarray(instance_data)
prediction = self.sign_function(np.dot(self.w.T, instance_data.reshape((len(instance_data),1))) + self.bias)
return prediction
'''
Predicts the target value and then calculates error based on the predictions
Input - dataset, decision tree built
Return - error
'''
def fit(self, data):
error = 0
for i in range(len(data)):
prediction = self.predict(data.iloc[i][:-1])
if prediction != data.iloc[i][-1]:
print("Not equal")
error += 1
return error/len(data)
perceptron = Perceptron()
final_w, final_b, num_steps, theoretical_steps = perceptron.pla(train_data)
print("Final weights:\n",final_w)
print("Final bias:\n", final_b)
print("Number of steps till convergence: \n", num_steps)
print("Theoretical number of steps till convergence can be found for linear separation: ", theoretical_steps)
error = perceptron.fit(train_data)
error
plt.plot(np.linspace(0, 20, 20), list(final_w))
plt.title("Weight vector by feature")
plt.xlabel("Feature number")
plt.ylabel("Weights")
plt.show()
```
**Solution:** On implementing the perceptron learning algorithm on the dataset provided, we see that it matches our theoretical answer. The last feature has the highest weight associated with it (as can be seen from the graph generated above). This is because the data is created such that the target value depends solely on the last feature's value.
#### 3. For any given data set, there may be multiple separators with multiple margins - but for our data set, we can effectively control the size of the margin with the parameter $\epsilon$ - the bigger this value, the bigger the margin of our separator.
#### – For m = 100, k = 20, generate a data set for a given value of $\epsilon$ and run the learning algorithm to completion. Plot, as a function of $\epsilon$ ∈ [0, 1], the average or typical number of steps the algorithm needs to terminate. Characterize the dependence.
```
def varied_margin():
k, m = 20, 100
epsilon = list(np.arange(0, 1.05, 0.02))
avg_steps = []
for i in tqdm(range(len(epsilon))):
steps = []
for j in range(100):
train_data = create_dataset(k, m, epsilon[i], D)
perceptron = Perceptron()
final_w, final_b, num_steps, theoretical_steps = perceptron.pla(train_data)
steps.append(num_steps)
avg_steps.append(sum(steps)/len(steps))
plt.plot(epsilon, avg_steps)
plt.title("Number of steps w.r.t. margin")
plt.xlabel("Margin value")
plt.ylabel("#Steps")
plt.show()
varied_margin()
```
**Solution:** On plotting the average number of steps needed for termination on linearly separable data against $\epsilon$, we observe that the bigger the margin, the fewer steps the perceptron needs to terminate. This dependence is explained by the Perceptron Convergence Theorem: if the data is linearly separable, the perceptron algorithm will find a linear classifier that classifies all data correctly, and the number of updates is bounded by a quantity inversely proportional to the square of the margin.
This means that as the margin increases, the number of steps to convergence decreases.
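As a rough illustration of that bound (a sketch using the `train_data` generated earlier): the classical convergence bound says the number of updates is at most $(R/\gamma)^2$, where $R$ is the largest sample norm and $\gamma$ is the margin of a unit-norm separator. For our data the separator $w = e_k$, $b = 0$ gives $\gamma = \min_i |X_{k,i}|$, which grows with $\epsilon$ and therefore shrinks the bound.
```
X_check = train_data.iloc[:, :-1].values
R = np.linalg.norm(X_check, axis=1).max()     # radius of the data
gamma = np.abs(X_check[:, -1]).min()          # margin of the separator w = e_k, b = 0
print(f'R = {R:.3f}, gamma = {gamma:.3f}, bound (R/gamma)^2 = {(R / gamma) ** 2:.1f}')
```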
#### 4. One of the nice properties of the perceptron learning algorithm (and perceptrons generally) is that learning the weight vector w and bias value b is typically independent of the ambient dimension. To see this, consider the following experiment:
#### – Fixing m = 100, $\epsilon$ = 1, consider generating a data set on k features and running the learning algorithm on it. Plot, as a function k (for k = 2, . . . , 40), the typical number of steps to learn a perceptron on a data set of this size. How does the number of steps vary with k? Repeat for m = 1000.
```
def varied_features(m):
epsilon = 1
D = float(np.random.exponential(1, 1))
k = list(np.arange(2, 40, 1))
steps = []
for i in range(len(k)):
train_data = create_dataset(k[i], m, epsilon, D)
perceptron = Perceptron()
final_w, final_b, num_steps, theoretical_steps = perceptron.pla(train_data)
steps.append(num_steps)
plt.plot(k, steps)
plt.title("Number of steps w.r.t. features")
plt.xlabel("#Features")
plt.ylabel("#Steps")
plt.show()
varied_features(100)
varied_features(1000)
```
**Solution:** The number of steps needed for the perceptron to converge on linearly separable data is largely independent of the number of features. This is shown by the above experiment too. In this run there is essentially no change in the number of steps; some other runs showed small, apparently random changes of only a step or so. We cannot establish any trend in convergence with respect to the number of features.
#### 5. As shown in class, the perceptron learning algorithm always terminates in finite time - if there is a separator. Consider generating non-separable data in the following way: generate each $X_1, . . . , X_k$ as i.i.d. standard normals N(0, 1). Define Y by
$$Y = \begin{cases} 1 & \text{if } \sum_{i=1}^k X_i^2 \ge k \\ -1 & \text{otherwise} \end{cases}$$
```
def create_non_separable_data(k, m):
X = np.random.normal(0, 1, (m,k))
y = []
for i in range(m):
total = 0
for j in range(k):
total += X[i][j]**2
if total >= k:
y.append(1)
else:
y.append(-1)
return X, y
def create_non_separable_dataset(k, m):
X, y = create_non_separable_data(k, m)
X = np.asarray(X)
y = np.asarray(y).reshape((m,1))
# Training data is an appended version of X and y arrays
data = pd.DataFrame(np.append(X, y, axis=1), columns=["X" + str(i) for i in range(1,k+1)]+['Y'])
return data
k, m = 2, 100
train_ns_data = create_non_separable_dataset(k, m)
train_ns_data.head()
perceptron2 = Perceptron()
final_w2, final_b2, num_steps2, theoretical_steps = perceptron2.pla(train_ns_data)
X1 = train_ns_data.iloc[:, 0]
X2 = train_ns_data.iloc[:, 1]
y2 = train_ns_data.iloc[:, -1]
plt.scatter(X1, X2, c=y2)
plt.title("Dataset")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.show()
```
The plot above shows the data generated from the new rules for creating non-separable data. As can be seen, this data cannot be linearly separated by a perceptron. A kernel method (or an explicit non-linear feature transformation) has to be applied to this data to find a separating hyperplane.
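For example, here is a minimal sketch (an explicit feature map rather than a full kernelised perceptron): mapping each feature to its square makes the data above linearly separable, since the labelling rule $\sum_i X_i^2 \ge k$ becomes a linear threshold in the squared features.
```
# Map x -> x^2: in this space the rule sum(x_i^2) >= k is a linear threshold
X_ns, y_ns = create_non_separable_data(k, m)
X_sq = np.asarray(X_ns) ** 2
w_lin = np.ones(k)      # weights of 1 with bias -k separate the transformed data exactly
b_lin = -k
preds = np.where(X_sq @ w_lin + b_lin >= 0, 1, -1)
print('accuracy in the squared-feature space:', (preds == np.asarray(y_ns)).mean())  # expected: 1.0
```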
```
def plot_hyperplane(x1, x2, y, w, b):
slope = -w[0]/w[1]
intercept = -b/w[1]
x_hyperplane = np.linspace(-3,3,20)
y_hyperplane = slope*x_hyperplane + intercept
plt.scatter(x1, x2, c=y)
plt.plot(x_hyperplane, y_hyperplane, 'b-')
plt.title("Dataset with fitted hyperplane")
plt.xlabel("First feature")
plt.ylabel("Second feature")
plt.show()
X2_1 = train_ns_data.iloc[:,:-2]
X2_2 = train_ns_data.iloc[:,1:-1]
y2 = train_ns_data.iloc[:,-1:]
plot_hyperplane(X2_1, X2_2, y2, final_w2, final_b2)
```
**Solution:** For linearly non-separable data, the perceptron is not a good algorithm to use because it will never converge. Theoretically, it is possible to find an upper bound on the number of steps required to converge (if the data is linearly separable), but this is hard to use in practice because computing the bound requires already knowing a separating weight vector.
Another thing to note is that, even if there is convergence, the number of steps needed might be very large, which raises a problem of computational cost.
For this assignment, I have established a heuristic: if the mismatch percentage is approximately 30% of the total number of samples and there have been more than 5,000 updates (the threshold used in the code above), then the data is likely not linearly separable. My reasoning is straightforward: if 30% of the data is still mismatched after that many updates, the mismatches are likely to continue for a long time, which is not computationally feasible.
# Chapter 5: Linear Regression
```
# Install required libraries
!pip install japanize_matplotlib | tail -n 1
!pip install torchviz | tail -n 1
!pip install torchinfo | tail -n 1
# Import required libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import japanize_matplotlib
from IPython.display import display
import torch
import torch.nn as nn
import torch.optim as optim
from torchviz import make_dot
# Change the default font size
plt.rcParams['font.size'] = 14
# Change the default figure size
plt.rcParams['figure.figsize'] = (6,6)
# Show the grid by default
plt.rcParams['axes.grid'] = True
# Display precision for numpy floats
np.set_printoptions(suppress=True, precision=4)
```
## 5.3 Linear Functions (nn.Linear)
### A linear function with 1 input and 1 output
```
# Fix the random seed
torch.manual_seed(123)
# Define a linear function with 1 input and 1 output
l1 = nn.Linear(1, 1)
# Display the linear function
print(l1)
# Display parameter names, values and shapes
for param in l1.named_parameters():
    print('name: ', param[0])
    print('tensor: ', param[1])
    print('shape: ', param[1].shape)
# Set initial values
nn.init.constant_(l1.weight, 2.0)
nn.init.constant_(l1.bias, 1.0)
# Check the result
print(l1.weight)
print(l1.bias)
# Generate test data
# Define x_np as a numpy array
x_np = np.arange(-2, 2.1, 1)
# Convert to a tensor
x = torch.tensor(x_np).float()
# Reshape to (N, 1)
x = x.view(-1,1)
# Check the result
print(x.shape)
print(x)
# Test the linear function
y = l1(x)
print(y.shape)
print(y.data)
```
### A linear function with 2 inputs and 1 output
```
# Define a linear function with 2 inputs and 1 output
l2 = nn.Linear(2, 1)
# Set initial values
nn.init.constant_(l2.weight, 1.0)
nn.init.constant_(l2.bias, 2.0)
# Check the result
print(l2.weight)
print(l2.bias)
# 2-dimensional numpy array
x2_np = np.array([[0, 0], [0, 1], [1, 0], [1,1]])
# Convert to a tensor
x2 = torch.tensor(x2_np).float()
# Check the result
print(x2.shape)
print(x2)
# Compute the function values
y2 = l2(x2)
# Check the shape
print(y2.shape)
# Check the values
print(y2.data)
```
### A linear function with 2 inputs and 3 outputs
```
# Define a linear function with 2 inputs and 3 outputs
l3 = nn.Linear(2, 3)
# Set initial values
nn.init.constant_(l3.weight[0,:], 1.0)
nn.init.constant_(l3.weight[1,:], 2.0)
nn.init.constant_(l3.weight[2,:], 3.0)
nn.init.constant_(l3.bias, 2.0)
# Check the result
print(l3.weight)
print(l3.bias)
# Compute the function values
y3 = l3(x2)
# Check the shape
print(y3.shape)
# Check the values
print(y3.data)
```
## 5.4 Defining a Model with a Custom Class
```
# Model class definition
class Net(nn.Module):
    def __init__(self, n_input, n_output):
        # Call the initializer of the parent class nn.Module
        super().__init__()
        # Define the output layer
        self.l1 = nn.Linear(n_input, n_output)
    # Define the prediction function
    def forward(self, x):
        x1 = self.l1(x) # linear regression
        return x1
# Dummy input
inputs = torch.ones(100,1)
# Create an instance (a linear model with 1 input and 1 output)
n_input = 1
n_output = 1
net = Net(n_input, n_output)
# Prediction
outputs = net(inputs)
```
## 5.6 Data Preparation
We use the "Boston housing dataset", a publicly available dataset that is commonly used for regression.
https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html
The original dataset is meant to predict real-estate prices from 13 input features,
but to build the simplest possible model, a single-input ("simple regression") model, we extract only the ``RM`` feature.
```
# Prepare the training data
# Import the library
from sklearn.datasets import load_boston
# Load the data
boston = load_boston()
# Get the input data and the target data
x_org, yt = boston.data, boston.target
# Get the list of feature names
feature_names = boston.feature_names
# Check the result
print('original data', x_org.shape, yt.shape)
print('feature names: ', feature_names)
# Narrow the data down to the RM column only
x = x_org[:,feature_names == 'RM']
print('after selection', x.shape)
print(x[:5,:])
# Display the target data y
print('target data')
print(yt[:5])
# Draw a scatter plot
plt.scatter(x, yt, s=10, c='b')
plt.xlabel('number of rooms')
plt.ylabel('price')
plt.title('scatter plot of rooms vs. price')
plt.show()
```
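Note: `load_boston` was deprecated in scikit-learn 1.0 and removed in 1.2, so the cell above may fail on recent versions. The following sketch (based on the alternative suggested in the scikit-learn deprecation notice) builds the same `x_org`, `yt` and `feature_names` from the original source.
```
import numpy as np
import pandas as pd

data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
x_org = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
yt = raw_df.values[1::2, 2]
feature_names = np.array(['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
                          'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT'])
print(x_org.shape, yt.shape)
```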
## 5.7 Model Definition
```
# Variable definitions
# Number of input dimensions
n_input= x.shape[1]
# Number of output dimensions
n_output = 1
print(f'input dimensions: {n_input}  output dimensions: {n_output}')
# Machine learning model (prediction model) class definition
class Net(nn.Module):
    def __init__(self, n_input, n_output):
        # Call the initializer of the parent class nn.Module
        super().__init__()
        # Define the output layer
        self.l1 = nn.Linear(n_input, n_output)
        # Initialize all parameters to 1
        # (to match the conditions in "Mathematics of Deep Learning")
        nn.init.constant_(self.l1.weight, 1.0)
        nn.init.constant_(self.l1.bias, 1.0)
    # Define the prediction function
    def forward(self, x):
        x1 = self.l1(x) # linear regression
        return x1
# Create an instance
# Linear model with 1 input and 1 output
net = Net(n_input, n_output)
# Check the parameters inside the model
# Use the named_parameters function to get the model's variables;
# the first element of each result is the name, the second is the value
#
# We can see there are l1.weight and l1.bias
# Both are initialized to 1.0
for parameter in net.named_parameters():
    print(f'name: {parameter[0]}')
    print(f'value: {parameter[1].data}')
# Use the parameters function to get the list of parameters
for parameter in net.parameters():
    print(parameter)
```
### Checking the Model
```
# Display the model overview
print(net)
# Display the model summary
from torchinfo import summary
summary(net, (1,))
```
### Loss Function and Optimizer
```
# Loss function: mean squared error
criterion = nn.MSELoss()
# Learning rate
lr = 0.01
# Optimizer: gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
```
## 5.8 Gradient Descent
```
# Convert the input x and the target yt into tensors
inputs = torch.tensor(x).float()
labels = torch.tensor(yt).float()
# Check the dimensions
print(inputs.shape)
print(labels.shape)
# Reshape labels into an (N,1) matrix for the loss computation
labels1 = labels.view((-1, 1))
# Check the dimensions
print(labels1.shape)
# Prediction
outputs = net(inputs)
# Loss computation
loss = criterion(outputs, labels1)
# Get the loss value
print(f'{loss.item():.5f}')
# Visualize the computation graph of the loss
g = make_dot(loss, params=dict(net.named_parameters()))
display(g)
# Prediction
outputs = net(inputs)
# Loss computation
loss = criterion(outputs, labels1)
# Gradient computation
loss.backward()
# The gradients are now available
print(net.l1.weight.grad)
print(net.l1.bias.grad)
# Update the parameters
optimizer.step()
# The parameter values have changed
print(net.l1.weight)
print(net.l1.bias)
# Reset the gradients
optimizer.zero_grad()
# All gradients are now zero
print(net.l1.weight.grad)
print(net.l1.bias.grad)
```
### Iterative Training
```
# Learning rate
lr = 0.01
# Create an instance (initializes the parameter values)
net = Net(n_input, n_output)
# Loss function: mean squared error
criterion = nn.MSELoss()
# Optimizer: gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
# Number of iterations
num_epochs = 50000
# For recording evaluation results (loss values only)
history = np.zeros((0,2))
# Main training loop
for epoch in range(num_epochs):
    # Reset the gradients
    optimizer.zero_grad()
    # Prediction
    outputs = net(inputs)
    # Loss computation
    # Divided by 2 to match "Mathematics of Deep Learning"
    loss = criterion(outputs, labels1) / 2.0
    # Gradient computation
    loss.backward()
    # Update the parameters
    optimizer.step()
    # Record progress every 100 iterations
    if ( epoch % 100 == 0):
        history = np.vstack((history, np.array([epoch, loss.item()])))
        print(f'Epoch {epoch} loss: {loss.item():.5f}')
```
## 5.9 Checking the Results
```
# Initial and final loss values
print(f'initial loss: {history[0,1]:.5f}')
print(f'final loss: {history[-1,1]:.5f}')
# Plot the learning curve (loss)
# excluding the first entry
plt.plot(history[1:,0], history[1:,1], 'b')
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('learning curve (loss)')
plt.show()
# Compute the regression line
# minimum and maximum of x
xse = np.array((x.min(), x.max())).reshape(-1,1)
Xse = torch.tensor(xse).float()
with torch.no_grad():
    Yse = net(Xse)
print(Yse.numpy())
# Draw the scatter plot and the regression line
plt.scatter(x, yt, s=10, c='b')
plt.xlabel('number of rooms')
plt.ylabel('price')
plt.plot(Xse.data, Yse.data, c='k')
plt.title('scatter plot and regression line')
plt.show()
```
## 5.10 Extending to Multiple Regression
```
# Add the LSTAT (percentage of lower-status population) column
x_add = x_org[:,feature_names == 'LSTAT']
x2 = np.hstack((x, x_add))
# Display the shape
print(x2.shape)
# Display the input data x
print(x2[:5,:])
# Now the number of input dimensions is 2
n_input = x2.shape[1]
print(n_input)
# Create a model instance
net = Net(n_input, n_output)
# Check the parameters inside the model
# l1.weight now has 2 dimensions
for parameter in net.named_parameters():
    print(f'name: {parameter[0]}')
    print(f'value: {parameter[1].data}')
# Display the model overview
print(net)
# Display the model summary
from torchinfo import summary
summary(net, (2,))
# Convert the input x2 into a tensor
# labels and labels1 are reused from before
inputs = torch.tensor(x2).float()
```
### Iterative Training
```
# Initialization
# Learning rate
lr = 0.01
# Create an instance (initializes the parameter values)
net = Net(n_input, n_output)
# Loss function: mean squared error
criterion = nn.MSELoss()
# Optimizer: gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
# Number of iterations
num_epochs = 50000
# For recording evaluation results (loss values only)
history = np.zeros((0,2))
# Main training loop
for epoch in range(num_epochs):
    # Reset the gradients
    optimizer.zero_grad()
    # Prediction
    outputs = net(inputs)
    # Loss computation
    # Divided by 2 to match "Mathematics of Deep Learning"
    loss = criterion(outputs, labels1) / 2.0
    # Gradient computation
    loss.backward()
    # Update the parameters
    optimizer.step()
    # Record progress every 100 iterations
    if ( epoch % 100 == 0):
        history = np.vstack((history, np.array([epoch, loss.item()])))
        print(f'Epoch {epoch} loss: {loss.item():.5f}')
```
## 5.11 Changing the Learning Rate
```
# Number of iterations
#num_epochs = 50000
num_epochs = 2000
# Learning rate
#lr = 0.01
lr = 0.001
# Create a model instance
net = Net(n_input, n_output)
# Loss function: mean squared error
criterion = nn.MSELoss()
# Optimizer: gradient descent
optimizer = optim.SGD(net.parameters(), lr=lr)
# Main training loop
# For recording evaluation results (loss values only)
history = np.zeros((0,2))
for epoch in range(num_epochs):
    # Reset the gradients
    optimizer.zero_grad()
    # Prediction
    outputs = net(inputs)
    # Loss computation
    loss = criterion(outputs, labels1) / 2.0
    # Gradient computation
    loss.backward()
    # Update the parameters
    optimizer.step()
    # Record progress every 100 iterations
    if ( epoch % 100 == 0):
        history = np.vstack((history, np.array([epoch, loss.item()])))
        print(f'Epoch {epoch} loss: {loss.item():.5f}')
# Initial and final loss values
print(f'initial loss: {history[0,1]:.5f}')
print(f'final loss: {history[-1,1]:.5f}')
# Plot the learning curve (loss)
plt.plot(history[:,0], history[:,1], 'b')
plt.xlabel('iterations')
plt.ylabel('loss')
plt.title('learning curve (loss)')
plt.show()
```
# XGBoost vs LightGBM
In this notebook we collect the results from all the experiments and reports the comparative difference between XGBoost and LightGBM
```
import matplotlib.pyplot as plt
import nbformat
import json
from toolz import pipe, juxt
import pandas as pd
import seaborn
from toolz import curry
from bokeh.io import show, output_notebook
from bokeh.charts import Bar
from bokeh.models.renderers import GlyphRenderer
from bokeh.models.glyphs import Rect
from bokeh.models import Range1d
from toolz import curry
from bokeh.io import export_svgs
from IPython.display import SVG, display
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
output_notebook()
```
We are going to read the results from the following notebooks
```
notebooks = {
'Airline':'01_airline.ipynb',
'Airline_GPU': '01_airline_GPU.ipynb',
'BCI': '02_BCI.ipynb',
'BCI_GPU': '02_BCI_GPU.ipynb',
'Football': '03_football.ipynb',
'Football_GPU': '03_football_GPU.ipynb',
'Planet': '04_PlanetKaggle.ipynb',
    'Planet_GPU': '04_PlanetKaggle_GPU.ipynb',
'Fraud': '05_FraudDetection.ipynb',
'Fraud_GPU': '05_FraudDetection_GPU.ipynb',
'HIGGS': '06_HIGGS.ipynb',
'HIGGS_GPU': '06_HIGGS_GPU.ipynb'
}
def read_notebook(notebook_name):
with open(notebook_name) as f:
return nbformat.read(f, as_version=4)
def results_cell_from(nb):
for cell in nb.cells:
if cell['cell_type']=='code' and cell['source'].startswith('# Results'):
return cell
def extract_text(cell):
return cell['outputs'][0]['text']
@curry
def remove_line_with(match_str, json_string):
return '\n'.join(filter(lambda x: match_str not in x, json_string.split('\n')))
def process_nb(notebook_name):
return pipe(notebook_name,
read_notebook,
results_cell_from,
extract_text,
remove_line_with('total RAM usage'),
json.loads)
```
Here we collect the results from all the experiment notebooks. The method simply searches each notebook for a code cell whose source starts with `# Results` and then reads that cell's output as JSON.
```
results = {nb_key:process_nb(nb_name) for nb_key, nb_name in notebooks.items()}
results
datasets = [k for k in results.keys()]
print(datasets)
algos = [a for a in results[datasets[0]].keys()]
print(algos)
```
We wish to compare LightGBM and XGBoost both in terms of performance as well as how long they took to train.
```
def average_performance_diff(dataset):
lgbm_series = pd.Series(dataset['lgbm']['performance'])
try:
perf = 100*((lgbm_series-pd.Series(dataset['xgb']['performance']))/lgbm_series).mean()
except KeyError:
perf = None
return perf
def train_time_ratio(dataset):
try:
val = dataset['xgb']['train_time']/dataset['lgbm']['train_time']
except KeyError:
val = None
return val
def train_time_ratio_hist(dataset):
try:
val = dataset['xgb_hist']['train_time']/dataset['lgbm']['train_time']
except KeyError:
val = None
return val
def test_time_ratio(dataset):
try:
val = dataset['xgb']['test_time']/dataset['lgbm']['test_time']
except KeyError:
val = None
return val
metrics = juxt(average_performance_diff, train_time_ratio, train_time_ratio_hist, test_time_ratio)
res_per_dataset = {dataset_key:metrics(dataset) for dataset_key, dataset in results.items()}
results_df = pd.DataFrame(res_per_dataset, index=['Perf. Difference(%)',
'Train Time Ratio',
'Train Time Ratio Hist',
'Test Time Ratio']).T
results_df
results_gpu = results_df.loc[[idx for idx in results_df.index if idx.endswith('GPU')]]
results_cpu = results_df.loc[~results_df.index.isin(results_gpu.index)]
```
Plot of train time ratio for CPU experiments.
```
data = {
'Ratio': results_cpu['Train Time Ratio'].values.tolist() + results_cpu['Train Time Ratio Hist'].values.tolist(),
'label': results_cpu.index.values.tolist()*2,
'group': ['xgb/lgb']*len(results_cpu.index.values) + ['xgb_hist/lgb']*len(results_cpu.index.values)
}
bar = Bar(data, values='Ratio', agg='mean', label='label', group='group',
plot_width=600, plot_height=400, bar_width=0.7, color=['#5975a4','#99ccff'], legend='top_right')
bar.axis[0].axis_label=''
bar.axis[1].axis_label='Train Time Ratio (XGBoost/LightGBM)'
bar.axis[1].axis_label_text_font_size='12pt'
bar.y_range = Range1d(0, 30)
bar.toolbar_location='above'
bar.legend[0].visible=True
show(bar)
bar.output_backend = "svg"
export_svgs(bar, filename="xgb_vs_lgbm_train_time.svg")
display(SVG('xgb_vs_lgbm_train_time.svg'))
```
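Note that `bokeh.charts` was removed in later Bokeh releases, so the plotting cells above only run on old Bokeh versions. A minimal alternative with pandas/matplotlib (a sketch assuming the `results_cpu` DataFrame built above) produces an equivalent grouped bar chart:
```
ax = results_cpu[['Train Time Ratio', 'Train Time Ratio Hist']].plot.bar(
    figsize=(8, 5), color=['#5975a4', '#99ccff'], rot=45)
ax.set_ylabel('Train Time Ratio (XGBoost/LightGBM)')
ax.set_ylim(0, 30)
ax.legend(['xgb/lgb', 'xgb_hist/lgb'])
plt.tight_layout()
plt.show()
```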
Plot of train time ratio for GPU experiments.
```
data = {
'Ratio': results_gpu['Train Time Ratio'].values.tolist() + results_gpu['Train Time Ratio Hist'].values.tolist(),
'label': results_gpu.index.values.tolist()*2,
'group': ['xgb/lgb']*len(results_gpu.index.values) + ['xgb_hist/lgb']*len(results_gpu.index.values)
}
bar = Bar(data, values='Ratio', agg='mean', label='label', group='group',
plot_width=600, plot_height=400, bar_width=0.5, color=['#ff8533','#ffd1b3'], legend='top_right')
bar.axis[0].axis_label=''
bar.y_range = Range1d(0, 30)
bar.axis[1].axis_label='Train Time Ratio (XGBoost/LightGBM)'
bar.axis[1].axis_label_text_font_size='12pt'
bar.toolbar_location='above'
bar.legend[0].visible=True
show(bar)
bar.output_backend = "svg"
export_svgs(bar, filename="xgb_vs_lgbm_train_time_gpu.svg")
display(SVG('xgb_vs_lgbm_train_time_gpu.svg'))
data = {
'Perf. Difference(%)': results_df['Perf. Difference(%)'].values,
'label': results_df.index.values
}
bar = Bar(data, values='Perf. Difference(%)', agg='mean', label=['label'],
plot_width=600, plot_height=400, bar_width=0.7, color='#5975a4')
bar.axis[0].axis_label=''
bar.axis[1].axis_label='Perf. Difference(%)'
bar.toolbar_location='above'
bar.legend[0].visible=False
show(bar)
bar.output_backend = "svg"
export_svgs(bar, filename="xgb_vs_lgbm_performance.svg")
display(SVG('xgb_vs_lgbm_performance.svg'))
```
For the speed results, we can see that LightGBM is on average about 5 times faster than both the CPU and GPU versions of XGBoost and of XGBoost histogram. Regarding performance, LightGBM is sometimes better and sometimes worse.
Analyzing the results of XGBoost on CPU, we can see that XGBoost histogram is faster than standard XGBoost on the Airline, Fraud and HIGGS datasets, but much slower on the Planet and BCI datasets. In these two cases there is a memory overhead due to the high number of features. In the case of the football dataset, the histogram implementation is slightly slower; we believe a small memory overhead may be at play there as well.
Finally, if we look at the results of XGBoost on GPU, we see that several values are missing. This is due to out-of-memory errors in the standard version. In our experiments we observed that XGBoost's memory consumption is around 10 times higher than LightGBM's and 5 times higher than XGBoost histogram's. We see that the histogram version is faster except on the BCI dataset, where there could be a memory overhead as in the CPU version.
```
import os
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
import GCode
import GRBL
# Flip a 2D array. Effectively reversing the path.
flip2 = np.array([
[0, 1],
[1, 0],
])
flip2
# Flip a 2x3 array. Effectively reversing the path.
flip3 = np.array([
[0, 0, 1],
[0, 1, 0],
[1, 0, 0],
])
flip3
A = np.array([
[1, 2],
[3, 4],
])
A
np.matmul(flip2, A)
B = np.array([
[1, 2],
[3, 4],
[5, 6],
])
B
np.matmul(flip3, B)
B.shape[0]
np.eye(B.shape[0])
flip_n_reverseit = np.eye(B.shape[0])[:, ::-1]
flip_n_reverseit
def reverse(self, points):
flip_n_reverseit = np.eye(points.shape[0])[:, ::-1]
return np.matmul(flip_n_reverseit, points)
reverse(None, B)
```
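The permutation-matrix approach above is mostly a linear-algebra exercise; the same reversal can be done (and presumably more cheaply) with plain NumPy slicing, as this small check shows:
```
# Reversing the row order with slicing gives the same result as the flip matrix
assert np.array_equal(reverse(None, B), B[::-1])
B[::-1]
```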
# Code:
Draw a 10 mm line from (0, 0) to (10, 0).
```
line_len = 10
line_n_points = 2
p = np.linspace(0, line_len, line_n_points, endpoint=True)
p
line_n_points = 3
p = np.linspace(0, line_len, line_n_points, endpoint=True)
p
line_n_points = 4
p = np.linspace(0, line_len, line_n_points, endpoint=True)
p
p
Y = 0
for X in np.linspace(0, line_len, line_n_points, endpoint=True):
    print(X, Y)  # body added so this exploratory loop actually runs
def HorzLine(X0=0, Xf=10, Y=0, n_points=2):
p = np.linspace(X0, Xf, n_points, endpoint=True)
line_points = np.array([
p,
Y*np.ones(p.shape),
])
return line_points.transpose()
HorzLine()
def VertLine(X=0, Y0=0, Yf=10, n_points=2):
p = np.linspace(Y0, Yf, n_points, endpoint=True)
line_points = np.array([
X*np.ones(p.shape),
p,
])
return line_points.transpose()
VertLine()
points = HorzLine(X0=0, Xf=10, Y=0, n_points=2)
points
line = GCode.Line(points=points)
line
line.__repr__()
prog_cfg={
"points": points
}
prog_cfg
line_cfg = {
"X0": 0,
"Xf": 10,
"Y": 0,
"n_points": 2
}
line_cfg
help(GCode.Line)
help(GCode.Program)
progs = list()
for n_points in range(2, 10):
line_cfg = {
"X0": 0,
"Xf": 10,
"Y": 0,
"n_points": n_points
}
points = HorzLine(**line_cfg)
line_cfg = {
"points": points,
"feed":120,
"power":128,
"dynamic_power": True,
}
line = GCode.Line(points=points)
prog_cfg={
"lines": [line, line],
"feed": 120
}
prog = GCode.Program(**prog_cfg)
progs.append(prog)
progs
for prog in progs:
print(len(prog.buffer))
for prog in progs:
prog.generate_gcode()
print(len(prog.buffer))
list(map(lambda prog: prog.generate_gcode(), progs))
list(map(lambda prog: len(prog.buffer), progs))
import threading
def concurrent_map(func, data):
"""
Similar to the bultin function map(). But spawn a thread for each argument
and apply `func` concurrently.
Note: unlike map(), we cannot take an iterable argument. `data` should be an
indexable sequence.
"""
N = len(data)
result = [None] * N
# wrapper to dispose the result in the right slot
def task_wrapper(i):
result[i] = func(data[i])
threads = [threading.Thread(target=task_wrapper, args=(i,)) for i in range(N)]
for t in threads:
t.start()
for t in threads:
t.join()
return result
concurrent_map(lambda prog: prog.generate_gcode(), progs)
concurrent_map(lambda prog: len(prog.buffer), progs)
concurrent_map(lambda prog: prog.__repr__(), progs)
concurrent_map(lambda prog: prog.dist, progs)
concurrent_map(lambda prog: prog.jog_dist, progs)
concurrent_map(lambda prog: prog.laserin_dist, progs)
m=concurrent_map(lambda prog: prog.laserin_dist, progs)
np.diff(m)
np.diff(m)==0
np.all(np.diff(m)==0)
assert(np.all(np.diff(m)==0))
flip2
reverse(None, progs[1].lines[0].points)
progs
```
# Supervised Learning
Supervised learning consists in learning the link between two datasets: the observed data X and an external variable y that we are trying to predict, usually called “target” or “labels”. Most often, y is a 1D array of length n_samples.
If the prediction task is to classify the observations in a set of finite labels, in other words to “name” the objects observed, the task is said to be a **classification** task. On the other hand, if the goal is to predict a continuous target variable, it is said to be a **regression** task.
Clustering, which we've just done with K means, is a type of *unsupervised* learning similar to classification. Here, the difference is that we'll be using the labels in our data in our algorithm.
## Classification
"The problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known." (Wikipedia)
We've seen one classification example already, the iris dataset. In this dataset, iris flowers are classified based on their petal and sepal geometries.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
def pca_plot(data):
pca = PCA(n_components=2)
pca.fit(data.data)
data_pca = pca.transform(data.data)
for label in range(len(data.target_names)):
plt.scatter(data_pca[data.target==label, 0],
data_pca[data.target==label, 1],
label=data.target_names[label])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
from sklearn.datasets import load_iris
iris = load_iris()
pca_plot(iris)
```
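Since the labels are available, we can go one step further than visualisation and actually fit a classifier. Here is a minimal sketch (using scikit-learn's logistic regression; any classifier would do):
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
```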
Another dataset with more features is the wine classification dataset, which tries to determine the original cultivar (plant variety) of three different Italian wines. A chemical analysis measured the following features:
1. Alcohol
2. Malic acid
3. Ash
4. Alcalinity of ash
5. Magnesium
6. Total phenols
7. Flavanoids
8. Nonflavanoid phenols
9. Proanthocyanins
10. Color intensity
11. Hue
12. OD280/OD315 of diluted wines
13. Proline
```
from sklearn.datasets import load_wine
wine = load_wine()
pca_plot(wine)
```
A final and more difficult dataset is a sample from the National Institute of Standards and Technology (NIST) dataset on handwritten numbers. A modified and larger version of this, Modified NIST or MNIST, is a current standard benchmark for state of the art machine learning algorithms. In this problem, each datapoint is an 8x8 pixel image (64 features) and the classification task is to label each image as the correct number.
```
from sklearn.datasets import load_digits
digits = load_digits()
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:8]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Label: %i' % label)
plt.show()
pca_plot(digits)
```
## Regression
"In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or 'predictors'). More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed." (Wikipedia)
In regression, each set of features doesn't correspond to a label but rather to a value. The task of the regression algorithm is to correctly predict this value based on the feature data. One way to think about regression and classification is that regression is continuous while classification is discrete.
Scikit learn also comes with a number of sample regression datasets.
In our example regression dataset, health metrics of diabetes patients were measured and then the progress of their diabetes was quantitatively measured after 1 year. The features are:
1. age
2. sex
3. body mass index
4. average blood pressure
+ 5-10 six blood serum measurements
```
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
y = diabetes.target
features = ["AGE", "SEX", "BMI", "BP", "BL1", "BL2", "BL3", "BL4", "BL5", "BL6"]
plt.figure(figsize=(20,20))
for i in range(10):
plt.subplot(4, 4, i + 1)
plt.scatter(diabetes.data[:, i], y, edgecolors=(0, 0, 0));
plt.title('Feature: %s' % features[i])
```
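To see what a regression model does with these features, here is a minimal sketch fitting ordinary least squares to the diabetes data and reporting the R² score on held-out data:
```
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    diabetes.data, y, test_size=0.25, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print('test R^2:', reg.score(X_test, y_test))
```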
<div class="alert alert-success">
<b>EXERCISE: UCI datasets</b>
<ul>
<li>
Many of these datasets originally come from the UCI Machine Learning Repository. Visit https://archive.ics.uci.edu/ml/index.php and select a dataset. What is the dataset describing? What are the features? Is it classification or regression? How many data samples are there?
</li>
</ul>
</div>
Lambda School Data Science
*Unit 2, Sprint 1, Module 3*
---
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
```
# Module Project: Ridge Regression
For this project, you'll return to the Tribeca condos dataset. But this time, you'll look at the _entire_ dataset and try to predict property sale prices.
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
## Directions
The tasks for this project are the following:
- **Task 1:** Import `csv` file using `wrangle` function.
- **Task 2:** Conduct exploratory data analysis (EDA), and modify the `wrangle` function to subset your dataset to one-family dwellings whose sale price is between \\$100,000 and \\$2,000,000.
- **Task 3:** Split data into feature matrix `X` and target vector `y`.
- **Task 4:** Split feature matrix `X` and target vector `y` into training and test sets.
- **Task 5:** Establish the baseline mean absolute error for your dataset.
- **Task 6:** Build and train a `OneHotEncoder`, and transform `X_train` and `X_test`.
- **Task 7:** Build and train a `LinearRegression` model.
- **Task 8:** Build and train a `Ridge` model.
- **Task 9:** Calculate the training and test mean absolute error for your `LinearRegression` model.
- **Task 10:** Calculate the training and test mean absolute error for your `Ridge` model.
- **Task 11:** Create a horizontal bar chart showing the 10 most influential features for your `Ridge` model.
**Note**
You should limit yourself to the following libraries for this project:
- `category_encoders`
- `matplotlib`
- `pandas`
- `sklearn`
# I. Wrangle Data
```
def wrangle(filepath):
# Import csv file
cols = ['BOROUGH', 'NEIGHBORHOOD',
'BUILDING CLASS CATEGORY', 'GROSS SQUARE FEET',
'YEAR BUILT', 'SALE PRICE', 'SALE DATE']
df = pd.read_csv(filepath, usecols=cols)
return df
filepath = DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv'
```
**Task 1:** Use the above `wrangle` function to import the `NYC_Citywide_Rolling_Calendar_Sales.csv` file into a DataFrame named `df`.
```
df = ...
```
**Task 2:** Modify the above `wrangle` function so that:
- The column `'SALE DATE'` becomes the `DatetimeIndex`.
- The dtype for the `'BOROUGH'` column is `object`, not `int`.
- The dtype for the `'SALE PRICE'` column is `int`, not `object`.
- The dataset includes only one-family dwellings (`BUILDING CLASS CATEGORY == '01 ONE FAMILY DWELLINGS'`).
- The dataset includes only properties whose sale price is between \\$100,000 and \\$2,000,000.
```
# Perform your exploratory data analysis here and
# modify the wrangle function above
```
# II. Split Data
**Task 3:** Split your dataset into the feature matrix `X` and the target vector `y`. You want to predict `'SALE_PRICE'`.
```
X = ...
y = ...
```
**Task 4:** Split `X` and `y` into a training set (`X_train`, `y_train`) and a test set (`X_test`, `y_test`).
- Your training set should include data from January to March 2019.
- Your test set should include data from April 2019.
```
X_train, y_train = ..., ...
X_test, y_test = ..., ...
```
# III. Establish Baseline
**Task 5:** Since this is a **regression** problem, you need to calculate the baseline mean absolute error for your model.
```
baseline_mae = ...
print('Baseline MAE:', baseline_mae)
```
# IV. Build Model
**Task 6:** Build and train a `OneHotEncoder` and then use it to transform `X_train` and `X_test`.
```
ohe = ...
XT_train = ...
XT_test = ...
```
**Task 7:** Build and train a `LinearRegression` model named `model_lr`. Remember to train your model using your _transformed_ feature matrix.
```
model_lr = ...
```
**Task 8:** Build and train a `Ridge` model named `model_r`. Remember to train your model using your _transformed_ feature matrix.
```
model_r = ...
```
# V. Check Metrics
**Task 9:** Check the training and test metrics for `model_lr`.
```
training_mae_lr = ...
test_mae_lr = ...
print('Linear Training MAE:', training_mae_lr)
print('Linear Test MAE:', test_mae_lr)
```
**Task 10:** Check the training and test metrics for `model_r`.
```
training_mae_r = ...
test_mae_r = ...
print('Ridge Training MAE:', training_mae_r)
print('Ridge Test MAE:', test_mae_r)
```
**Stretch Goal:** Calculate the training and test $R^2$ scores `model_r`.
```
# Caculate R^2 score
```
# IV. Communicate Results
**Task 11:** Create a horizontal bar chart that plots the 10 most important coefficients for `model_r`, sorted by absolute value. Your figure should look like our example from class:

**Note:** Your figure shouldn't be identical to the one above. Your model will have different coefficients since it's been trained on different data. Only the formatting should be the same.
# Simple ARIMAX
This code template is for time series analysis and forecasting, i.e., making predictions from historical, time-stamped data, with the help of the ARIMAX algorithm.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import mean_absolute_error, mean_squared_error
warnings.filterwarnings("ignore")
```
### Initialization
Filepath of CSV file
```
file_path = ""
```
Variable containing the date time column name of the Time Series data
```
date = ""
```
Target feature for prediction.
```
target = ""
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and we use the head function to display the first few rows.
```
df = pd.read_csv(file_path)
df.head()
```
### Data Preprocessing
Since the majority of machine learning models for time series forecasting don't handle string/categorical data or null values, we have to explicitly remove or replace null values. The snippet below defines a function that removes any rows containing null values and converts the string-typed date column in the dataset to a proper datetime type.
After the date conversion is done and the null values are dropped, we set the date column as the index.
```
def data_preprocess(df, target, date):
df = df.dropna(axis=0, how = 'any')
df[date] = pd.to_datetime(df[date])
df = df.set_index(date)
return df
df = data_preprocess(df,target,date)
df.head()
df.plot(figsize = (15,8))
plt.show()
```
### Seasonality decomposition
Since simple ARIMAX is meant for non-seasonal, stationary data, we need to check our time series and smooth it if necessary.
We use the Dickey-Fuller test for this check; if the ADF statistic is positive (well above the critical values), the series is treated as non-stationary here and is smoothed before modelling.
#### Dickey Fuller Test
The Dickey Fuller test is a common statistical test used to test whether a given Time series is stationary or not. The Augmented Dickey Fuller (ADF) test expands the Dickey-Fuller test equation to include high order regressive process in the model. We can implement the ADF test via the **adfuller()** function. It returns the following outputs:
1. adf : float
> The test statistic.
2. pvalue : float
> MacKinnon's approximate p-value based on MacKinnon (1994, 2010). It is used along with the test statistic to reject or accept the null hypothesis.
3. usedlag : int
> Number of lags considered for the test
4. critical values : dict
> Critical values for the test statistic at the 1 %, 5 %, and 10 % levels. Based on MacKinnon (2010).
For more information on the adfuller() function [click here](https://www.statsmodels.org/stable/generated/statsmodels.tsa.stattools.adfuller.html)
```
def dickeyFuller(df,target):
    # Applying the Dickey-Fuller test (adfuller expects a 1-D array)
    X = df[target].values
    result = adfuller(X)
    print('ADF Statistic: %f' % result[0])
    print('p-value: %f' % result[1])
    print('Number of lags used: %d' % result[2])
    print('Critical Values:')
    for key, value in result[4].items():
        print('\t%s: %.3f' % (key, value))
    # Smoothing the series if it looks non-stationary
    if result[0]>0:
        df[target] = df[target].rolling(12).mean()
        df = df.dropna()   # the rolling window leaves NaNs at the start
    return df
```
To remove the seasonality we use the rolling mean technique for smoothing our data and decomposing any seasonality.
This method provides rolling windows over the data. On the resulting windows, we can perform calculations using a statistical function (in this case the mean) in order to decompose the seasonality.
For more information about rolling function [click here](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html)
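As a quick, self-contained illustration of how a rolling window smooths a series (a toy example, separate from the workflow above):
```
import pandas as pd

# A 3-period rolling mean replaces each point with the average of itself
# and the two previous points; the first two values are NaN because the
# window is incomplete there.
s = pd.Series([10, 12, 14, 40, 16, 18])
print(s.rolling(3).mean())
```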
```
df = dickeyFuller(df,target)
```
### Autocorrelation Plot
We can calculate the correlation of time series observations with observations at previous time steps, called lags. Because the correlation is calculated between values of the same series at different times, this is called a serial correlation, or an autocorrelation.
A plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or the acronym ACF.
An autocorrelation plot shows whether the elements of a time series are positively correlated, negatively correlated, or independent of each other.
The plot shows the value of the autocorrelation function (acf) on the vertical axis ranging from –1 to 1.
There are vertical lines (a “spike”) corresponding to each lag and the height of each spike shows the value of the autocorrelation function for the lag.
[API](https://www.statsmodels.org/stable/generated/statsmodels.graphics.tsaplots.plot_acf.html)
```
x = plot_acf(df[target], lags=40)
x.set_size_inches(15, 10, forward=True)
plt.show()
```
### Partial Autocorrelation Plot
A partial autocorrelation is a summary of the relationship between an observation in a time series with observations at prior time steps with the relationships of intervening observations removed.
The partial autocorrelation at lag k is the correlation that remains after removing the effect of any correlations due to the terms at shorter lags. By examining the spike at each lag we can determine whether it is significant. A significant spike extends beyond the significance limits, which indicates that the correlation for that lag is not zero.
[API](https://www.statsmodels.org/stable/generated/statsmodels.graphics.tsaplots.plot_pacf.html)
```
y = plot_pacf(df[target], lags=40)
y.set_size_inches(15, 10, forward=True)
plt.show()
```
### Data Splitting
Since we are using a univariate dataset, we can directly split our data into training and testing subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
size = int(len(df)*0.9)
df_train, df_test = df.iloc[:size], df.iloc[size:]
```
### Model
The ARIMAX model is an extended version of the ARIMA model that also includes other independent (predictor) variables. It is also referred to as the vector ARIMA or the dynamic regression model.
The ARIMAX model is similar to a multivariate regression model, but it takes advantage of the autocorrelation that may be present in the residuals of the regression to improve forecast accuracy.
The API used here is from the statsmodels library. Statsmodels does not have a dedicated API for ARIMAX but the model can be created via <Code>SARIMAX</Code> API by setting the parameter <Code>seasonal_order</Code> = (0,0,0,0) i.e., no seasonality
#### Model Tuning Parameters
1. endog: array_like
>The observed time-series process
2. exog: array_like, optional
>Array of exogenous regressors, shaped nobs x k.
3. order: iterable or iterable of iterables, optional
>The (p,d,q) order of the model for the number of AR parameters, differences, and MA parameters. d must be an integer indicating the integration order of the process, while p and q may either be integers indicating the AR and MA orders (so that all lags up to those orders are included) or else iterables giving specific AR and / or MA lags to include. Default is an AR(1) model: (1,0,0).
4. seasonal_order: iterable, optional
>The (P,D,Q,s) order of the seasonal component of the model for the AR parameters, differences, MA parameters, and periodicity. D must be an integer indicating the integration order of the process, while P and Q may either be integers indicating the AR and MA orders (so that all lags up to those orders are included) or else iterables giving specific AR and / or MA lags to include. s is an integer giving the periodicity (number of periods in season), often 4 for quarterly data or 12 for monthly data. Default is no seasonal effect.
5. trend: str{‘n’,’c’,’t’,’ct’} or iterable, optional
>Parameter controlling the deterministic trend polynomial $A(t)$. Can be specified as a string where 'c' indicates a constant (i.e. a degree zero component of the trend polynomial), 't' indicates a linear trend with time, and 'ct' is both. Can also be specified as an iterable defining the non-zero polynomial exponents to include, in increasing order. For example, [1,1,0,1] denotes $a + bt + ct^3$. Default is to not include a trend component.
6. measurement_error: bool, optional
>Whether or not to assume the endogenous observations endog were measured with error. Default is False.
7. time_varying_regression: bool, optional
>Used when exogenous variables, exog, are provided, to select whether or not coefficients on the exogenous regressors are allowed to vary over time. Default is False.
8. mle_regression: bool, optional
>Whether or not to estimate the regression coefficients for the exogenous variables as part of maximum likelihood estimation or through the Kalman filter (i.e. recursive least squares). If time_varying_regression is True, this must be set to False. Default is True.
Refer to the official documentation at [statsmodels](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html) for more parameters and information
```
model=SARIMAX(df[target],order=(1, 0, 0),seasonal_order=(0,0,0,0))
result=model.fit()
```
### Model Summary
After fitting the ARIMAX model, we can take a look at a brief summary of it by using the **summary()** function. The following aspects are included in the model summary:
1. Basic Model Details: The first column of our summary table contains the basic details regarding our model such as:
a. Name of dependent variable
b. Model used along with parameters
c. Date and time of model deployment
d. Time Series sample used to train the model
2. Probabilistic Statistical Measures: The second column gives the values of the probabilistic measures obtained by our model:
a. Number of observations
b. Log-likelihood, which comes from Maximum Likelihood Estimation, a technique for finding or optimizing the
parameters of a model in response to a training dataset.
c. Standard Deviation of the innovations
d. Akaike Information Criterion (AIC), which is derived from frequentist probability.
e. Bayesian Information Criterion (BIC), which is derived from Bayesian probability.
f. Hannan-Quinn Information Criterion (HQIC), which is an alternative to AIC and is derived using the log-likelihood and
the number of observations.
3. Statistical Measures and Roots: The summary table also consists of certain other statistical measures such as z-value, standard error as well as the information on the characteristic roots of the model.
```
result.summary()
```
#### Simple Forecasting
```
df_train.tail()
```
### Predictions
By specifying the start and end time for our predictions, we can easily predict the future points in our time series with the help of our model.
```
start_date = df.index[size]
end_date = df.index[-1]
df_pred = result.predict(start = start_date, end = end_date)
df_pred.head()
```
## Model Accuracy
We will use the three most popular metrics for model evaluation: Mean absolute error (MAE), Mean squared error (MSE), and Root mean squared error (RMSE).
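For reference, with actual values $y_i$, predictions $\hat{y}_i$, and $n$ test points, these metrics are defined as
$$
\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad
\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad
\text{RMSE} = \sqrt{\text{MSE}}
$$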
```
test = df_test[target]
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(test,df_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(test,df_pred)))
print("Root Mean Squared Error {:.2f}".format(np.sqrt(mean_squared_error(test,df_pred))))
```
## Predictions Plot
First we plot the predicted values returned by our model over the test period.
Then we plot the actual test data to compare against our predictions.
```
plt.figure(figsize=(18,5))
plt.plot(df_pred[start_date:end_date], color = "red")
plt.plot(df_test, color = "blue")
plt.title("Predictions vs Actual", size = 24)
plt.legend(["Predicted", "Actual"], fontsize="x-large")
plt.show()
```
#### Creator: Viraj Jayant, Github: [Profile](https://github.com/Viraj-Jayant)
# Plotting with matplotlib
### Setup
```
%matplotlib inline
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', 10)
pd.set_option('display.max_rows', 10)
```
### Getting the pop2019 DataFrame
```
csv ='../csvs/nc-est2019-agesex-res.csv'
pops = pd.read_csv(csv, usecols=['SEX', 'AGE', 'POPESTIMATE2019'])
def fix_sex(sex):
if sex == 0:
return 'T'
elif sex == 1:
return 'M'
else: # 2
return 'F'
pops.SEX = pops.SEX.apply(fix_sex)
pops = pops.pivot(index='AGE', columns='SEX', values='POPESTIMATE2019')
pops
pops.plot();
```
### Create a Line Plot
```
# Create the plot.
plt_pop = pops.plot(
title = "Population by Age: 2019",
style=['b--', 'm^', 'k-'],
figsize=(12, 6),
lw=2
)
# Include gridlines.
plt_pop.grid(True)
# Set the x and y labels.
plt_pop.set_xlabel('Age')
plt_pop.set_ylabel('Population')
# Create the legend.
plt_pop.legend(['Female', 'Male', 'Total'], loc="lower left")  # pivoted columns are in F, M, T order
# Set x and y ticks.
plt_pop.set_xticks(np.arange(0, 101, 10))
yticks = np.arange(500000, 5000001, 500000)
ytick_labels = pd.Series(yticks).apply(lambda y: "{:,}".format(y))
plt_pop.set_yticks(yticks)
plt_pop.set_yticklabels(ytick_labels);
```
### Create a Bar Plot
```
csv ='../csvs/mantle.csv'
mantle = pd.read_csv(csv, index_col='Year',
usecols=['Year', '2B', '3B', 'HR'])
mantle
# Create the plot.
plt_mantle = mantle.plot(
kind='bar',
title = 'Mickey Mantle: Doubles, Triples, and Home Runs',
figsize=(12, 6),
width=.8,
fontsize=16
)
# Include gridlines.
plt_mantle.grid(True)
# Set the x and y labels.
plt_mantle.set_ylabel('Number', fontsize=20)
plt_mantle.set_xlabel('Year', fontsize=20)
# Hatch the bars.
bars = plt_mantle.patches
for i in np.arange(0, 18):
bars[i].set_hatch('+')
for i in np.arange(18, 36):
bars[i].set_hatch('o')
for i in np.arange(36, 54):
bars[i].set_hatch('/')
# Create the legend.
plt_mantle.legend(['Doubles', 'Triples', 'Home Runs'],
loc="upper right", fontsize='xx-large');
plt_mantle = mantle.plot(kind='bar',
title = 'Mickey Mantle: Doubles, Triples, and Home Runs',
figsize=(12, 6),
width=.8,
fontsize=16,
stacked=True)
plt_mantle.set_ylabel('Number', fontsize=20)
plt_mantle.set_xlabel('Year', fontsize=20)
plt_mantle.grid(True)
bars = plt_mantle.patches
for i in np.arange(0, 18):
bars[i].set_hatch('-')
for i in np.arange(18, 36):
bars[i].set_hatch('o')
for i in np.arange(36, 54):
bars[i].set_hatch('/')
plt_mantle.legend(['Doubles','Triples','Home Runs'],
loc="upper right", fontsize='xx-large');
```
# Section 2.1 `xarray`, `az.InferenceData`, and NetCDF for Markov Chain Monte Carlo
_How do we generate, store, and save Markov chain Monte Carlo results_
```
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import arviz as az
import pystan
import xarray as xr
from IPython.display import Video
np.random.seed(0)
plt.style.use('arviz-white')
```
## Learning Objectives
* Understand Markov chain Monte Carlo fundamentals
* Recognize the meaning of sample, draws, and chains in MCMC context
* Understand relationship between Xarray, az.InferenceData, and NetCDF
* Gain proficiency with Xarray, NetCDF, and az.InferenceData objects
## Markov Chain Monte Carlo
**Pop quiz**: Why do we use Markov chain Monte Carlo in Bayesian inference?
**Highlight for answer:** C<span style="color:white">alculating the posterior distribution is hard</span>!
**Example:** If a flight has cancellation rate $r$, alternate tickets cost you $c$, and these distributions are modelled by $p(r, c)$, then expected cost of insuring a flight is
$$
\text{risk} = \int_{r=0}^{1}\int_{c=0}^{\infty} r\cdot c~dp(r, c)
$$
This can be hard to calculate for any number of reasons! If, instead, we have samples
$$
\{r_j, c_j\}_{j=1}^N \sim p(r, c)
$$
then
$$
\text{risk} \approx \frac{1}{N}\sum_{j=1}^N r_j \cdot c_j
$$
In python code, this would just be
```
risk = np.dot(r, c) / N
```
## Markov Chain Monte Carlo algorithm (greatly simplified)
Step 1: Start at a random spot
Step 2: Propose a new spot, possibly based on the previous spot
Step 3: Accept or reject this proposal based on some mathematical book keeping
Step 4: If accepted, move to proposed spot, if rejected, stay where you are
Step 5: Write down where you're standing
Step 6: Go back to step 2
The accepted proposals are called draws (or samples).
When animated this algorithm looks like this:
```
Video("../../img/medium_steps.mp4")
```
Steps 2 and 4 are where most MCMC variants differentiate themselves. Algorithms like Hamiltonian Monte Carlo and Sequential Monte Carlo are better at picking the next step for certain tasks. Richard McElreath has a great visual explainer [on his blog](http://elevanth.org/blog/2017/11/28/build-a-better-markov-chain/).
* **Chain**: a Markov chain
* **Sample/Draw**: a single element of that chain
Regardless of algorithm in MCMC we end up with the same thing, a chain of accepted proposals with a fixed size. There is a rich literature to show that these algorithms produce samples that are eventually distributed according to the distribution we care about.
## Markov chain Monte Carlo with Metropolis-Hastings
Below is a working Metropolis-Hastings sampler, taken from [Thomas Wiecki's blog](https://twiecki.io/blog/2015/11/10/mcmc-sampling/). For the purposes of this tutorial focus more on the return value than the algorithm details.
It is important to note that for simplicity's sake we have also hard coded the likelihood and prior in the sampler below. In mathematical notation our model looks like this. We add 20 to the estimate of mu to make it easier to distinguish the distribution of **parameters** from the distribution of **observed data**
$$
\mu \sim \mathcal{N}(0, 1) \\
y \sim \mathcal{N}(\mu+20, 1)
$$
```
def mh_sampler(data, samples=4, mu_init=.5):
mu_current = mu_init
posterior = []
prior_logpdf = stats.norm(0, 1).logpdf
for i in range(samples):
# suggest new position
mu_proposal = stats.norm(mu_current, 0.5).rvs()
        # Compute the log-likelihood by summing log-probabilities over the data points
likelihood_current = stats.norm(mu_current + 20, 1).logpdf(data).sum()
likelihood_proposal = stats.norm(mu_proposal + 20, 1).logpdf(data).sum()
# Compute prior probability of current and proposed mu
prior_current = prior_logpdf(mu_current)
prior_proposal = prior_logpdf(mu_proposal)
# log(p(x|θ) p(θ)) = log(p(x|θ)) + log(p(θ))
p_current = likelihood_current + prior_current
p_proposal = likelihood_proposal + prior_proposal
# Accept proposal?
p_accept = np.exp(p_proposal - p_current)
accept = np.random.rand() < p_accept
if accept:
# Update position
mu_current = mu_proposal
else:
# don't move
pass
posterior.append(mu_current)
return np.array(posterior)
```
## Setup
Before using the sampler let's generate some data to test our Metropolis-Hastings implementation. In the code block below we draw normally distributed data (mean 30, standard deviation 1) for the sampler.
```
data = stats.norm.rvs(loc=30, scale=1, size=1000).flatten()
```
We'll also plot our samples to get a sense of what the distribution of data looks like. Note how the histogram centers around 30. This should intuitively make sense as we specified a mean of 30 when generating the random values.
```
fig, ax = plt.subplots()
ax.hist(data)
fig.suptitle("Histogram of observed data");
```
As humans we can intuit that a *data mean* of **30** combined with the model offset of **20** implies a parameter mean for *mu* of **10**. We want to see if our inference algorithm can recover this parameter.
## Single Variable Single Chain Inference Run
The simplest MCMC run we can perform is with a single variable and a single chain. We'll do so by putting our sampler function and data to use.
```
samples = 200
chain = mh_sampler(data=data, samples=samples)
chain[:100]
```
And just like that we've performed an inference run! We can generate a traceplot
```
fig, ax = plt.subplots(figsize=(10, 7))
x = np.arange(samples)
ax.plot(x, chain);
```
In terms of data structures, for a **single** variable **single** chain inference run, an array suffices for storing samples.
## Single Variable Multiple Chain Inference Run
As Bayesian modelers, life would be relatively easy if a single chain worked well every time, but unfortunately this is not the case. To understand why, look at the above inference run. The sampler started away from the true value, and it took 50 or so steps before it honed in on the "correct" value of 10.
MCMC algorithms are sensitive to their starting points and in finite runs it's **not** guaranteed that the Markov Chain will approach the true underlying distribution. A common method to get around this is to sample from many chains in parallel and see if we get to the same place. We will discuss this further when we get to single model diagnostics.
```
chain_0 = mh_sampler(data=data, samples=samples)
chain_1 = mh_sampler(data=data, samples=samples, mu_init=13)
data_df = pd.DataFrame({"x_0":chain_0, "x_1":chain_1})
fig, ax = plt.subplots()
x = np.arange(samples)
ax.plot(x, data_df["x_0"], c="g")
ax.plot(x, data_df["x_1"])
```
With two chains converging to approximately a single value we can be more confident that the sampler reached the true underlying parameter. We can also store the results in a 2D data structure, such as a pandas DataFrame in memory, or CSV files and SQL tables for persistent on-disk storage.
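As a small aside, a minimal sketch of that persistence (the filename is arbitrary):
```
# Each column is one chain, each row is one draw.
data_df.to_csv("mh_chains.csv", index_label="draw")

# Reload later for further analysis.
pd.read_csv("mh_chains.csv", index_col="draw").head()
```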
## Multiple Variable Multiple Chain Inference Runs
As Bayesian modelers, life would be relatively easy if all models only had one variable (univariate models in math speak). Unfortunately many types of models require 2 or more variables. For example in a linear regression we are interested in estimating both <b>m</b> and <b>b</b>:
$$ y \sim mx+b$$
With at least 3 things to track (chains, draws, and variables), 2D data structures become limiting. This problem exists in many domains and is the focus of the *xarray* project.
A motivating example comes from climate sciences. In this image from the xarray documentation the researcher might want to measure the temperature and humidity, across a 2D region at a point in time. Or they may want to plot the temperature over a time interval. xarray simplifies the data handling in cases like these.

### Xarray
In ArviZ an xarray Dataset object looks like the one below, where the data variables are the inference-run variables, and the coordinates are, at a minimum, chain and draw.
```
posterior = xr.Dataset(
{"mu": (["chain", "draw"], [[11,12,13],[22,23,24]]), "sd": (["chain", "draw"], [[33,34,35],[44,45,46]])},
coords={"draw": [1,2,3], "chain": [0,1]},
)
posterior
```
## Multiple Variable Multiple Chain Inference runs and associated datasets
As Bayesian modelers, life would be relatively easy if we were only concerned with posterior distributions. Looking back at the full end-to-end workflow, recall that there are other datasets, such as prior predictive samples and posterior predictive samples, among others. To aid the ArviZ user we present `az.InferenceData`.
### az.InferenceData
az.InferenceData serves as a data container for the various xarray datasets that are generated from an end-to-end Bayesian workflow. Consider our earlier simple model, and this time let's use `stan` to run a full analysis with multiple chains, multiple runs, and generate all sorts of datasets common in Bayesian analysis.
### Calculating prior
```
stan_code_prior = """
data {
int<lower=1> N;
}
parameters {
real mu; // Estimated parameter
}
model {
mu ~ normal(0, 1);
}
generated quantities {
real y_hat[N]; // prior prediction
for (n in 1:N) {
y_hat[n] = normal_rng(mu+20, 1);
}
}
"""
stan_prior = pystan.StanModel(model_code=stan_code_prior)
stan_data_prior = {"N" : len(data)}
stan_fit_prior = stan_prior.sampling(data=stan_data_prior)
stan_code_posterior = """
data {
int N;
real y[N]; // Observed data
}
parameters {
real mu; // Estimated parameter
}
model {
mu ~ normal(0, 1);
y ~ normal(mu+20, 1);
}
generated quantities {
real y_hat[N]; // posterior prediction
real log_lik[N]; // log_likelihood
for (n in 1:N) {
// Stan normal functions https://mc-stan.org/docs/2_19/functions-reference/normal-distribution.html
        y_hat[n] = normal_rng(mu+20, 1);
        log_lik[n] = normal_lpdf(y[n] | mu+20, 1);
}
}
"""
stan_model_posterior = pystan.StanModel(model_code=stan_code_posterior)
stan_data_posterior = dict(
y=data,
N=len(data)
)
stan_fit_posterior = stan_model_posterior.sampling(data=stan_data_posterior)
stan_inference_data = az.from_pystan(posterior=stan_fit_posterior,
observed_data="y",
# Other Bayesian Datasets that we have not discussed yet!
posterior_predictive="y_hat",
prior=stan_fit_prior,
prior_predictive="y_hat",
log_likelihood="log_lik",
)
```
### NetCDF
Calculating the various datasets is usually not trivial. Network Common Data Form (NetCDF) is an open standard for storing multidimensional datasets, and `xarray` is a library for doing high performance analysis on those datasets. NetCDF even comes with "group" support, making it easy to serialize az.InferenceData straight to disk. ArviZ uses NetCDF to save the results to disk, allowing reproducible analyses, multiple experiments, and sharing with others.
ArviZ even ships with sample datasets, serialized in NetCDF
https://github.com/arviz-devs/arviz/tree/master/arviz/data/_datasets
In short: like SQL is to Pandas DataFrame, NetCDF is to az.InferenceData.
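As a minimal sketch of that round trip (the filename here is arbitrary, and `stan_inference_data` is the object built above): `InferenceData.to_netcdf` writes every group to a single NetCDF file, and `az.from_netcdf` reads it back.
```
# Write all groups (posterior, prior, predictive, observed data, ...) to disk.
stan_inference_data.to_netcdf("simple_model_inference.nc")

# Reload later, or on another machine, for plotting and diagnostics.
reloaded_idata = az.from_netcdf("simple_model_inference.nc")
reloaded_idata
```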
```
data = az.load_arviz_data("centered_eight")
data
```
## The benefits of az.InferenceData
One of the goals of the ArviZ developers is to ensure that Bayesian practitioners can share and reproduce analyses regardless of PPL or language; az.InferenceData is the implementation of this idea.
In summary az.InferenceData
* provides a consistent format for Bayesian datasets.
* makes it easy to save results
* makes use of ArviZ plotting and statistics functions simpler
* stores metadata for ease of reproducibility
## InferenceData in practice
In practice it's rare to generate an xarray Dataset manually for use in ArviZ. Instead ArviZ provides methods for instantiating InferenceData from plain Python objects and from various PPLs, as well as methods to save and load NetCDF files.
For further references consider the ArviZ cookbook, and data structure tutorial.
https://arviz-devs.github.io/arviz/notebooks/InferenceDataCookbook.html
https://arviz-devs.github.io/arviz/notebooks/XarrayforArviZ.html
## Examples
See below for some useful methods of interacting with az.InferenceData, Xarray, and NetCDF
For Xarray methods we only demo a subset of the available API. For a much more comprehensive explanation view the indexing and selection page from the xarray docs
http://xarray.pydata.org/en/stable/indexing.html
### Creating InferenceData objects
We can create an InferenceData object from our "home built" chains, not just from the output of supported PPLs
```
data_dict = {"mu": [chain_0, chain_1]}
home_built_data = az.from_dict(data_dict)
home_built_data
# Load NetCDF from disk into memory
## Replace with NetCDF that's "visible"
data = az.load_arviz_data("centered_eight")
# Reference posterior directly
posterior = data.posterior
posterior
# Select specific variables
posterior[["mu", "tau"]]
# Select specific chains and draws
posterior.sel(chain=[0,2], draw=slice(0,5))
# Get first 10 samples of mu from chain 0
posterior["mu"].sel(chain=0, draw=slice(0,10)).values
```
## Extra Credit
* xarray supports numpy "ufuncs" (https://docs.scipy.org/doc/numpy/reference/ufuncs.html). ArviZ uses these under the hood for efficient calculations.
```
from bokeh.io import output_notebook, show, reset_output
import numpy as np
output_notebook()
from IPython.display import IFrame
IFrame('https://demo.bokehplots.com/apps/sliders', width=900, height=500)
```
### Basic scatterplot
```
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
# create a new plot with default tools, using figure
p = figure(plot_width=400, plot_height=400)
# add a circle renderer with a size, color, and alpha
p.circle([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size=15, line_color="navy", fill_color="orange", fill_alpha=0.5)
show(p) # show the results
```
### Interactive visualization using sliders
```
from bokeh.layouts import row, column
from bokeh.models import CustomJS, ColumnDataSource, Slider
import matplotlib.pyplot as plt
x = [x*0.005 for x in range(0, 201)]
output_notebook()
source = ColumnDataSource(data=dict(x=x, y=x))
plot = figure(plot_width=400, plot_height=400)
plot.scatter('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=0.1, end=6, value=1, step=.1, title="power")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
y[i] = Math.pow(x[i], f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
show(row(slider, plot))
#scatterplot using sliders
x = [x*0.005 for x in range(0, 21)]
output_notebook()
source = ColumnDataSource(data=dict(x=x, y=x))
plot = figure(plot_width=400, plot_height=400)
plot.scatter('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=0.1, end=6, value=1, step=.1, title="power")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
y[i] = Math.pow(x[i], f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
print(source.data['y'])
show(row(slider, plot))
#Making equivalent of diffusion
Arr = np.random.rand(2,100)
source = ColumnDataSource(data=dict(x=Arr[0,], y=Arr[1,]))
plot = figure(plot_width=400, plot_height=400)
plot.scatter('x', 'y', source=source, line_width=3, line_alpha=0.6)
slider = Slider(start=1, end=8, value=1, step=1, title="Diffusion_steps")
slider2 = Slider(start=1, end=8, value=1, step=1, title="Anti_Diffusion_steps")
update_curve = CustomJS(args=dict(source=source, slider=slider), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
x[i] = Math.pow(x[i], f)
y[i] = Math.pow(y[i], f)
}
source.change.emit();
""")
update_curve2 = CustomJS(args=dict(source=source, slider=slider2), code="""
var data = source.get('data');
var f = slider.value;
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
x[i] = Math.pow(x[i], 1/f)
y[i] = Math.pow(y[i], 1/f)
}
source.change.emit();
""")
slider.js_on_change('value', update_curve)
slider2.js_on_change('value', update_curve2)
show(row(column(slider,slider2), plot))
from bokeh.models import TapTool, CustomJS, ColumnDataSource
callback = CustomJS(code="alert('hello world')")
tap = TapTool(callback=callback)
p = figure(plot_width=600, plot_height=300, tools=[tap])
p.circle(x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], size=20)
show(p)
from bokeh.models import ColumnDataSource, OpenURL, TapTool
from bokeh.plotting import figure, output_file, show
output_file("openurl.html")
p = figure(plot_width=400, plot_height=400,
tools="tap", title="Click the Dots")
source = ColumnDataSource(data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
color=["navy", "orange", "olive", "firebrick", "gold"]
))
p.circle('x', 'y', color='color', size=20, source=source)
url = "http://www.colors.commutercreative.com/@color/"
taptool = p.select(type=TapTool)
taptool.callback = OpenURL(url=url)
show(p)
from bokeh.models import ColumnDataSource, TapTool, DataRange1d, Plot, LinearAxis, Grid, HoverTool
from bokeh.plotting import figure, output_file, show
from bokeh.models.glyphs import HBar
p = figure(plot_width=400, plot_height=400,
tools="tap", title="Click the Dots")
source = ColumnDataSource(data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
color=["navy", "orange", "olive", "firebrick", "gold"]
))
p.circle('x', 'y', color='color', size=20, source=source)
source2 = ColumnDataSource(data=dict(
x=[1,2],
y=[1,2]))
callback = CustomJS(args=dict(source2=source2), code="""
var data = source2.get('data');
var geom = cb_data['geometries'];
data['x'] = [geom[0].x+1,geom[0].x-1]
data['y'] = [geom[0].y+1,geom[0].y-1]
source2.trigger('change');
""")
def callback2(source2 = source2):
data = source2.get('data')
geom = cb_obj.get('geometries')
data['x'] = [geom['x']+1,geom['x']-1]
data['y'] = [geom['y']+1,geom['y']-1]
source2.trigger('change')
taptool = p.select(type=TapTool)
taptool.callback = CustomJS.from_py_func(callback2);
xdr = DataRange1d()
ydr = DataRange1d()
p2 = figure(plot_width=400, plot_height=400)
p2.vbar(x=source2.data['x'], width=0.5, bottom=0,
top=source2.data['y'], color="firebrick")
#glyph = HBar(source2.data['x'], source2.data['y'], left=0, height=0.5, fill_color="#b3de69")
#p2.add_glyph(source2, glyph)
#p2.add_glyph(source, glyph)
show(row(p,p2))
# update()  # NOTE: no `update` function is defined in this notebook; the call is commented out to avoid a NameError
source = ColumnDataSource(data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
color=["navy", "orange", "olive", "firebrick", "gold"]
))
source2.data['x']
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.datasets import load_digits, load_iris
from sklearn.model_selection import train_test_split
from pca import pca as MyPCA
```
# Load Digit Dataset
```
digits = load_digits()
def draw_digits(X, y):
fig = plt.figure(1, figsize=(8, 8))
plt.scatter(X[:, 0], X[:, 1],
c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('Spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar()
plt.show();
```
# sklearn PCA
```
pca = PCA(n_components=2, random_state=17).fit(digits.data)
data_pca = pca.transform(digits.data)
pca.explained_variance_ratio_, pca.explained_variance_, pca.singular_values_, pca.components_
data_pca
draw_digits(data_pca, digits.target)
```
# Our Implementation
```
pca1 = MyPCA(n_components=2, solver='svd')
pca1.fit(digits.data)
data_pca1 = pca1.transform(digits.data)
pca1.explained_variance_ratio_, pca1.explained_variance_, pca1.singular_values_, pca1.components_
data_pca1
draw_digits(data_pca1, digits.target)
```
### eig solver
```
pca_eig = MyPCA(n_components=2, solver='eig')
pca_eig.fit(digits.data)
data_eig = pca_eig.transform(digits.data)
pca_eig.explained_variance_ratio_, pca_eig.explained_variance_, pca_eig.singular_values_, pca_eig.components_
data_eig
draw_digits(data_eig, digits.target)
```
# Iris Dataset
Let's try to plot 3 components after PCA.<br>
https://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_iris.html#sphx-glr-auto-examples-decomposition-plot-pca-iris-py
```
from mpl_toolkits.mplot3d import Axes3D
def plot_components(X, y):
fig = plt.figure(1, figsize=(12, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
for name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:
ax.text3D(X[y == label, 0].mean(),
X[y == label, 1].mean() + 1.5,
X[y == label, 2].mean(), name,
horizontalalignment='center',
bbox=dict(alpha=.5, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
    y = np.choose(y, [1, 2, 0]).astype(float)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap=plt.cm.nipy_spectral,
edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
plt.show()
iris = load_iris()
X, y = iris.data, iris.target
```
# sklearn
```
pca_3d = PCA(n_components=3, random_state=17).fit(X)
X_3d = pca_3d.transform(X)
plot_components(X_3d, y)
```
# Ours: solver='svd'
```
pca_3d_svd = MyPCA(n_components=3)
pca_3d_svd.fit(X)
X_3d_svd = pca_3d_svd.transform(X)
plot_components(X_3d_svd, y)
```
# Ours: solver='eig' with fit_transform
```
pca_3d_eig = MyPCA(n_components=3, solver='eig')
X_3d_eig = pca_3d_eig.fit_transform(X)
plot_components(X_3d_eig, y)
```
# Pandas Exercise
```
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
import pandas as pd
def df_info(df: pd.DataFrame):
    # Return a styled preview of the first 20 rows.
    return df.head(n=20).style
```
## Cars Auction Dataset
| Feature | Type | Description |
|--------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Price | Integer | The sale price of the vehicle in the ad |
| Years | Integer | The vehicle registration year |
| Brand | String | The brand of car |
| Model | String | model of the vehicle |
| Color | String | Color of the vehicle |
| State/City | String | The location in which the car is being available for purchase |
| Mileage | Float | miles traveled by vehicle |
| Title Status | String | Binary status: clean title vehicle or salvage insurance |
| Condition | String | Time remaining in the listing (e.g. "10 days left") |
```
df = pd.read_csv("../data/USA_cars_datasets.csv")
print(df.columns)
df.head()
```
## Exercise 1
- Get the counts for the us states
## Exercise 2
- Get all cars from the state of New Mexico
## Exercise 3
- Compute the mean mileage of all cars from New York
## Exercise 4
- Remove all entries where the year is below 2019
## Exercise 5
- Replace all color values by the first character of the color name
E.g.: 'blue' => 'b'
## inference in simple model using synthetic data
population size 10^6, inference window 2x4 = 8 days, to be compared with the analogous ``-win5`` notebook
```
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
import os
import pickle
import pprint
import time
import pyross
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#from matplotlib import rc; rc('text', usetex=True)
import synth_fns
```
(cell 3 was removed to hide local file info)
### main settings
```
## for dataFiles : needs a fresh value in every notebook
fileRoot = 'dataSynthInfTest-pop1e6-win2'
## total population
popN = 1e6
## tau-leaping param, take this negative to force gillespie
## or set a small value for high-accuracy tau-leap (eg 1e-4 or 1e-5)
leapEps = -1
## do we use small tolerances for the likelihood computations? (use False for debug etc)
isHighAccuracy = True
# absolute tolerance for logp for MAP
inf_atol = 1.0
## prior mean of beta, divided by true value (set to 1.0 for the simplest case)
betaPriorOffset = 0.8
betaPriorLogNorm = False
## mcmc
mcSamples = 5000
nProcMCMC = 2 # None ## take None to use default but large numbers are not efficient in this example
trajSeed = 18
infSeed = 21
mcSeed = infSeed+2
loadTraj = False
saveMC = True
```
### model
```
model_dict = synth_fns.get_model(popN)
model_spec = model_dict['mod']
contactMatrix = model_dict['CM']
parameters_true = model_dict['params']
cohortsM = model_dict['cohortsM']
Ni = model_dict['cohortsPop']
```
#### more settings
```
## total trajectory time (bare units)
Tf_bare = 20
## total inf time
Tf_inf_bare = 2
## inference period starts when the total deaths reach this amount (as a fraction)
fracDeaths = 2e-3 # int(N*200/1e5)
## hack to get higher-frequency data
## how many data points per "timestep" (in original units)
fineData = 4
## this assumes that all parameters are rates !!
for key in parameters_true:
#print(key,parameters_true[key])
parameters_true[key] /= fineData
Tf = Tf_bare * fineData;
Nf = Tf+1
Tf_inference = Tf_inf_bare * fineData
Nf_inference = Tf_inference+1
```
### plotting helper functions
```
def plotTraj(M,data_array,Nf_start,Tf_inference,fineData):
fig = plt.figure(num=None, figsize=(6, 4), dpi=80, facecolor='w', edgecolor='k')
#plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=12)
t = np.linspace(0, Tf/fineData, Nf)
# plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)
plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)
plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)
plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)
#plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)
plt.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
plt.legend()
plt.show()
fig,axs = plt.subplots(1,2, figsize=(12, 5), dpi=80, facecolor='w', edgecolor='k')
ax = axs[0]
ax.plot(t[1:],np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)),'o-',label='death increments', lw=1)
ax.legend(loc='upper right') ; # plt.show()
ax = axs[1]
ax.plot(t,np.sum(data_array[:, 3*M:4*M], axis=1),'o-',label='deaths',ms=3)
ax.legend() ;
plt.show()
def plotMAP(res,data_array,M,N,estimator,Nf_start,Tf_inference,fineData):
print('**beta(bare units)',res['params_dict']['beta']*fineData)
print('**logLik',res['log_likelihood'],'true was',logpTrue)
print('\n')
print(res)
fig,axs = plt.subplots(1,3, figsize=(15, 7), dpi=80, facecolor='w', edgecolor='k')
plt.subplots_adjust(wspace=0.3)
#plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=12)
t = np.linspace(0, Tf/fineData, Nf)
ax = axs[0]
#plt.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='S', lw=4)
ax.plot(t, np.sum(data_array[:, M:2*M], axis=1), 'o', label='Exposed', lw=2)
ax.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), 'o', label='Infected', lw=2)
ax.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), 'o', label='Deaths', lw=2)
#plt.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=2)
tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData
xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)
#plt.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)
ax.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', color='C0',label='E-MAP', lw=2, ms=3)
ax.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', color='C1',label='I-MAP', lw=2, ms=3)
ax.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', color='C2',label='D-MAP', lw=2, ms=3)
#plt.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-o', label='R-MAP', lw=2)
ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
ax.legend()
ax = axs[1]
ax.plot(t[1:], np.diff(np.sum(data_array[:, 3*M:4*M], axis=1)), '-o', label='death incs', lw=2)
ax.plot(tt[1:], np.diff(np.sum(xm[:, 3*M:4*M], axis=1)), '-x', label='MAP', lw=2, ms=3)
ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
ax.legend()
ax = axs[2]
ax.plot(t, np.sum(data_array[:, :M], axis=1), '-o', label='Sus', lw=1.5, ms=3)
#plt.plot(t, np.sum(data_array[:, M:2*M], axis=1), '-o', label='Exposed', lw=2)
#plt.plot(t, np.sum(data_array[:, 2*M:3*M], axis=1), '-o', label='Infected', lw=2)
#plt.plot(t, np.sum(data_array[:, 3*M:4*M], axis=1), '-o', label='Deaths', lw=2)
ax.plot(t, N-np.sum(data_array[:, 0:4*M], axis=1), '-o', label='Rec', lw=1.5, ms=3)
#infResult = res
tt = np.linspace(Nf_start, Tf, Nf-Nf_start,)/fineData
xm = estimator.integrate(res['x0'], Nf_start, Tf, Nf-Nf_start, dense_output=False)
ax.plot(tt, np.sum(xm[:, :M], axis=1), '-x', label='S-MAP', lw=2, ms=3)
#plt.plot(tt, np.sum(xm[:, M:2*M], axis=1), '-x', label='E-MAP', lw=2, ms=3)
#plt.plot(tt, np.sum(xm[:, 2*M:3*M], axis=1), '-x', label='I-MAP', lw=2, ms=3)
#plt.plot(tt, np.sum(xm[:, 3*M:4*M], axis=1), '-x', label='D-MAP', lw=2, ms=3)
ax.plot(tt, N-np.sum(xm[:, :4*M], axis=1), '-x', label='R-MAP', lw=1.5, ms=3)
ax.axvspan(Nf_start/fineData, (Nf_start+Tf_inference)/fineData,alpha=0.3, color='dodgerblue')
ax.legend()
plt.show()
def plotMCtrace(selected_dims, sampler, numTrace=None):
# Plot the trace for these dimensions:
plot_dim = len(selected_dims)
plt.rcParams.update({'font.size': 14})
fig, axes = plt.subplots(plot_dim, figsize=(12, plot_dim), sharex=True)
samples = sampler.get_chain()
if numTrace == None : numTrace = np.shape(samples)[1] ## corrected index
for ii,dd in enumerate(selected_dims):
ax = axes[ii]
ax.plot(samples[:, :numTrace , dd], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
axes[-1].set_xlabel("step number");
plt.show(fig)
plt.close()
def plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,
infResult,parameters_true,trueInit) :
## used for prior pdfs
(likFun,priFun,dimFlat) = pyross.evidence.latent_get_parameters(estimator,
obsData, fltrDeath, Tf_inference,
param_priors, init_priors,
contactMatrix,
#intervention_fun=interventionFn,
tangent=False,
)
xVals = np.linspace(parameters_true['beta']*0.5,parameters_true['beta']*1.5,100)
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
plt.hist(betas,density=True,color='lightblue',label='posterior')
yVal=2
plt.plot([infResult['params_dict']['beta']],[2*yVal],'bs',label='MAP',ms=10)
plt.plot([parameters_true['beta']],[yVal],'ro',label='true',ms=10)
## this is a bit complicated, it just finds the prior for beta from the infResult
var='beta'
jj = infResult['param_keys'].index(var)
xInd = infResult['param_guess_range'][jj]
#print(jj,xInd)
pVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[xInd] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[xInd] )
plt.plot(xVals,pVals,color='darkgreen',label='prior')
plt.xlabel(var)
plt.ylabel('pdf')
plt.legend()
labs=['init S','init E','init I']
nPanel=3
fig,axs = plt.subplots(1,nPanel,figsize=(14,4))
for ii in range(nPanel) :
ax = axs[ii]
yVal=1.0/popN
xs = [ rr['x0'][ii] for rr in result_mcmc ]
ax.hist(xs,color='lightblue',density=True)
ax.plot([infResult['x0'][ii]],yVal,'bs',label='true')
ax.plot([trueInit[ii]],yVal,'ro',label='true')
## this is a bit complicated, it just finds the prior for beta from the infResult
## axis ranges
xMin = np.min(xs)*0.8
xMax = np.max(xs)*1.2
xVals = np.linspace(xMin,xMax,100)
## this ID is a negative number because the init params are the end of the 'flat' param array
paramID = ii-nPanel
pVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[paramID] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[paramID] )
ax.plot(xVals,pVals,color='darkgreen',label='prior')
#plt.xlabel(var)
ax.set_xlabel(labs[ii])
ax.set_ylabel('pdf')
ax.yaxis.set_ticklabels([])
plt.show()
```
### synthetic data
```
if loadTraj :
ipFile = fileRoot+'-stochTraj.npy'
syntheticData = np.load(ipFile)
print('loading trajectory from',ipFile)
else :
ticTime = time.time()
syntheticData = synth_fns.make_stochastic_traj(Tf,Nf,trajSeed,model_dict,leapEps)
tocTime = time.time() - ticTime
print('traj generation time',tocTime,'secs')
np.save(fileRoot+'-stochTraj.npy',syntheticData)
Nf_start = synth_fns.get_start_time(syntheticData, popN, fracDeaths)
print('inf starts at timePoint',Nf_start)
plotTraj(cohortsM,syntheticData,Nf_start,Tf_inference,fineData)
```
### basic inference (estimator) setup
(including computation of likelihood for the true parameters)
```
[estimator,fltrDeath,obsData,trueInit] = synth_fns.get_estimator(isHighAccuracy,model_dict,syntheticData, popN, Nf_start, Nf_inference,)
## compute log-likelihood of true params
logpTrue = -estimator.minus_logp_red(parameters_true, trueInit, obsData, fltrDeath, Tf_inference,
contactMatrix, tangent=False)
print('**logLikTrue',logpTrue,'\n')
print('death data\n',obsData,'length',np.size(obsData),Nf_inference)
```
### priors
```
[param_priors,init_priors] = synth_fns.get_priors(model_dict,betaPriorOffset,betaPriorLogNorm,fracDeaths,estimator)
print('Prior Params:',param_priors)
print('Prior Inits:')
pprint.pprint(init_priors)
print('trueBeta',parameters_true['beta'])
print('trueInit',trueInit)
```
### inference (MAP)
```
infResult = synth_fns.do_inf(estimator, obsData, fltrDeath, syntheticData,
popN, Tf_inference, infSeed, param_priors,init_priors, model_dict, inf_atol)
#pprint.pprint(infResult)
print('MAP likelihood',infResult['log_likelihood'],'true',logpTrue)
print('MAP beta',infResult['params_dict']['beta'],'true',parameters_true['beta'])
```
### plot MAP trajectory
```
plotMAP(infResult,syntheticData,cohortsM,popN,estimator,Nf_start,Tf_inference,fineData)
```
#### slice of likelihood
(note this is not the posterior, hence MAP is not exactly at the peak)
```
## range for beta (relative to MAP)
rangeParam = 0.1
[bVals,likVals] = synth_fns.sliceLikelihood(rangeParam,infResult,
estimator,obsData,fltrDeath,contactMatrix,Tf_inference)
#print('logLiks',likVals,logp)
plt.plot(bVals , likVals, 'o-')
plt.plot(infResult['params_dict']['beta'],infResult['log_likelihood'],'s',ms=6)
plt.show()
```
### MCMC
```
sampler = synth_fns.do_mcmc(mcSamples, nProcMCMC, estimator, Tf_inference, infResult,
obsData, fltrDeath, param_priors, init_priors,
model_dict,infSeed)
plotMCtrace([0,2,3], sampler)
result_mcmc = synth_fns.load_mcmc_result(estimator, obsData, fltrDeath, sampler, param_priors, init_priors, model_dict)
print('result shape',np.shape(result_mcmc))
print('last sample\n',result_mcmc[-1])
```
#### save the result
```
if saveMC :
opFile = fileRoot + "-mcmc.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([infResult,result_mcmc],f)
```
#### estimate MCMC autocorrelation
```
# these are the estimated autocorrelation times for the sampler
# (it likes runs ~50 times longer than this...)
pp = sampler.get_log_prob()
nSampleTot = np.shape(pp)[0]
#print('correl',sampler.get_autocorr_time(discard=int(nSampleTot/3)))
print('nSampleTot',nSampleTot)
```
#### plot posterior distributions
```
plotPosteriors(estimator,obsData, fltrDeath, Tf_inference,param_priors, init_priors,contactMatrix,
infResult,parameters_true,trueInit)
```
### analyse posterior for beta
```
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
postMeanBeta = np.mean(betas)
postStdBeta = np.std(betas)
postCIBeta = [ np.percentile(betas,2.5) , np.percentile(betas,97.5)]
print("beta: true {b:.5f} MAP {m:.5f}".format(b=parameters_true['beta'],m=infResult['params_dict']['beta']))
print("post: mean {m:.5f} std {s:.5f} CI95: {l:.5f} {u:.5f}".format(m=postMeanBeta,
s=postStdBeta,
l=postCIBeta[0],u=postCIBeta[1]))
```
### posterior correlations for initial conditions
```
sis = np.array( [ rr['x0'][0] for rr in result_mcmc ] )/popN
eis = np.array( [ rr['x0'][1] for rr in result_mcmc ] )/popN
iis = np.array( [ rr['x0'][2] for rr in result_mcmc ] )/popN
betas = [ rr['params_dict']['beta'] for rr in result_mcmc ]
fig,axs = plt.subplots(1,3,figsize=(15,4))
plt.subplots_adjust(wspace=0.35)
ax = axs[0]
ax.plot(eis,iis,'o',ms=2)
ax.set_xlabel('E0')
ax.set_ylabel('I0')
ax = axs[1]
ax.plot(1-eis-iis-sis,sis,'o',ms=2)
ax.set_ylabel('S0')
ax.set_xlabel('R0')
ax = axs[2]
ax.plot(1-eis-iis-sis,betas,'o',ms=2)
ax.set_ylabel('beta')
ax.set_xlabel('R0')
plt.show()
def forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference, estimator, obs, fltr, contactMatrix):
trajs = []
#x = (data_array[Nf_start:Nf_start+Nf_inference])
#obs=np.einsum('ij,kj->ki', fltr, x)
# this should pick up the right number of traj, equally spaced
totSamples = len(result_mcmc)
skip = int(totSamples/nsamples)
modulo = totSamples % skip
#print(modulo,skip)
for sample_res in result_mcmc[modulo::skip]:
endpoints = estimator.sample_endpoints(obs, fltr, Tf_inference, sample_res, 1, contactMatrix=contactMatrix)
xm = estimator.integrate(endpoints[0], Nf_start+Tf_inference, Tf, Nf-Tf_inference-Nf_start, dense_output=False)
trajs.append(xm)
return trajs
def plot_forecast(allTraj, data_array, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, M,
estimator, obs, contactMatrix):
#x = (data_array[Tf_start:Tf_start+Nf_inference]).astype('float')
#obs=np.einsum('ij,kj->ki', fltr, x)
#samples = estimator.sample_endpoints(obs, fltr, Tf_inference, res, nsamples, contactMatrix=contactMatrix)
time_points = np.linspace(0, Tf, Nf)
fig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.rcParams.update({'font.size': 22})
#for x_start in samples:
for traj in allTraj:
#xm = estimator.integrate(x_start, Tf_start+Tf_inference, Tf, Nf-Tf_inference-Tf_start, dense_output=False)
# plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, M:2*M], axis=1), color='grey', alpha=0.1)
# plt.plot(time_points[Tf_inference+Tf_start:], np.sum(xm[:, 2*M:3*M], axis=1), color='grey', alpha=0.1)
incDeaths = np.diff( np.sum(traj[:, 3*M:4*M], axis=1) )
plt.plot(time_points[1+Tf_inference+Nf_start:], incDeaths, color='grey', alpha=0.2)
# plt.plot(time_points, np.sum(data_array[:, M:2*M], axis=1), label='True E')
# plt.plot(time_points, np.sum(data_array[:, 2*M:3*M], axis=1), label='True I')
incDeathsObs = np.diff( np.sum(data_array[:, 3*M:4*M], axis=1) )
plt.plot(time_points[1:],incDeathsObs, 'ko', label='True D')
plt.axvspan(Nf_start, Tf_inference+Nf_start,
label='Used for inference',
alpha=0.3, color='dodgerblue')
plt.xlim([0, Tf])
plt.legend()
plt.show()
nsamples = 40
foreTraj = forecast(result_mcmc, nsamples, Nf_start, Tf_inference, Nf_inference,
estimator, obsData, fltrDeath, contactMatrix)
print(len(foreTraj))
foreTraj = np.array( foreTraj )
np.save(fileRoot+'-foreTraj.npy',foreTraj)
plot_forecast(foreTraj, syntheticData, nsamples, Tf,Nf, Nf_start, Tf_inference, Nf_inference, cohortsM,
estimator, obsData, contactMatrix)
print(Nf_inference)
print(len(result_mcmc))
```
## Passing Messages to Processes
As with threads, a common use pattern for multiple processes is to divide a job up among several workers to run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results can be aggregated. A simple way to communicate between processes with multiprocessing is to use a Queue to pass messages back and forth. **Any object that can be serialized with pickle can pass through a Queue.**
```
import multiprocessing
class MyFancyClass:
def __init__(self, name):
self.name = name
def do_something(self):
proc_name = multiprocessing.current_process().name
print('Doing something fancy in {} for {}!'.format(
proc_name, self.name))
def worker(q):
obj = q.get()
obj.do_something()
if __name__ == '__main__':
queue = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=(queue,))
p.start()
queue.put(MyFancyClass('Fancy Dan'))
# Wait for the worker to finish
queue.close()
queue.join_thread()
p.join()
```
A more complex example shows how to manage several workers consuming data from a JoinableQueue and passing results back to the parent process. The poison pill technique is used to stop the workers. After setting up the real tasks, the main program adds one “stop” value per worker to the job queue. When a worker encounters the special value, it breaks out of its processing loop. The main process uses the task queue’s join() method to wait for all of the tasks to finish before processing the results.
```
import multiprocessing
import time
class Consumer(multiprocessing.Process):
def __init__(self, task_queue, result_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue
self.result_queue = result_queue
def run(self):
proc_name = self.name
while True:
next_task = self.task_queue.get()
if next_task is None:
# Poison pill means shutdown
print('{}: Exiting'.format(proc_name))
self.task_queue.task_done()
break
print('{}: {}'.format(proc_name, next_task))
answer = next_task()
self.task_queue.task_done()
self.result_queue.put(answer)
class Task:
def __init__(self, a, b):
self.a = a
self.b = b
def __call__(self):
time.sleep(0.1) # pretend to take time to do the work
return '{self.a} * {self.b} = {product}'.format(
self=self, product=self.a * self.b)
def __str__(self):
return '{self.a} * {self.b}'.format(self=self)
if __name__ == '__main__':
# Establish communication queues
tasks = multiprocessing.JoinableQueue()
results = multiprocessing.Queue()
# Start consumers
num_consumers = multiprocessing.cpu_count() * 2
print('Creating {} consumers'.format(num_consumers))
consumers = [
Consumer(tasks, results)
for i in range(num_consumers)
]
for w in consumers:
w.start()
# Enqueue jobs
num_jobs = 10
for i in range(num_jobs):
tasks.put(Task(i, i))
# Add a poison pill for each consumer
for i in range(num_consumers):
tasks.put(None)
# Wait for all of the tasks to finish
tasks.join()
# Start printing results
while num_jobs:
result = results.get()
print('Result:', result)
num_jobs -= 1
```
## Signaling between Processes
The Event class provides a simple way to communicate state information between processes. An event can be toggled between set and unset states. Users of the event object can wait for it to change from unset to set, using an optional timeout value.
```
import multiprocessing
import time
def wait_for_event(e):
"""Wait for the event to be set before doing anything"""
print('wait_for_event: starting')
e.wait()
print('wait_for_event: e.is_set()->', e.is_set())
def wait_for_event_timeout(e, t):
"""Wait t seconds and then timeout"""
print('wait_for_event_timeout: starting')
e.wait(t)
print('wait_for_event_timeout: e.is_set()->', e.is_set())
if __name__ == '__main__':
e = multiprocessing.Event()
w1 = multiprocessing.Process(
name='block',
target=wait_for_event,
args=(e,),
)
w1.start()
    # A second process waiting on the same event, to show that set()
    # releases every waiter.
    w1b = multiprocessing.Process(
        name='block2',
        target=wait_for_event,
        args=(e,),
    )
    w1b.start()
w2 = multiprocessing.Process(
name='nonblock',
target=wait_for_event_timeout,
args=(e, 2),
)
w2.start()
print('main: waiting before calling Event.set()')
time.sleep(3)
e.set()
print('main: event is set')
```
* When wait() times out it returns without an error. The caller is responsible for checking the state of the event using is_set().
* A single event.set() call wakes up every process that is waiting on that event.
## Controlling Access to Resources
In situations when a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.
```
import multiprocessing
import sys
def worker_with(lock, stream):
with lock:
stream.write('Lock acquired via with\n')
def worker_no_with(lock, stream):
lock.acquire()
try:
stream.write('Lock acquired directly\n')
finally:
lock.release()
lock = multiprocessing.Lock()
w = multiprocessing.Process(
target=worker_with,
args=(lock, sys.stdout),
)
nw = multiprocessing.Process(
target=worker_no_with,
args=(lock, sys.stdout),
)
w.start()
nw.start()
w.join()
nw.join()
```
## Synchronizing Operations
### Condition
Condition objects can be used to synchronize parts of a workflow so that some run in parallel but others run sequentially, even if they are in separate processes.
```
import multiprocessing
import time
def stage_1(cond):
"""perform first stage of work,
then notify stage_2 to continue
"""
name = multiprocessing.current_process().name
print('Starting', name)
with cond:
print('{} done and ready for stage 2'.format(name))
cond.notify_all()
def stage_2(cond):
"""wait for the condition telling us stage_1 is done"""
name = multiprocessing.current_process().name
print('Starting', name)
with cond:
cond.wait()
print('{} running'.format(name))
if __name__ == '__main__':
condition = multiprocessing.Condition()
s1 = multiprocessing.Process(name='s1',
target=stage_1,
args=(condition,))
s2_clients = [
multiprocessing.Process(
name='stage_2[{}]'.format(i),
target=stage_2,
args=(condition,),
)
for i in range(1, 3)
]
for c in s2_clients:
c.start()
time.sleep(1)
s1.start()
s1.join()
for c in s2_clients:
c.join()
```
In this example, two processes run the second stage of a job in parallel, but only after the first stage is done.
## Controlling Concurrent Access to Resources
Sometimes it is useful to allow more than one worker access to a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. A Semaphore is one way to manage those connections.
```
import random
import multiprocessing
import time
class ActivePool:
def __init__(self):
super(ActivePool, self).__init__()
self.mgr = multiprocessing.Manager()
self.active = self.mgr.list()
self.lock = multiprocessing.Lock()
def makeActive(self, name):
with self.lock:
self.active.append(name)
def makeInactive(self, name):
with self.lock:
self.active.remove(name)
def __str__(self):
with self.lock:
return str(self.active)
def worker(s, pool):
name = multiprocessing.current_process().name
with s:
pool.makeActive(name)
print('Activating {} now running {}'.format(
name, pool))
time.sleep(random.random())
pool.makeInactive(name)
if __name__ == '__main__':
pool = ActivePool()
s = multiprocessing.Semaphore(3)
jobs = [
multiprocessing.Process(
target=worker,
name=str(i),
args=(s, pool),
)
for i in range(10)
]
for j in jobs:
j.start()
while True:
alive = 0
for j in jobs:
if j.is_alive():
alive += 1
j.join(timeout=0.1)
print('Now running {}'.format(pool))
if alive == 0:
# all done
break
```
## Managing Shared State
In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating shared information state between all of its users.
```
import multiprocessing
import pprint
def worker(d, key, value):
d[key] = value
if __name__ == '__main__':
mgr = multiprocessing.Manager()
d = mgr.dict()
jobs = [
multiprocessing.Process(
target=worker,
args=(d, i, i * 2),
)
for i in range(10)
]
for j in jobs:
j.start()
for j in jobs:
j.join()
print('Results:', d)
```
By creating the dictionary through the manager, it is shared, and updates are seen in all processes. Lists are also supported.
## Shared Namespaces
In addition to dictionaries and lists, a Manager can create a shared Namespace.
```
import multiprocessing
def producer(ns, event):
ns.value = 'This is the value'
event.set()
def consumer(ns, event):
try:
print('Before event: {}'.format(ns.value))
except Exception as err:
print('Before event, error:', str(err))
event.wait()
print('After event:', ns.value)
if __name__ == '__main__':
mgr = multiprocessing.Manager()
namespace = mgr.Namespace()
event = multiprocessing.Event()
p = multiprocessing.Process(
target=producer,
args=(namespace, event),
)
c = multiprocessing.Process(
target=consumer,
args=(namespace, event),
)
c.start()
p.start()
c.join()
p.join()
```
Any named value added to the Namespace is visible to all of the clients that receive the Namespace instance.
**It is important to know that updates to the contents of mutable values in the namespace are not propagated automatically.**
```
import multiprocessing
def producer(ns, event):
# DOES NOT UPDATE GLOBAL VALUE!
ns.my_list.append('This is the value')
event.set()
def consumer(ns, event):
print('Before event:', ns.my_list)
event.wait()
print('After event :', ns.my_list)
if __name__ == '__main__':
mgr = multiprocessing.Manager()
namespace = mgr.Namespace()
namespace.my_list = []
event = multiprocessing.Event()
p = multiprocessing.Process(
target=producer,
args=(namespace, event),
)
c = multiprocessing.Process(
target=consumer,
args=(namespace, event),
)
c.start()
p.start()
c.join()
p.join()
```
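If the shared value does need to be a mutable container, one workaround (a sketch, not the only option) is to rebind the namespace attribute so the assignment goes through the proxy, or to store a `Manager.list()` instead of a plain list:
```
import multiprocessing

def producer(ns, event, shared_list):
    ns.my_list = ns.my_list + ['rebound value']  # rebinding the attribute propagates
    shared_list.append('manager list value')     # a Manager list also propagates
    event.set()

def consumer(ns, event, shared_list):
    event.wait()
    print('After event:', ns.my_list, list(shared_list))

if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    namespace = mgr.Namespace()
    namespace.my_list = []
    shared_list = mgr.list()
    event = multiprocessing.Event()
    p = multiprocessing.Process(target=producer,
                                args=(namespace, event, shared_list))
    c = multiprocessing.Process(target=consumer,
                                args=(namespace, event, shared_list))
    c.start()
    p.start()
    c.join()
    p.join()
```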
## Process Pools
The Pool class can be used to manage a fixed number of workers for simple cases where the work to be done can be broken up and distributed between workers independently. The return values from the jobs are collected and returned as a list. The pool arguments include the number of processes and a function to run when starting the task process (invoked once per child).
```
import multiprocessing
def do_calculation(data):
return data * 2
def start_process():
print('Starting', multiprocessing.current_process().name)
if __name__ == '__main__':
inputs = list(range(10))
print('Input :', inputs)
builtin_outputs = map(do_calculation, inputs)
print('Built-in:', [i for i in builtin_outputs])
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(
processes=pool_size,
initializer=start_process,
)
pool_outputs = pool.map(do_calculation, inputs)
pool.close() # no more tasks
pool.join() # wrap up current tasks
print('Pool :', pool_outputs)
```
By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources.
```
import multiprocessing
def do_calculation(data):
return data * 2
def start_process():
print('Starting', multiprocessing.current_process().name)
if __name__ == '__main__':
inputs = list(range(10))
print('Input :', inputs)
builtin_outputs = map(do_calculation, inputs)
    print('Built-in:', list(builtin_outputs))
pool_size = multiprocessing.cpu_count() * 2
pool = multiprocessing.Pool(
processes=pool_size,
initializer=start_process,
maxtasksperchild=2,
)
pool_outputs = pool.map(do_calculation, inputs)
pool.close() # no more tasks
pool.join() # wrap up current tasks
print('Pool :', pool_outputs)
```
The pool restarts the workers when they have completed their allotted tasks, even if there is no more work. Because the pool size is `cpu_count() * 2`, more workers may be created than there are tasks (eight workers on a four-core machine for these 10 tasks), and each worker completes at most two of them before being replaced.
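One way to observe the restarts (a sketch; the pool size of two is only for illustration) is to tag each result with the worker's process id. With `maxtasksperchild=2` and `chunksize=1`, new pids appear after every two tasks:
```
import multiprocessing
import os

def do_calculation(data):
    # return the result along with the pid of the worker that computed it
    return data * 2, os.getpid()

if __name__ == '__main__':
    with multiprocessing.Pool(processes=2, maxtasksperchild=2) as pool:
        for result, pid in pool.map(do_calculation, range(10), chunksize=1):
            print(result, 'computed by worker', pid)
```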
# Chainer MNIST Model Deployment
* Wrap a Chainer MNIST python model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
* Deploy on seldon-core running on minikube
## Dependencies
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
```bash
pip install seldon-core
pip install chainer==6.2.0
```
## Train locally
```
#!/usr/bin/env python
import argparse
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions
import chainerx
# Network definition
class MLP(chainer.Chain):
def __init__(self, n_units, n_out):
super(MLP, self).__init__()
with self.init_scope():
# the size of the inputs to each layer will be inferred
self.l1 = L.Linear(None, n_units) # n_in -> n_units
self.l2 = L.Linear(None, n_units) # n_units -> n_units
self.l3 = L.Linear(None, n_out) # n_units -> n_out
def forward(self, x):
h1 = F.relu(self.l1(x))
h2 = F.relu(self.l2(h1))
return self.l3(h2)
def main():
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=100,
help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--frequency', '-f', type=int, default=-1,
help='Frequency of taking a snapshot')
parser.add_argument('--device', '-d', type=str, default='-1',
help='Device specifier. Either ChainerX device '
'specifier or an integer. If non-negative integer, '
'CuPy arrays with specified device id are used. If '
'negative integer, NumPy arrays are used')
parser.add_argument('--out', '-o', default='result',
help='Directory to output the result')
parser.add_argument('--resume', '-r', type=str,
help='Resume the training from snapshot')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
parser.add_argument('--noplot', dest='plot', action='store_false',
help='Disable PlotReport extension')
group = parser.add_argument_group('deprecated arguments')
group.add_argument('--gpu', '-g', dest='device',
type=int, nargs='?', const=0,
help='GPU ID (negative value indicates CPU)')
args = parser.parse_args(args=[])
device = chainer.get_device(args.device)
print('Device: {}'.format(device))
print('# unit: {}'.format(args.unit))
print('# Minibatch-size: {}'.format(args.batchsize))
print('# epoch: {}'.format(args.epoch))
print('')
# Set up a neural network to train
# Classifier reports softmax cross entropy loss and accuracy at every
# iteration, which will be used by the PrintReport extension below.
model = L.Classifier(MLP(args.unit, 10))
model.to_device(device)
device.use()
# Setup an optimizer
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
# Load the MNIST dataset
train, test = chainer.datasets.get_mnist()
train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
repeat=False, shuffle=False)
# Set up a trainer
updater = training.updaters.StandardUpdater(
train_iter, optimizer, device=device)
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
# Evaluate the model with the test dataset for each epoch
trainer.extend(extensions.Evaluator(test_iter, model, device=device))
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
# TODO(niboshi): Temporarily disabled for chainerx. Fix it.
if device.xp is not chainerx:
trainer.extend(extensions.DumpGraph('main/loss'))
# Take a snapshot for each specified epoch
frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
# Save two plot images to the result dir
if args.plot and extensions.PlotReport.available():
trainer.extend(
extensions.PlotReport(['main/loss', 'validation/main/loss'],
'epoch', file_name='loss.png'))
trainer.extend(
extensions.PlotReport(
['main/accuracy', 'validation/main/accuracy'],
'epoch', file_name='accuracy.png'))
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
# Print a progress bar to stdout
trainer.extend(extensions.ProgressBar())
if args.resume is not None:
# Resume from a snapshot
chainer.serializers.load_npz(args.resume, trainer)
# Run the training
trainer.run()
if __name__ == '__main__':
main()
```
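The s2i wrapper below presumably needs a serialized copy of the trained network to load at serving time; the training script above only writes trainer snapshots. A minimal sketch of saving and reloading just the predictor (the file name and location are assumptions, `MLP` is the class defined above, and the save call would go at the end of `main()` where `model` is in scope):
```
import chainer

# after trainer.run(), save only the predictor's weights (path is illustrative)
chainer.serializers.save_npz('result/chainer-mnist.npz', model.predictor)

# at serving time, rebuild the architecture and load the weights
predictor = MLP(1000, 10)
chainer.serializers.load_npz('result/chainer-mnist.npz', predictor)
```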
Wrap model using s2i
```
!s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 chainer-mnist:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
!docker rm mnist_predictor --force
```
# Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
```
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python3:1.3.0-dev chainer-mnist:0.1
!kubectl create -f chainer_mnist_deployment.json
!kubectl rollout status deploy/chainer-mnist-deployment-chainer-mnist-predictor-76478b2
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace default -p
!minikube delete
```
# Test for Two Means - ANOVA (Analysis of Variance)
Analysis of variance is the statistical technique used to evaluate claims about population means. Fundamentally, the analysis checks whether there is a significant difference between the means and whether the factors influence some dependent variable, for $k$ populations with unknown means $\mu_i$.
The basic assumptions of analysis of variance are:
- The samples are random and independent
- The populations are normally distributed (the test is parametric)
- The population variances are equal
In practice, these assumptions do not all need to be strictly satisfied. The results are empirically valid whenever the populations are approximately normal (that is, not too skewed) and have similar variances.
We want to test whether the $k$ means are equal; for this we will use the **ANOVA - Analysis of Variance** table.
Variation of the data:
<br>
$$SQT = \sum_{i=1}^{k}\sum_{j=1}^{n_i} (x_{ij}- \overline x)^2 =
\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^2 -
\frac{1}{n}\Big(\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}\Big)^2 $$
<br><br>
$$SQE = \sum_{i=1}^{k} n_i(\overline x_{i}- \overline x)^2 =
\sum_{i=1}^{k} \frac{1}{n_i}\Big (\sum_{j=1}^{n_i} x_{ij}\Big)^2 -
\frac{1}{n}\Big(\sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}\Big)^2 $$
<br><br>
$$SQR = \sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^2 -
\sum_{i=1}^{k} \frac{1}{n_i}\Big (\sum_{j=1}^{n_i} x_{ij}\Big)^2$$
<br><br>
It can be verified that:
$$SQT=SQE+SQR$$
where:
- SQT: Total Sum of Squares
- SQE: Explained Sum of Squares
- SQR: Residual Sum of Squares
<br><br>
<img src="img/anova.png" width="450" />
<br><br>
Under the assumptions of random and independent variables, ideally each variable in a model explains a particular part of the dependent variable. With that in mind, we can picture the desired *fit* as variables that are independent of one another, as illustrated in the figure below.
<br><br>
<img src="img/anova_explicada.png" width="350" />
<br><br>
# Example: Tooth growth dataset with two different therapies
The dataset represents tooth growth in animals given two alternative therapies, where the response is the length of the odontoblasts (the cells responsible for tooth growth) in 60 guinea pigs. Each animal received one of three dose levels of vitamin C (0.5, 1 and 2 mg/day) by one of two delivery methods (orange juice, coded "OJ", or ascorbic acid, a form of vitamin C, coded "VC").
An important advantage of two-way ANOVA is that it is more efficient than the one-way version. There are two assignable sources of variation - supp and dose in our example - and this helps reduce the error variance, making the design more efficient. Two-way (factorial) ANOVA can be used, for example, to compare the means of populations that differ in two ways. It can also be used to analyse the mean responses in an experiment with two factors. Unlike one-way ANOVA, it lets us test the effect of two factors at the same time. One can also test for the independence of the factors, provided there is more than one observation in each cell. The only restriction is that the number of observations in each cell must be equal (there is no such restriction in the one-way case).
We discussed linear models earlier - and ANOVA is in fact a type of linear model - the difference is that ANOVA is used when you have discrete factors whose effect on a continuous outcome (variable) you want to understand.
## Importing the libraries
```
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.graphics.factorplots import interaction_plot
import matplotlib.pyplot as plt
from scipy import stats
```
## Importing the data
```
datafile = "../../99 Datasets/ToothGrowth.csv.zip"
data = pd.read_csv(datafile)
data.head()
data.info()
data.describe()
fig = interaction_plot(data.dose, data.supp, data.len,
colors=['red','blue'], markers=['D','^'], ms=10)
```
## Computing the sums of squares
<br>
<img src="img/SS.png">
<br>
```
# Degrees of freedom
N = len(data.len)
df_a = len(data.supp.unique()) - 1
df_b = len(data.dose.unique()) - 1
df_axb = df_a*df_b
df_w = N - (len(data.supp.unique())*len(data.dose.unique()))
grand_mean = data['len'].mean()
# SS for factor A
ssq_a = sum([(data[data.supp ==l].len.mean()-grand_mean)**2 for l in data.supp])
# SS for factor B
ssq_b = sum([(data[data.dose ==l].len.mean()-grand_mean)**2 for l in data.dose])
# Total SS
ssq_t = sum((data.len - grand_mean)**2)
## Residual (within) SS
vc = data[data.supp == 'VC']
oj = data[data.supp == 'OJ']
vc_dose_means = [vc[vc.dose == d].len.mean() for d in vc.dose]
oj_dose_means = [oj[oj.dose == d].len.mean() for d in oj.dose]
ssq_w = sum((oj.len - oj_dose_means)**2) + sum((vc.len - vc_dose_means)**2)
# SS for the AxB interaction
ssq_axb = ssq_t-ssq_a-ssq_b-ssq_w
```
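A quick consistency check (a sketch) on the decomposition above: the degrees of freedom of the two factors, the interaction and the residual should add up to $N - 1$:
```
# 1 + 2 + 2 + 54 = 59 = 60 - 1 for the ToothGrowth data
print(df_a + df_b + df_axb + df_w, N - 1)
```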
## Mean Squares
```
# MS for A
ms_a = ssq_a/df_a
# MS for B
ms_b = ssq_b/df_b
# MS for AxB
ms_axb = ssq_axb/df_axb
# MS for the residual
ms_w = ssq_w/df_w
```
## F-Score
```
# F-score for A
f_a = ms_a/ms_w
# F-score for B
f_b = ms_b/ms_w
# F-score for AxB (interaction)
f_axb = ms_axb/ms_w
```
## p-Value
```
# p-value for A
p_a = stats.f.sf(f_a, df_a, df_w)
# p-value for B
p_b = stats.f.sf(f_b, df_b, df_w)
# p-value for AxB (interaction)
p_axb = stats.f.sf(f_axb, df_axb, df_w)
```
## Results
```
# Putting the results into a DataFrame
results = {'sum_sq':[ssq_a, ssq_b, ssq_axb, ssq_w],
'df':[df_a, df_b, df_axb, df_w],
'F':[f_a, f_b, f_axb, 'NaN'],
'PR(>F)':[p_a, p_b, p_axb, 'NaN']}
columns=['sum_sq', 'df', 'F', 'PR(>F)']
aov_table1 = pd.DataFrame(results, columns=columns,
index=['supp', 'dose',
'supp:dose', 'Residual'])
# Computing Eta-Squared and Omega-Squared, and printing the table
def eta_squared(aov):
aov['eta_sq'] = 'NaN'
aov['eta_sq'] = aov[:-1]['sum_sq']/sum(aov['sum_sq'])
return aov
def omega_squared(aov):
mse = aov['sum_sq'][-1]/aov['df'][-1]
aov['omega_sq'] = 'NaN'
aov['omega_sq'] = (aov[:-1]['sum_sq']-(aov[:-1]['df']*mse))/(sum(aov['sum_sq'])+mse)
return aov
eta_squared(aov_table1)
omega_squared(aov_table1)
print(aov_table1)
```
### Comments
The dose variable has the largest distance from the mean value (sum_sq) and therefore the largest relative variance (F-score). This is confirmed by the Eta-Squared and Omega-Squared values (defined below).
### More on Eta-Squared and Omega-Squared
Another set of effect-size measures for categorical independent variables has a more intuitive interpretation and is easier to evaluate. These include Eta Squared, Partial Eta Squared and Omega Squared. Like the R Squared statistic, they all have the intuitive interpretation of the proportion of variance accounted for.
Eta Squared is calculated the same way as R Squared and has the most equivalent interpretation: of the total variation in Y, the proportion that can be attributed to a specific X.
Eta Squared, however, is used specifically in ANOVA models. Each categorical effect in the model has its own Eta Squared, so you get a specific, intuitive measure of the effect of that variable.
The drawback of Eta Squared is that it is a biased measure of the population variance explained (although it is exact for the sample); it always overestimates it.
This bias becomes very small as the sample size grows, but for small samples an unbiased effect-size measure is Omega Squared. Omega Squared has the same basic interpretation but uses unbiased measures of the variance components. Because it is an unbiased estimate of the population variances, Omega Squared is always smaller than Eta Squared (ES).
There are no agreed standards for how to interpret an ES. The interpretation is essentially subjective. The best approach is to compare it with other studies.
Cohen (1977):
- 0.2 = small
- 0.5 = moderate
- 0.8 = large
## ANOVA with Statsmodels
```
formula = 'len ~ C(supp) + C(dose) + C(supp):C(dose)'
model = ols(formula, data).fit()
aov_table = anova_lm(model, typ=2)
eta_squared(aov_table)
omega_squared(aov_table)
print(aov_table)
```
## Quantile-Quantile (QQplot)
```
res = model.resid
fig = sm.qqplot(res, line='s')
plt.show()
```
# Import development libraries
```
import bw2data as bd
import bw2calc as bc
import bw_processing as bwp
import numpy as np
import matrix_utils as mu
```
# Create new project
```
bd.projects.set_current("Multifunctionality")
```
Our existing implementation allows us to distinguish activities and products, though not everyone does this.
```
db = bd.Database("background")
db.write({
("background", "1"): {
"type": "process",
"name": "1",
"exchanges": [{
"input": ("background", "bio"),
"amount": 1,
"type": "biosphere",
}]
},
("background", "2"): {
"type": "process",
"name": "2",
"exchanges": [{
"input": ("background", "bio"),
"amount": 10,
"type": "biosphere",
}]
},
("background", "bio"): {
"type": "biosphere",
"name": "bio",
"exchanges": [],
},
("background", "3"): {
"type": "process",
"name": "2",
"exchanges": [
{
"input": ("background", "1"),
"amount": 2,
"type": "technosphere",
}, {
"input": ("background", "2"),
"amount": 4,
"type": "technosphere",
}, {
"input": ("background", "4"),
"amount": 1,
"type": "production",
}
]
},
("background", "4"): {
"type": "product",
}
})
method = bd.Method(("something",))
method.write([(("background", "bio"), 1)])
```
# LCA of background system
This database is fine and normal. It works the way we expect.
Here we use the preferred calling convention for Brightway 2.5, with the convenience function `prepare_lca_inputs`.
```
fu, data_objs, _ = bd.prepare_lca_inputs(demand={("background", "4"): 1}, method=("something",))
lca = bc.LCA(fu, data_objs=data_objs)
lca.lci()
lca.lcia()
lca.score
```
# Multifunctional activities
What happens when we have an activity that produces multiple products?
```
db = bd.Database("example mf")
db.write({
# Activity
("example mf", "1"): {
"type": "process",
"name": "mf 1",
"exchanges": [
{
"input": ("example mf", "2"),
"amount": 2,
"type": "production",
}, {
"input": ("example mf", "3"),
"amount": 4,
"type": "production",
},
{
"input": ("background", "1"),
"amount": 2,
"type": "technosphere",
}, {
"input": ("background", "2"),
"amount": 4,
"type": "technosphere",
}
]
},
# Product
("example mf", "2"): {
"type": "good",
"price": 4
},
# Product
("example mf", "3"): {
"type": "good",
"price": 6
}
})
```
We can do an LCA of one of the products, but we will get a warning about a non-square matrix:
```
fu, data_objs, _ = bd.prepare_lca_inputs(demand={("example mf", "1"): 1}, method=("something",))
lca = bc.LCA(fu, data_objs=data_objs)
lca.lci()
```
If we look at the technosphere matrix, we can see our background database (upper left quadrant), and the two production exchanges in the lower right:
```
lca.technosphere_matrix.toarray()
```
# Handling multifunctionality
There are many ways to do this. This notebook is an illustration of how such approaches can be made easier using the helper libraries [bw_processing](https://github.com/brightway-lca/bw_processing) and [matrix_utils](https://github.com/brightway-lca/matrix_utils), not a statement that one approach is better (or even correct).
We create a new, in-memory "delta" `bw_processing` data package that gives new values for some additional columns in the matrix (the virtual activities generated by allocating each product), as well as updating values in the existing matrix.
```
def economic_allocation(dataset):
assert isinstance(dataset, bd.backends.Activity)
# Split exchanges into functional and non-functional
functions = [exc for exc in dataset.exchanges() if exc.input.get('type') in {'good', 'waste'}]
others = [exc for exc in dataset.exchanges() if exc.input.get('type') not in {'good', 'waste'}]
for exc in functions:
assert exc.input.get("price") is not None
total_value = sum([exc.input['price'] * exc['amount'] for exc in functions])
# Plus one because need to add (missing) production exchanges
n = len(functions) * (len(others) + 1) + 1
data = np.zeros(n)
indices = np.zeros(n, dtype=bwp.INDICES_DTYPE)
flip = np.zeros(n, dtype=bool)
for i, f in enumerate(functions):
allocation_factor = f['amount'] * f.input['price'] / total_value
col = bd.get_id(f.input)
# Add explicit production
data[i * (len(others) + 1)] = f['amount']
indices[i * (len(others) + 1)] = (col, col)
for j, o in enumerate(others):
index = i * (len(others) + 1) + j + 1
data[index] = o['amount'] * allocation_factor
flip[index] = o['type'] in {'technosphere', 'generic consumption'}
indices[index] = (bd.get_id(o.input), col)
# Add implicit production of allocated dataset
data[-1] = 1
indices[-1] = (dataset.id, dataset.id)
# Note: This assumes everything is in technosphere, a real function would also
# patch the biosphere
allocated = bwp.create_datapackage(sum_intra_duplicates=True, sum_inter_duplicates=False)
allocated.add_persistent_vector(
matrix="technosphere_matrix",
indices_array=indices,
flip_array=flip,
data_array=data,
name=f"Allocated version of {dataset}",
)
return allocated
dp = economic_allocation(bd.get_activity(("example mf", "1")))
lca = bc.LCA({bd.get_id(("example mf", "2")): 1}, data_objs=data_objs + [dp])
lca.lci()
```
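To make the allocation explicit, here is a quick check (a sketch, using only calls already shown in this notebook) of the economic allocation factors that the delta package encodes:
```
act = bd.get_activity(("example mf", "1"))
functions = [exc for exc in act.exchanges() if exc.input.get("type") == "good"]
total_value = sum(exc["amount"] * exc.input["price"] for exc in functions)
for exc in functions:
    # expected: product "2" -> 2 * 4 / 32 = 0.25, product "3" -> 4 * 6 / 32 = 0.75
    print(exc.input, exc["amount"] * exc.input["price"] / total_value)
```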
Note that the last two columns, when summed together, form the unallocated activity (column 4):
```
lca.technosphere_matrix.toarray()
```
To make sure what we have done is clear, we can create the matrix just for the "delta" data package:
```
mu.MappedMatrix(packages=[dp], matrix="technosphere_matrix").matrix.toarray()
```
And we can now do LCAs of both allocated products:
```
lca.lcia()
lca.score
lca = bc.LCA({bd.get_id(("example mf", "3")): 1}, data_objs=data_objs + [dp])
lca.lci()
lca.lcia()
lca.score
```
#### _Speech Processing Labs 2021: SIGNALS 1: Digital Signals: Sampling and Superposition_
```
## Run this first!
%matplotlib inline
import sys
import matplotlib.pyplot as plt
import numpy as np
import cmath
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
plt.style.use('ggplot')
from dspMisc import *
```
# Digital Signals: Sampling and Superposition
### Learning Outcomes
* Understand how we can approximate a sine wave with a specific frequency, given a specific sampling rate
* Understand how sampling rate limits the frequencies of sinusoids we can describe with discrete sequences
* Explain when aliasing will occur and how this relates to the sampling rate and the Nyquist frequency.
* Observe how compound waveforms can be described as a linear combination of phasors ('superposition')
### Background
* Topic Videos: Digital Signal, Short Term Analysis, Series Expansion
* [Interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)
#### Extra background (extension material)
* [Phasors, complex numbers and sinusoids](./signals-1-2a-digital-signals-complex-numbers.ipynb)
## 1 Introduction
In the class videos, you've seen that sound waves are changes in air pressure (amplitude) over time. In the notebook [interpreting the discrete fourier transform](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb), we saw that we can
decompose complex sound waves into 'pure tone' frequency components. We also saw that the output of the DFT was actually a sequence of complex numbers! In this notebook, we'll give a bit more background on the relationship
between complex numbers and sinusoids, and why it's useful to characterise sinusoids in the complex plane.
## 2 Phasors and Sinusoids: tl;dr
At this point, I should say that you can get a conceptual understanding of digital signal processing concepts without going through _all_ the math. We certainly won't be examining your knowledge of complex numbers or geometry in this class. Of course, if you want to go further in understanding digital signal processing then you will have to learn a bit more about complex numbers, algebra, calculus and geometry than we'll touch upon here.
However, right now the main point that we'd like you to take away from this notebook is that we can conveniently represent periodic functions, like sine waves, in terms of **phasors**: basically what is shown on the left-hand side of the following gif:

You can think of the **phasor as an analogue clockface** with one moving hand. On the right hand side is one period of a 'pure tone' sinusoid, sin(t).
Now, we can think of every movement of the 'clockhand' (the phasor is actually this **vector**) as a step in time on the sinusoid graph: at every time step, the phasor (i.e., clockhand) rotates by some angle. If you follow the blue dots on both graphs, you should be able to see that the amplitude of the sinusoid matches the height of the clockhand on the phasor at each time step.
This gives us a different way of viewing the periodicity of $\sin(t)$. The sinusoid starts to repeat itself when the phasor has done one full circle. So, rather than drawing out an infinite time vs amplitude graph, we can capture the behaviour of this periodic function in terms of rotations around this finite circle.
So, what's the connection with complex numbers? Well, that blue dot on the phasor actually represents a complex number, and the dimensions of that graph are actually the **real** (horizontal) and **imaginary** (vertical) parts of that number. That is, a complex number of the form $a + jb$, where $a$ is the real part and $b$ is the imaginary part. Quite conveniently, we can also express complex numbers in terms of a **magnitude** or radius $r$ (length of the clockhand) and a **phase angle** $\theta$ (angle of rotation from the point (1,0)) and an exponential. So, we can write each point that the phasor hits in the form $re^{j\theta}$. This will be familiar if you've had a look at the DFT formulae.
This relationship with complex numbers basically allows us to describe complicated periodic waveforms in terms of combinations of 'pure tone' sinusoids. It turns out that maths for this works very elegantly using the phasor/complex number based representation.
The basic things you need to know are:
* A **sinusoid** (time vs amplitude, i.e. in the **time domain**) can be described in terms of a vector rotating around a circle (i.e. a phasor in the complex plane)
* The **phasor** vector (i.e., 'clockhand') is described by a complex number $re^{j\theta}$
* $re^{j\theta}$ is a point on a circle centered at (0,0) with radius $r$, $\theta$ degrees rotated from $(r,0)$ on the 2D plane.
* the **magnitude** $r$ tells us what the peak amplitude of the corresponding sine wave is
* the **phase angle** $\theta$ tells us how far around the circle the phasor has gone:
* zero degrees (0 radians) corresponds to the point (r,0), while 90 degrees ($\pi/2$ radians) corresponds to the point (0,r)
* The vertical projection of the vector (onto the y-axis) corresponds to the amplitude of a **sine wave** $\sin(\theta)$
* The horizontal projection of the vector (onto the x-axis) corresponds to the amplitude of a **cosine wave** $\cos(\theta)$
* The **period** of these sine and cosine waves is the same as the time it takes to make one full circle of the phasor (in seconds). As such the **frequency** of the sine and cosine waves is the same as the frequency with which the phasor makes a full cycle (in cycles/second = Hertz).
If you take the maths on faith, you can see all of this just from the gif above. You'll probably notice in most phonetics text books, if they show this at all, they will just show the rotating phasor without any of the details.
If you want to know more about how this works, you can find a quick tour of these concepts in the (extension) notebook on [complex numbers and sinusoids](./sp-m1-2-digital-signals-complex-numbers). But it's fine if you don't get all the details right now. In fact, if you get the intuition behind the phasor/sinusoid relationship above, it's fine to move on now to the rest of the content in this notebook.
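A small numeric illustration (a sketch) of those bullet points: the real and imaginary parts of $re^{j\theta}$ are exactly the cosine and sine projections described above.
```
r, theta = 1.0, np.pi / 4
z = r * np.exp(1j * theta)
print(np.real(z), r * np.cos(theta))  # horizontal projection -> cosine
print(np.imag(z), r * np.sin(theta))  # vertical projection -> sine
```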
## Changing the frequency of a sinusoid
So, we think of sine (and cosine) waves in terms of taking steps around a circle in the 2D (complex) plane. Each of these 'steps' was represented by a complex number, $re^{j\theta}$ (the phasor), where the magnitude $r$ tells you the radius of the circle, and the phase angle $\theta$ tells you how far around the circle you are. When $\theta = 0$ you are at the point (r,0), while $\theta = 90$ degrees means you are at the point (0,r). A complete cycle is 360 degrees (or $2\pi$ radians), i.e. when $\theta = 360$ degrees, you end up back at (r,0).
<div class="alert alert-success">
It's often easier to deal with angles measured in <strong>radians</strong> rather than <strong>degrees</strong>. The main thing to note is that:
$$2\pi \text{ radians} = 360 \text{ degrees, i.e. 1 full circle }$$
Again, it may not seem obvious why we should want to use radians instead of the more familiar degrees. The reason is that it makes dividing up a circle really nice and neat and so ends up making calculations much easier in the long run!
</div>
So that describes a generic sinusoid, e.g. $\sin(\theta)$, but now you might ask yourself: how do we generate a sine wave with a specific frequency $f$ Hertz (Hz = cycles/second)?
Let's take a concrete example, if we want a sinusoid with a frequency of $f=10$ Hz, that means:
* **Frequency:** we need to complete 10 full circles of the phasor in 1 second.
* **Period:** So, we have to complete 1 full cycle every 1/10 seconds (i.e. the period of this sinusoid $T=0.1$ seconds).
* **Angular velocity:** So, the phasor has to rotate at a speed of $2\pi/0.1 = 20\pi$ radians per second
So if we take $t$ to represent time, a sine wave with frequency 10 Hz has the form $\sin(20\pi t)$
* Check: at $t=0.1$ seconds we have $\sin(20 \times \pi \times 0.1) = \sin(2\pi)$, one full cycle.
* This corresponds to the phasor $e^{20\pi t j}$, where $t$ represents some point in time.
In general:
* A sine wave with peak amplitude R and frequency $f$ Hz is expressed as $R\sin(2 \pi f t)$
* The amplitude of this sine wave at time $t$ corresponds to the imaginary part of the phasor $Re^{2\pi ftj}$.
* A cosine wave with peak amplitude R and frequency $f$ Hz is expressed as $\cos(2 \pi f t)$
* The amplitude of this cosine wave at time $t$ corresponds to the real part of the phasor $Re^{2\pi ftj}$.
The term $2\pi f$ corresponds to the angular velocity, often written as $\omega$ which is measured in radians per second.
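As a small sketch of these formulas (the values below are chosen only for illustration), here is how you would sample $R\sin(2\pi f t)$ directly:
```
f = 10        # frequency in Hz
R = 1         # peak amplitude
t = np.arange(0, 0.5, 0.001)       # half a second of time points
y = R * np.sin(2 * np.pi * f * t)  # 5 full cycles in 0.5 seconds
```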
### Exercise
Q: What's the frequency of $\sin(2\pi t)$?
## Frequency and Sampling Rate
The representation above assumes we're dealing with a continuous sinusoid, but since we're dealing with computers we need to think about digital (i.e. discrete) representations of waveforms.
So if we want to analyze a wave, we also need to sample it at a specific **sampling rate**, $f_s$.
For a given sampling rate $f_s$ (samples/second) we can work out the time between each sample, the **sampling period** as:
$$ t_s = \frac{1}{f_s}$$
The units of $t_s$ are seconds/sample. That means that if we want a phasor to complete $f$ cycles/second, the angle $\theta_s$ it advances between successive samples needs to be just the right size, given that samples arrive every $t_s$ seconds.
The units here help us figure this out: the desired frequency $f$ has units cycles/second. So, we can calculate what fraction of a complete cycle we need to take with each sample by multiplying $f$ with the sampling time $t_s$.
* $c_s = ft_s$.
* cycles/sample = cycles/second x seconds/sample
We know each cycle is $2\pi$ radians (360 degrees), so we can then convert $c_s$ to an angle as follows:
* $ \theta_s = 2 \pi c_s $
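A quick sketch of these conversions, using the $f=2$ Hz, $f_s=16$ example from the next section (not the exercise values):
```
f = 2                       # cycles/second
fs = 16                     # samples/second
t_s = 1 / fs                # seconds/sample
c_s = f * t_s               # cycles/sample
theta_s = 2 * np.pi * c_s   # radians/sample
print(t_s, c_s, theta_s)    # 0.0625, 0.125, ~0.785 (i.e. pi/4)
```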
### Exercise
Q: Calculate the period $t_s$ and angle $\theta_s$ between samples for a sine wave with frequency $f=8$ Hz and a sampling rate of $f_s=64$ samples/second
### Notes
### Setting the Phasor Frequency
I've written a function `gen_phasor_vals_freq` that calculates the complex phasor values (`zs`), angles (`thetas`) and time steps (`tsteps`) for a phasor with a given frequency `freq` over a given time period (`Tmin` to `Tmax`). In the following we'll use this to plot how changes in the phasor relate to changes in the corresponding sinusoid given a specific sampling rate (`sampling_rate`).
#### Example:
Let's look at a phasor and corresponding sine wave with frequency $f=2$ Hz (`freq`), given a sampling rate of $f_s=16$ (`sampling_rate`) over 4 seconds.
```
## Our parameters:
Tmin = 0
Tmax = 4
freq = 2 # cycles/second
sampling_rate = 16 # i.e, f_s above
t_step=1/sampling_rate # i.e., t_s above
## Get our complex values corresponding to the phasor with frequency freq
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Project to real and imaginary parts for plotting
Xs = np.real(zs)
Ys = np.imag(zs)
## generate the background for the plot: a phasor diagram on the left, a time v amplitude graph on the right
fig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)
## the phasor is plotted on the left with a circle of radius 1 for reference
phasor.set_xlabel("Real values")
phasor.set_ylabel("Imaginary values")
# plot the points the phasor will "step on"
phasor.scatter(Xs, Ys)
## Plot our actual sampled sine wave in magenta on the right
sinusoid.plot(tsteps, Ys, 'o', color='magenta')
sinusoid.set_xlabel("Time (s)")
sinusoid.set_ylabel("Amplitude")
```
You should see two graphs above:
* On the left is the phasor diagram: the grey circle represents a phasor with magnitude 1, and the red dots represent the points on the circle that the phasor samples between `Tmin` and `Tmax` given the `sampling_rate`.
* On the right is the time versus amplitude graph: the grey line shows a continuous sine wave with frequency `freq`, and the magenta dots show the points we actually sample between times `Tmin` and `Tmax` given the `sampling_rate`.
You can see that although we sample 64 points for the sine wave, we actually just hit the same 8 values per cycle on the phasor.
It's clearer when we animate the phasor in time:
```
## Now let's animate it!
## a helper to draw the 'clockhand' line
X, Y, n_samples = get_line_coords(Xs, Ys)
## initialize the animation
line = phasor.plot([], [], color='b', lw=3)[0]
sin_t = sinusoid.plot([], [], 'o', color='b')[0]
figs = (line, sin_t)
anim = FuncAnimation(
fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)
HTML(anim.to_html5_video())
```
### Exercise
Change the `freq` variable in the code below to investigate:
* What happens when the sine wave frequency (cycles/second) `freq` is set to `sampling_rate/2`?
* What happens when the frequency `freq` approaches half the `sampling_rate`?
* What happens when the frequency `freq` equals half the `sampling_rate`?
* What happens when the frequency `freq` is greater than `sampling_rate/2`?
```
## Example: Play around with these values
Tmax = 1
Tmin = 0
freq = 15 # cycles/second
sampling_rate = 16 # f_s above
t_step=1/sampling_rate
print("freq=%.2f cycles/sec, sampling rate=%.2f samples/sec, sampling period=%.2f sec" % (freq, sampling_rate, t_step) )
## Get our complex values corresponding to the sine wave
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Project to real and imaginary parts for plotting
Xs = np.real(zs)
Ys = np.imag(zs)
## generate the background
fig, phasor, sinusoid = create_anim_bkg(tsteps, thetas, freq)
## Plot the phasor samples
phasor.scatter(Xs, Ys)
phasor.set_xlabel("Real values")
phasor.set_ylabel("Imaginary values")
## Plot our actual sampled sine wave in magenta
sinusoid.plot(tsteps, Ys, 'o-', color='magenta')
sinusoid.set_xlabel("Time (s)")
sinusoid.set_ylabel("Amplitude")
## Animate the phasor and sinusoid
X, Y, n_samples = get_line_coords(Xs, Ys)
line = phasor.plot([], [], color='b', lw=3)[0]
sin_t = sinusoid.plot([], [], 'o', color='b')[0]
figs = (line, sin_t)
anim = FuncAnimation(
fig, lambda x: anim_sinusoid(x, X=X, Y=Y, tsteps=tsteps, figs=figs), interval=600, frames=n_samples)
HTML(anim.to_html5_video())
```
### Notes
## Aliasing
If you change the frequency (`freq`) of the phasor to be higher than half the sampling rate, you'll see that the actual frequency of the sampled sinusoid doesn't keep getting higher. In fact, with `freq=8` the sine wave (i.e. the projection of the vertical (imaginary) component) doesn't appear to have any amplitude modulation at all. However, keen readers will note that for `sampling_rate=16` and `freq=8` in the example above, the real projection (i.e. cosine) would still show amplitude modulations, since $\cos(t)$ is 90 degrees phase-shifted relative to $\sin(t)$. The phasor for `freq=15` appears to complete only one cycle per second, just like for `freq=1`, but appears to rotate the opposite way.
These are examples of **aliasing**: given a specific sampling rate there is a limit to which we can distinguish different frequencies because we simply can't take enough samples to show the difference!
In the example above, even though we are sampling from a 15 Hz wave for `freq=15`, we get roughly one sample per cycle and the overall sampled sequence looks like a 1 Hz wave. The phasor appears to rotate the opposite way to `freq=1` because each sample advances it 15/16 of a cycle, which lands on exactly the same points as stepping backwards by 1/16 of a cycle.
<div class="alert alert-success">
In general, with a sampling rate of $f_s$ we can't distinguish between a sine wave of frequency $f_0$ and a sine wave of $f_0 + kf_s$ for any integer $k$.
</div>
This means that we can't actually tell the frequency of the underlying waveform based on the sample amplitudes alone.
The practical upshot of this is that for sampling rate $f_s$, the highest frequency we can actually represent is $f_s/2$, the **Nyquist Frequency**. This is one of the most important concepts in digital signal processing and will affect pretty much all the methods we use. It's why we see the mirroring effect in [the DFT output spectrum](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb). So, if you remember just one thing, remember this!
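A quick numerical check of this (a sketch): with $f_s = 16$, a 15 Hz sine sampled at $t = n/16$ lands on exactly the same values as a 1 Hz sine rotating the opposite way (i.e. $-1$ Hz):
```
fs = 16
t = np.arange(2 * fs) / fs   # two seconds of sample times
print(np.allclose(np.sin(2 * np.pi * 15 * t),
                  np.sin(2 * np.pi * -1 * t)))   # True
```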
## Superposition
This use of phasors to represent sinusoids may seem excessively complex at the moment, but it actually gives us a nice way of visualizing what happens when we add two sine waves together, i.e. linear superposition.
We've seen how the Fourier Transform gives us a way of breaking down periodic waveforms (no matter how complicated) into a linear combination of sinusoids (cosine waves, specifically). But if you've seen the actual DFT equations, you'll have noticed that each DFT output is actually described in terms of phasors of specific frequencies (e.g. sums over $e^{-j \theta}$ values). We can now get at least a visual idea of what this means.
Let's look at how combining phasors can let us define complicated waveforms in a simple manner.
### Magnitude and Phase Modifications
First, let's note that we can easily change the magnitude and phase of a sine wave before adding it to others to make a complex waveform.
* We can change the magnitude of a sinusoidal component by multiplying all the values of that sinusoid by a scalar $r$.
* We can apply a phase shift of $\phi$ radians to $\sin(\theta)$, which gives us a sine wave of the form $\sin(\theta + \phi)$. It basically means we start our cycles around the unit circle at $e^{j\phi}$ instead of at $e^{j0} = 1 + j0 \mapsto (1,0)$
### Generating linear combinations of sinusoids
Let's plot some combinations of sinusoids.
First let's set the sampling rate and the start and end times of the sequence we're going to generate:
```
## Some parameters to play with
Tmax = 2
Tmin = 0
sampling_rate = 16
t_step=1/sampling_rate
```
Now, let's create some phasors with different magnitudes, frequencies and phases. Here we create 2 phasors with magnitude 1 and no phase shift, one with `freq=2` Hz and another phasor with frequency `2*freq`.
We then add the two phasors values together at each timestep (`zs_sum` in the code below):
```
## Define a bunch of sinusoids. We can do this in terms of 3 parameters:
## (magnitude, frequency, phase)
## The following defines two sinusoids, both with magnitude (peak amplitude) 1 and the same phase (no phase shift)
## The second has double the frequency of the first:
freq=2
params = [(1, freq, 0), (1, 2*freq, 0)]
## Later: change these values and see what happens, e.g.
#params = [(1, freq, 0), (0.4, 5*freq, 0), (0.4, 5*freq, np.pi)]
phasor_list = []
theta_list = []
tsteps_list = []
## Generate a list of phasors for each set of (mag, freq, phase) parameters
for mag, freq, phase in params:
## Generate a phasor with frequency freq
## zs are the phasor values
## thetas are the corresponding angles for each value in zs
## tsteps are the corresponding time steps for each value in zs
zs, thetas, tsteps = gen_phasor_vals_freq(Tmin=Tmin, Tmax=Tmax, t_step=t_step, freq=freq)
## Apply the phase_shift
phase_shift = np.exp(1j*phase)
## scale by the magnitude mag - changes the peak amplitude
zs = mag*zs*phase_shift
## Append the phasor to a list
phasor_list.append(zs)
## The angle sequence and time sequence in case you want to inspect them
## We don't actually use them below
theta_list.append(thetas)
tsteps_list.append(tsteps)
## Superposition: add the individual phasors in the list together (all with the same weights right now)
zs_sum = np.zeros(len(tsteps_list[0]))
for z in phasor_list:
zs_sum = zs_sum + z
```
Now, we can plot the sine (vertical) component of the individual phasors (on the right), ignoring the cosine (horizontal) component for the moment.
```
## Plot the phasor (left) and the projection of the imaginary (vertical) component (right)
## cosproj would be the projection to the real axis, but let's just ignore that for now
fig, phasor, sinproj, cosproj = create_phasor_sinusoid_bkg(Tmin, Tmax, ymax=3, plot_phasor=True, plot_real=False, plot_imag=True,)
dense_tstep=0.001
for mag, freq, phase in params:
## We just want to plot the individual sinusoids (time v amplitude), so we ignore
## the complex numbers we've been using to plot the phasors
_, dense_thetas, dense_tsteps = gen_phasor_vals_freq(Tmin, Tmax, dense_tstep, freq)
sinproj.plot(dense_tsteps, mag*np.sin(dense_thetas+phase), color='grey')
```
Now plot the sum of the phasors (left) and the projected imaginary component in magenta (right) - that is, the sum of the sine components (in grey)
```
## Plot sinusoids as sampled
Xlist = []
Ylist = []
## some hacks to get to represent the individual phasors as lines from the centre of a circle as well as points
for i, zs in enumerate(phasor_list):
Xs_ = np.real(zs)
Ys_ = np.imag(zs)
X_, Y_, _ = get_line_coords(Xs_, Ys_)
Xlist.append(X_)
Ylist.append(Y_)
## Project the real and imaginary parts of the timewise summed phasor values
Xs = np.real(zs_sum)
Ys = np.imag(zs_sum)
Xline, Yline, _ = get_line_coords(Xs, Ys)
## plot the summed phasor values as 2-d coordinates (left)
## plot the sine projection of the phasor values in time (right)
sinproj.plot(tsteps_list[0], Ys, color='magenta')
fig
```
Now let's see an animation of how we're adding these phasors together!
```
anim = get_phasor_animation(Xline, Yline, tsteps, phasor, sinproj, cosproj, fig, Xlist=Xlist, Ylist=Ylist, params=params)
anim
```
In the animation above you should see:
* the red circle represents the first phasor (`freq=2`)
* the blue circle represents the 2nd phasor (`freq=4`)
* In adding the two phasors together, we add the corresponding vectors for each phasor at each point in time.
### Exercise:
* What happens when you add up two sinusoids with the same frequency but different magnitudes
* e.g. `params = [(1, freq, 0), (2, freq, 0)]`
* What happens when you change the phase?
* Can you find $\phi$ such that $\sin(\theta+\phi) = \cos(\theta)$ ?
* When do the individual sinusoids cancel each other out?
* Assume you have a compound sinusoid defined by the following params:
* `params = [(1, freq, 0), (0.4, 5*freq, 0)]`
* What sinusoid could you add to cancel the higher frequency component out while keeping the lower frequency one?
### Notes
## Maths Perspective: The DFT equation as a sum of phasors
Now if you look at the mathematical form of the DFT, you can start to recognize this as representing a sequence of phasors of different frequencies, which have a real (cosine) and imaginary (sine) component.
The DFT is defined as follows:
* For input: $x[n]$, for $n=0..N-1$ (i.e. a time series of $N$ samples)
* We calculate an output of N complex numbers $\mapsto$ magnitude and phases of specific phasors:
Where the $k$th output, DFT[k], is calculated using the following equation:
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi n}{N} k} \\
\end{align}
$$
Which is equivalent to the following (using Euler's rule):
$$
\begin{align}
DFT[k] &= \sum_{n=0}^{N-1} x[n]\big[\cos(\frac{2\pi n}{N} k) - j \sin(\frac{2\pi n}{N} k) \big]
\end{align}
$$
This basically says that each DFT output is the result of multiplying the $n$th input value $x[n]$ with the $n$th sample of a phasor (hence sine and cosine waves) of a specific frequency, and summing the result (hence the complex number output). The frequency of DFT[k] is $k$ times the frequency of DFT[1], where the frequency of DFT[1] depends on the input size $N$ and the sampling rate (as discussed in [this notebook](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb)). The sampling rate determines the time each phasor step takes, hence how much time it takes to make a full phasor cycle, hence what frequencies we can actually compare the input against.
The pointwise multiplication and summation is also known as a dot product (aka inner product). The dot product between two vectors tells us how similar those two vectors are. So in a very rough sense, the DFT 'figures out' which frequency components are present in the input, by looking at how similar the input is to each of the N phasors represented in the DFT output.
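As a sketch of the equation above, here is a direct (naive) implementation of the DFT, compared against numpy's FFT on a short test signal:
```
def naive_dft(x):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                     for k in range(N)])

x = np.sin(2 * np.pi * 2 * np.arange(16) / 16)   # 2 cycles over 16 samples
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True
```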
There are two more notebooks on the DFT for this module, but both are extension material (not essential).
* [This notebook](./signals-1-3-discrete-fourier-transform-in-detail.ipynb) goes into more maths details but is purely extension (you can skip)
* [This notebook](./signals-1-4-more-interpreting-the-dft.ipynb) looks at a few more issues in interpreting the DFT
So, you can look at those if you want more details. Otherwise, we'll move onto the source-filter model in the second signals lab!
<a href="https://colab.research.google.com/github/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_train_tf2_ssd_rcnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Train Tensorflow Faster-RCNN and SSD models to detect bats (Chiroptera) from EOL images
---
*Last Updated 19 Oct 2021*
-Now runs in Python 3 with Tensorflow 2.0-
Use EOL user generated cropping coordinates to train Faster-RCNN and SSD Object Detection Models implemented in Tensorflow to detect bats from EOL images. Training data consists of the user-determined best square thumbnail crop of an image, so model outputs will also be a square around objects of interest.
Datasets were downloaded to Google Drive in [chiroptera_preprocessing.ipynb](https://github.com/aubricot/computer_vision_with_eol_images/blob/master/object_detection_for_image_cropping/chiroptera/chiroptera_preprocessing.ipynb).
***Models were trained in Python 2 and TF 1 in Jan 2020: RCNN trained for 2 days to 200,000 steps and SSD for 4 days to 450,000 steps.***
Notes:
* Before you start: change the runtime to "GPU" with "High RAM"
* Change parameters using form fields on right (/where you see 'TO DO' in code)
* For each 24 hour period on Google Colab, you have up to 12 hours of free GPU access.
References:
* [Official Tensorflow Object Detection API Instructions](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html)
* [Medium Blog on training using Tensorflow Object Detection API in Colab](https://medium.com/analytics-vidhya/training-an-object-detection-model-with-tensorflow-api-using-google-colab-4f9a688d5e8b)
## Installs & Imports
---
```
# Mount google drive to import/export files
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# For running inference on the TF-Hub module
import tensorflow as tf
import tensorflow_hub as hub
# For downloading and displaying images
import matplotlib
import matplotlib.pyplot as plt
import tempfile
import urllib
from urllib.request import urlretrieve
from six.moves.urllib.request import urlopen
from six import BytesIO
# For drawing onto images
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
# For measuring the inference time
import time
# For working with data
import numpy as np
import pandas as pd
import os
import csv
# Print Tensorflow version
print('Tensorflow Version: %s' % tf.__version__)
# Check available GPU devices
print('The following GPU devices are available: %s' % tf.test.gpu_device_name())
# Define functions
# Read in data file exported from "Combine output files A-D" block above
def read_datafile(fpath, sep="\t", header=0, disp_head=True):
"""
Defaults to tab-separated data files with header in row 0
"""
try:
df = pd.read_csv(fpath, sep=sep, header=header)
if disp_head:
print("Data header: \n", df.head())
except FileNotFoundError as e:
raise Exception("File not found: Enter the path to your file in form field and re-run").with_traceback(e.__traceback__)
return df
# To load image in and do something with it
def load_img(path):
img = tf.io.read_file(path)
img = tf.image.decode_jpeg(img, channels=3)
return img
# To display loaded image
def display_image(image):
fig = plt.figure(figsize=(20, 15))
plt.grid(False)
plt.imshow(image)
# For reading in images from URL and passing through TF models for inference
def download_and_resize_image(url, new_width=256, new_height=256, #From URL
display=False):
_, filename = tempfile.mkstemp(suffix=".jpg")
response = urlopen(url)
image_data = response.read()
image_data = BytesIO(image_data)
pil_image = Image.open(image_data)
im_h, im_w = pil_image.size
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
pil_image_rgb = pil_image.convert("RGB")
pil_image_rgb.save(filename, format="JPEG", quality=90)
#print("Image downloaded to %s." % filename)
if display:
display_image(pil_image)
return filename, im_h, im_w
# Download, compile and build the Tensorflow Object Detection API (takes 4-9 minutes)
# TO DO: Type in the path to your working directory in form field to right
basewd = "/content/drive/MyDrive/train" #@param {type:"string"}
%cd $basewd
# Set up directory for TF2 Model Garden
# TO DO: Type in the folder you would like to contain TF2
folder = "tf2" #@param {type:"string"}
if not os.path.exists(folder):
os.makedirs(folder)
%cd $folder
os.makedirs("tf_models")
%cd tf_models
# Clone the Tensorflow Model Garden
!git clone --depth 1 https://github.com/tensorflow/models/
%cd ../..
# Build the Object Detection API
wd = basewd + '/' + folder
%cd $wd
!cd tf_models/models/research/ && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install .
```
## Model preparation (only run once)
---
These blocks download and set-up files needed for training object detectors. After running once, you can train and re-train as many times as you'd like.
### Download and extract pre-trained models
```
# Download pre-trained models from Tensorflow Object Detection Model Zoo
# https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
# SSD and Faster-RCNN used as options below
# modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
import shutil
import glob
import tarfile
# CD to folder where TF models are installed (tf2)
%cd $wd
# Make folders for your training files for each model
# Faster RCNN Model
if not (os.path.exists('tf_models/train_demo')):
!mkdir tf_models/train_demo
if not (os.path.exists('tf_models/train_demo/rcnn')):
!mkdir tf_models/train_demo/rcnn
if not (os.path.exists('tf_models/train_demo/rcnn/pretrained_model')):
!mkdir tf_models/train_demo/rcnn/pretrained_model
if not (os.path.exists('tf_models/train_demo/rcnn/finetuned_model')):
!mkdir tf_models/train_demo/rcnn/finetuned_model
if not (os.path.exists('tf_models/train_demo/rcnn/trained')):
!mkdir tf_models/train_demo/rcnn/trained
# Download the model
MODEL = 'faster_rcnn_resnet50_v1_640x640_coco17_tpu-8'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'
DEST_DIR = 'tf_models/train_demo/rcnn/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
# SSD Model
if not (os.path.exists('tf_models/train_demo/ssd')):
!mkdir tf_models/train_demo/ssd
if not (os.path.exists('tf_models/train_demo/ssd/pretrained_model')):
!mkdir tf_models/train_demo/ssd/pretrained_model
if not (os.path.exists('tf_models/train_demo/ssd/finetuned_model')):
!mkdir tf_models/train_demo/ssd/finetuned_model
if not (os.path.exists('tf_models/train_demo/ssd/trained')):
!mkdir tf_models/train_demo/ssd/trained
# Download the model
MODEL = 'ssd_mobilenet_v2_320x320_coco17_tpu-8'
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/tf2/20200711/'
DEST_DIR = 'tf_models/train_demo/ssd/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
```
### Convert training data to tf.record format
1) Download generate_tfrecord.py using code block below
2) Open the Colab file explorer on the right and navigate to your current working directory
3) Double click on generate_tfrecord.py to open it in the Colab text editor.
4) Modify the file for your train dataset:
* update label names to the class(es) of interest at line 31 (Chiroptera)

      # TO-DO replace this with label map
      def class_text_to_int(row_label):
          if row_label == 'Chiroptera':
              return 1
          else:
              None

* update the filepath where you want your train tf.record file to save at line 85

      # TO-DO replace path with your filepath
      def main(_):
          writer = tf.python_io.TFRecordWriter('/content/drive/MyDrive/[yourfilepath]/tf.record')
5) Close Colab text editor and proceed with steps below to generate tf.record files for your test and train datasets
```
# Download chiroptera_generate_tfrecord.py to your wd in Google Drive
# Follow directions above to modify the file for your dataset
!gdown --id 1fVXeuk7ALHTlTLK3GGH8p6fMHuuWt1Sr
# Convert crops_test to tf.record format for test data
# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
# TO DO: Update file paths in form fields
csv_input = "/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_test_notaug_oob_rem_fin.csv" #@param {type:"string"}
output_path = "/content/drive/MyDrive/train/tf2/test_images/tf.record" #@param {type:"string"}
test_image_dir = "/content/drive/MyDrive/train/tf2/test_images" #@param {type:"string"}
!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$test_image_dir
# Move tf.record for test images to the test images directory (only needed if it was written to the working directory)
!mv tf.record $test_image_dir
# Convert crops_train to tf.record format for train data
# Modified from https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html
# TO DO: Update file paths in form fields
csv_input = "/content/drive/MyDrive/train/tf2/pre-processing/Chiroptera_crops_train_aug_oob_rem_fin.csv" #@param {type:"string"}
output_path = "/content/drive/MyDrive/train/tf2/images/tf.record" #@param {type:"string"}
train_image_dir = "/content/drive/MyDrive/train/tf2/images" #@param {type:"string"}
!python chiroptera_generate_tfrecord.py --csv_input=$csv_input --output_path=$output_path --image_dir=$train_image_dir
# Move tf.record for training images to the train images directory (only needed if it was written to the working directory)
!mv tf.record $train_image_dir
```
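As a quick sanity check (not part of the original workflow), you can count how many examples ended up in each tf.record file; the paths in the sketch below assume the same form-field values used above.
```
# Optional check: count records in the generated tf.record files
import tensorflow as tf
for rec in ["/content/drive/MyDrive/train/tf2/images/tf.record",
            "/content/drive/MyDrive/train/tf2/test_images/tf.record"]:
    n = sum(1 for _ in tf.data.TFRecordDataset(rec))
    print(rec, "->", n, "examples")
```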
### Make label map for class Chiroptera
```
%%writefile labelmap.pbtxt
item {
id: 1
name: 'Chiroptera'
}
```
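To sanity-check the label map, it can be parsed back with the Object Detection API built above; the snippet below is a minimal sketch that assumes `label_map_util` is importable from that installation.
```
# Optional check: parse labelmap.pbtxt back into a category index
from object_detection.utils import label_map_util
category_index = label_map_util.create_category_index_from_labelmap('labelmap.pbtxt', use_display_name=True)
print(category_index)  # expected: {1: {'id': 1, 'name': 'Chiroptera'}}
```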
### Modify model config files for training Faster-RCNN and SSD with your dataset
If you run into errors during training, check the `pipeline_config_path` and `model_dir` entries in the config files for the SSD or Faster-RCNN model.
```
# Adjust model config file based on training/testing datasets
# Modified from https://stackoverflow.com/a/63645324
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2
%cd $wd
# TO DO: Adjust parameters ## add form fields here
filter = "Chiroptera" #@param {type:"string"}
config_basepath = "tf_models/train_demo/" #@param {type:"string"}
label_map = 'labelmap.pbtxt'
train_tfrecord_path = "/content/drive/MyDrive/train/tf2/images/tf.record" #@param {type:"string"}
test_tfrecord_path = "/content/drive/MyDrive/train/tf2/test_images/tf.record" #@param {type:"string"}
ft_ckpt_basepath = "/content/drive/MyDrive/train/tf2/tf_models/train_demo/" #@param {type:"string"}
ft_ckpt_type = "detection" #@param ["detection", "classification"]
num_classes = 1 #@param
batch_size = 1 #@param ["1", "4", "8", "16", "32", "64", "128"] {type:"raw"}
# Define pipeline for modifying model config files
def read_config(model_config):
if 'rcnn/' in model_config:
model_ckpt = 'rcnn/pretrained_model/checkpoint/ckpt-0'
elif 'ssd/' in model_config:
model_ckpt = 'ssd/pretrained_model/checkpoint/ckpt-0'
config_fpath = config_basepath + model_config
pipeline = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(config_fpath, "r") as f:
proto_str = f.read()
text_format.Merge(proto_str, pipeline)
return pipeline, model_ckpt, config_fpath
def modify_config(pipeline, model_ckpt, ft_ckpt_basepath):
finetune_checkpoint = ft_ckpt_basepath + model_ckpt
  if pipeline.model.HasField('faster_rcnn'):
    pipeline.model.faster_rcnn.num_classes = num_classes
  else:  # SSD config
    pipeline.model.ssd.num_classes = num_classes
pipeline.train_config.fine_tune_checkpoint = finetune_checkpoint
pipeline.train_config.fine_tune_checkpoint_type = ft_ckpt_type
pipeline.train_config.batch_size = batch_size
pipeline.train_config.use_bfloat16 = False # True only if training on TPU
pipeline.train_input_reader.label_map_path = label_map
pipeline.train_input_reader.tf_record_input_reader.input_path[0] = train_tfrecord_path
pipeline.eval_input_reader[0].label_map_path = label_map
pipeline.eval_input_reader[0].tf_record_input_reader.input_path[0] = test_tfrecord_path
return pipeline
def write_config(pipeline, config_fpath):
config_outfpath = os.path.splitext(config_fpath)[0] + '_' + filter + '.config'
config_text = text_format.MessageToString(pipeline)
with tf.io.gfile.GFile(config_outfpath, "wb") as f:
f.write(config_text)
return config_outfpath
def setup_pipeline(model_config, ft_ckpt_basepath):
print('\n Modifying model config file for {}'.format(model_config))
pipeline, model_ckpt, config_fpath = read_config(model_config)
pipeline = modify_config(pipeline, model_ckpt, ft_ckpt_basepath)
config_outfpath = write_config(pipeline, config_fpath)
  print(' Modified model config file saved to {}'.format(config_outfpath))
if config_outfpath:
return "Success!"
else:
return "Fail: try again"
# Modify model configs
model_configs = ['rcnn/pretrained_model/pipeline.config', 'ssd/pretrained_model/pipeline.config']
[setup_pipeline(model_config, ft_ckpt_basepath) for model_config in model_configs]
```
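To confirm that the generated config files picked up the right values, you can re-read one of them with the same protobuf utilities used above; this is a sketch, and the path assumes the Faster-RCNN config written by the loop.
```
# Optional check: re-read a generated config and print the fields modified above
check_path = config_basepath + 'rcnn/pretrained_model/pipeline_' + filter + '.config'
check = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(check_path, "r") as f:
    text_format.Merge(f.read(), check)
print(check.train_config.fine_tune_checkpoint)
print(check.train_config.batch_size)
print(check.train_input_reader.tf_record_input_reader.input_path[0])
```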
## Train
---
```
# Determine how many train and eval steps to use based on dataset size
# TO DO: Only need to update path if you didn't just run "Model Preparation" block above
try:
train_image_dir
except NameError:
train_image_dir = "/content/drive/MyDrive/train/tf2/images" #@param {type:"string"}
examples = len(os.listdir(train_image_dir))
print("Number of train examples: \n", examples)
# Get the number of testing examples
# TO DO: Only need to update path if you didn't just run "Model Preparation" block above
try:
test_image_dir
except NameError:
test_image_dir = "/content/drive/MyDrive/train/tf2/test_images" #@param {type:"string"}
test_examples = len(os.listdir(test_image_dir))
print("Number of test examples: \n", test_examples)
# Get the training batch size
# TO DO: Only need to update value if you didn't just run "Model Preparation" block above
try:
batch_size
except NameError:
batch_size = 1 #@param ["1", "4", "8", "16", "32", "64", "128"] {type:"raw"}
print("Batch size: \n", batch_size)
# Calculate roughly how many steps to use for training and testing
steps_per_epoch = examples / batch_size
num_eval_steps = test_examples / batch_size
print("Number of steps per training epoch: \n", int(steps_per_epoch))
print("Number of evaluation steps: \n", int(num_eval_steps))
# TO DO: Choose how many epochs to train for
epochs = 410 #@param {type:"slider", min:10, max:1000, step:100}
num_train_steps = int(epochs * steps_per_epoch)
num_eval_steps = int(num_eval_steps)
# TO DO: Choose paths for RCNN or SSD model
pipeline_config_path = "tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config" #@param ["tf_models/train_demo/rcnn/pretrained_model/pipeline_Chiroptera.config", "tf_models/train_demo/ssd/pretrained_model/pipeline_Chiroptera.config"]
model_dir = "tf_models/train_demo/rcnn/trained" #@param ["tf_models/train_demo/rcnn/trained", "tf_models/train_demo/ssd/trained"]
output_directory = "tf_models/train_demo/rcnn/finetuned_model" #@param ["tf_models/train_demo/rcnn/finetuned_model", "tf_models/train_demo/ssd/finetuned_model"]
trained_checkpoint_dir = "tf_models/train_demo/rcnn/trained" #@param ["tf_models/train_demo/rcnn/trained", "tf_models/train_demo/ssd/trained"] {allow-input: true}
# Save vars to environment for access with cmd line tools below
os.environ["trained_checkpoint_dir"] = "trained_checkpoint_dir"
os.environ["num_train_steps"] = "num_train_steps"
os.environ["num_eval_steps"] = "num_eval_steps"
os.environ["pipeline_config_path"] = "pipeline_config_path"
os.environ["model_dir"] = "model_dir"
os.environ["output_directory"] = "output_directory"
# Optional: Visualize training progress with Tensorboard
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Log training progress using TensorBoard
%tensorboard --logdir $model_dir
# Actual training
# Note: You can change the number of epochs in code block below and re-run to train longer
# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
matplotlib.use('Agg')
%cd $wd
!python tf_models/models/research/object_detection/model_main_tf2.py \
--alsologtostderr \
--num_train_steps=$num_train_steps \
--num_eval_steps=$num_eval_steps \
--pipeline_config_path=$pipeline_config_path \
--model_dir=$model_dir
# Export trained model
# Modified from https://github.com/RomRoc/objdet_train_tensorflow_colab/blob/master/objdet_custom_tf_colab.ipynb
%cd $wd
# Save the model
!python tf_models/models/research/object_detection/exporter_main_v2.py \
--input_type image_tensor \
--pipeline_config_path=$pipeline_config_path \
--trained_checkpoint_dir=$trained_checkpoint_dir \
--output_directory=$output_directory
# Evaluate trained model to get mAP and IoU stats for COCO 2017
# Change pipeline_config_path and checkpoint_dir when switching between SSD and Faster-RCNN models
matplotlib.use('Agg')
!python tf_models/models/research/object_detection/model_main_tf2.py \
--alsologtostderr \
--model_dir=$model_dir \
--pipeline_config_path=$pipeline_config_path \
--checkpoint_dir=$trained_checkpoint_dir
```
## **Yolov3 Algorithm**
```
import struct
import numpy as np
import pandas as pd
import os
from keras.layers import Conv2D
from keras.layers import Input
from keras.layers import BatchNormalization
from keras.layers import LeakyReLU
from keras.layers import ZeroPadding2D
from keras.layers import UpSampling2D
from keras.layers.merge import add, concatenate
from keras.models import Model
```
**Access Google Drive**
```
# Load the Drive helper and mount
from google.colab import drive
drive.mount('/content/drive')
```
**Residual Block**
A residual block computes $y = F(x) + x$: the block input $x$ is added back to the transformed output $F(x)$.
```
def _conv_block(inp, convs, skip=True):
x = inp
count = 0
for conv in convs:
if count == (len(convs) - 2) and skip:
skip_connection = x
count += 1
if conv['stride'] > 1: x = ZeroPadding2D(((1,0),(1,0)))(x) #padding as darknet prefer left and top
x = Conv2D(conv['filter'],
conv['kernel'],
strides=conv['stride'],
padding='valid' if conv['stride'] > 1 else 'same', # padding as darknet prefer left and top
name='conv_' + str(conv['layer_idx']),
use_bias=False if conv['bnorm'] else True)(x)
if conv['bnorm']: x = BatchNormalization(epsilon=0.001, name='bnorm_' + str(conv['layer_idx']))(x)
if conv['leaky']: x = LeakyReLU(alpha=0.1, name='leaky_' + str(conv['layer_idx']))(x)
return add([skip_connection, x]) if skip else x
```
**Create Yolov3 Architecture**
Three output layers: 82, 94, 106
```
def make_yolov3_model():
input_image = Input(shape=(None, None, 3))
# Layer 0 => 4
x = _conv_block(input_image, [{'filter': 32, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 0},
{'filter': 64, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 1},
{'filter': 32, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 2},
{'filter': 64, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 3}])
# Layer 5 => 8
x = _conv_block(x, [{'filter': 128, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 5},
{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 6},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 7}])
# Layer 9 => 11
x = _conv_block(x, [{'filter': 64, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 9},
{'filter': 128, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 10}])
# Layer 12 => 15
x = _conv_block(x, [{'filter': 256, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 12},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 13},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 14}])
# Layer 16 => 36
for i in range(7):
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 16+i*3},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 17+i*3}])
skip_36 = x
# Layer 37 => 40
x = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 37},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 38},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 39}])
# Layer 41 => 61
for i in range(7):
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 41+i*3},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 42+i*3}])
skip_61 = x
# Layer 62 => 65
x = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 2, 'bnorm': True, 'leaky': True, 'layer_idx': 62},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 63},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 64}])
# Layer 66 => 74
for i in range(3):
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 66+i*3},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 67+i*3}])
# Layer 75 => 79
x = _conv_block(x, [{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 75},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 76},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 77},
{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 78},
{'filter': 512, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 79}], skip=False)
# Layer 80 => 82
yolo_82 = _conv_block(x, [{'filter': 1024, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 80},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 81}], skip=False)
# Layer 83 => 86
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 84}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_61])
# Layer 87 => 91
x = _conv_block(x, [{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 87},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 88},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 89},
{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 90},
{'filter': 256, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 91}], skip=False)
# Layer 92 => 94
yolo_94 = _conv_block(x, [{'filter': 512, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 92},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 93}], skip=False)
# Layer 95 => 98
x = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 96}], skip=False)
x = UpSampling2D(2)(x)
x = concatenate([x, skip_36])
# Layer 99 => 106
yolo_106 = _conv_block(x, [{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 99},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 100},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 101},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 102},
{'filter': 128, 'kernel': 1, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 103},
{'filter': 256, 'kernel': 3, 'stride': 1, 'bnorm': True, 'leaky': True, 'layer_idx': 104},
{'filter': 54, 'kernel': 1, 'stride': 1, 'bnorm': False, 'leaky': False, 'layer_idx': 105}], skip=False)
model = Model(input_image, [yolo_82, yolo_94, yolo_106])
return model
```
**Read and Load the pre-trained model weight**
```
class WeightReader:
def __init__(self, weight_file):
with open(weight_file, 'rb') as w_f:
major, = struct.unpack('i', w_f.read(4))
minor, = struct.unpack('i', w_f.read(4))
revision, = struct.unpack('i', w_f.read(4))
if (major*10 + minor) >= 2 and major < 1000 and minor < 1000:
w_f.read(8)
else:
w_f.read(4)
transpose = (major > 1000) or (minor > 1000)
binary = w_f.read()
self.offset = 0
self.all_weights = np.frombuffer(binary, dtype='float32')
def read_bytes(self, size):
self.offset = self.offset + size
return self.all_weights[self.offset-size:self.offset]
def load_weights(self, model):
for i in range(106):
try:
conv_layer = model.get_layer('conv_' + str(i))
print("loading weights of convolution #" + str(i))
if i not in [81, 93, 105]:
norm_layer = model.get_layer('bnorm_' + str(i))
size = np.prod(norm_layer.get_weights()[0].shape)
beta = self.read_bytes(size) # bias
gamma = self.read_bytes(size) # scale
mean = self.read_bytes(size) # mean
var = self.read_bytes(size) # variance
weights = norm_layer.set_weights([gamma, beta, mean, var])
if len(conv_layer.get_weights()) > 1:
bias = self.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel, bias])
else:
kernel = self.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel])
except ValueError:
print("no convolution #" + str(i))
def reset(self):
self.offset = 0
```
**Define the model**
```
model = make_yolov3_model()
```
**Call class WeightReader to read the weight & load to the model**
```
weight_reader = WeightReader("/content/drive/MyDrive/yolo_custom_model_Training/backup/test_cfg_20000.weights")
weight_reader.load_weights(model)
```
**We will use a pre-trained model to perform object detection**
```
import numpy as np
from matplotlib import pyplot
from matplotlib.patches import Rectangle
from numpy import expand_dims
from keras.models import load_model
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
# define the expected input shape for the model
input_w, input_h = 416, 416
```
**Draw bounding box on the images**
```
class BoundBox:
def __init__(self, xmin, ymin, xmax, ymax, objness = None, classes = None):
self.xmin = xmin
self.ymin = ymin
self.xmax = xmax
self.ymax = ymax
self.objness = objness
self.classes = classes
self.label = -1
self.score = -1
def get_label(self):
if self.label == -1:
self.label = np.argmax(self.classes)
return self.label
def get_score(self):
if self.score == -1:
self.score = self.classes[self.get_label()]
return self.score
def _sigmoid(x):
return 1. / (1. + np.exp(-x))
def decode_netout(netout, anchors, obj_thresh, net_h, net_w):
grid_h, grid_w = netout.shape[:2] # 0 and 1 is row and column 13*13
nb_box = 3 # 3 anchor boxes
netout = netout.reshape((grid_h, grid_w, nb_box, -1)) #13*13*3 ,-1
nb_class = netout.shape[-1] - 5
boxes = []
netout[..., :2] = _sigmoid(netout[..., :2])
netout[..., 4:] = _sigmoid(netout[..., 4:])
netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:]
netout[..., 5:] *= netout[..., 5:] > obj_thresh
for i in range(grid_h*grid_w):
row = i / grid_w
col = i % grid_w
for b in range(nb_box):
# 4th element is objectness score
objectness = netout[int(row)][int(col)][b][4]
if(objectness.all() <= obj_thresh): continue
# first 4 elements are x, y, w, and h
x, y, w, h = netout[int(row)][int(col)][b][:4]
x = (col + x) / grid_w # center position, unit: image width
y = (row + y) / grid_h # center position, unit: image height
w = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width
h = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height
# last elements are class probabilities
classes = netout[int(row)][col][b][5:]
box = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes)
boxes.append(box)
return boxes
def correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w):
new_w, new_h = net_w, net_h
for i in range(len(boxes)):
x_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w
y_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h
boxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w)
boxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w)
boxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h)
boxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h)
```
**Intersection over Union - overlap between two bounding boxes (used for non-max suppression below)**
```
def _interval_overlap(interval_a, interval_b):
x1, x2 = interval_a
x3, x4 = interval_b
if x3 < x1:
if x4 < x1:
return 0
else:
return min(x2,x4) - x1
else:
if x2 < x3:
return 0
else:
return min(x2,x4) - x3
#intersection over union
def bbox_iou(box1, box2):
intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax])
intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax])
intersect = intersect_w * intersect_h
w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin
w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin
#Union(A,B) = A + B - Inter(A,B)
union = w1*h1 + w2*h2 - intersect
return float(intersect) / union
```
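A quick illustration of `bbox_iou` on two made-up boxes (the coordinates are hypothetical, purely for the example):
```
# Example: IoU of two overlapping boxes (hypothetical coordinates)
box_a = BoundBox(0, 0, 10, 10)
box_b = BoundBox(5, 5, 15, 15)
print(bbox_iou(box_a, box_b))  # intersection 25, union 175 -> ~0.143
```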
**Non Max Suppression - Only choose the high probability bounding boxes**
```
#boxes from correct_yolo_boxes and decode_netout
def do_nms(boxes, nms_thresh):
if len(boxes) > 0:
nb_class = len(boxes[0].classes)
else:
return
for c in range(nb_class):
sorted_indices = np.argsort([-box.classes[c] for box in boxes])
for i in range(len(sorted_indices)):
index_i = sorted_indices[i]
if boxes[index_i].classes[c] == 0: continue
for j in range(i+1, len(sorted_indices)):
index_j = sorted_indices[j]
if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh:
boxes[index_j].classes[c] = 0
```
**Load and Prepare images**
```
def load_image_pixels(filename, shape):
# load the image to get its shape
image = load_img(filename) #load_img() Keras function to load the image .
width, height = image.size
# load the image with the required size
image = load_img(filename, target_size=shape) # target_size argument to resize the image after loading
# convert to numpy array
image = img_to_array(image)
# scale pixel values to [0, 1]
image = image.astype('float32')
image /= 255.0 #rescale the pixel values from 0-255 to 0-1 32-bit floating point values.
# add a dimension so that we have one sample
image = expand_dims(image, 0)
return image, width, height
```
**Save all of the boxes above the threshold**
```
def get_boxes(boxes, labels, thresh):
v_boxes, v_labels, v_scores = list(), list(), list()
# enumerate all boxes
for box in boxes:
# enumerate all possible labels
for i in range(len(labels)):
# check if the threshold for this label is high enough
if box.classes[i] > thresh:
v_boxes.append(box)
v_labels.append(labels[i])
v_scores.append(box.classes[i]*100)
return v_boxes, v_labels, v_scores
```
**Draw all the boxes based on the information from the previous step**
```
def draw_boxes(filename, v_boxes, v_labels, v_scores):
# load the image
data = pyplot.imread(filename)
# plot the image
pyplot.imshow(data)
# get the context for drawing boxes
ax = pyplot.gca()
# plot each box
for i in range(len(v_boxes)):
#by retrieving the coordinates from each bounding box and creating a Rectangle object.
box = v_boxes[i]
# get coordinates
y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax
# calculate width and height of the box
width, height = x2 - x1, y2 - y1
# create the shape
rect = Rectangle((x1, y1), width, height, fill=False, color='white')
# draw the box
ax.add_patch(rect)
# draw text and score in top left corner
label = "%s (%.3f)" % (v_labels[i], v_scores[i])
pyplot.text(x1, y1, label, color='white')
# show the plot
pyplot.show()
```
### **Detection**
```
%cd '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'
input_w, input_h = 416, 416
anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]]
class_threshold = 0.15
pred_right = 0
labels = ['clear_plastic_bottle','plastic_bottle_cap','drink_can','plastic_straw','paper_straw',
'disposable_plastic_cup','styrofoam_piece','glass_bottle','pop_tab','paper_bag','plastic_utensils',
'normal_paper','plastic_lid']
filepath = '/content/drive/MyDrive/yolo_custom_model_Training/custom_data/'
for im in os.listdir(filepath):
image, image_w, image_h = load_image_pixels(im, (input_w, input_h))
yhat = model.predict(image)
boxes = list()
for i in range(len(yhat)):
boxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w)
correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w)
do_nms(boxes, 0.1)
v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold)
if len(v_labels)!=0:
image_name, useless = im.split('.')
if image_name[:-3] == v_labels[0]:
pred_right +=1
accuracy = '{:.2%}'.format(pred_right/130)
print("the detection accuracy is " + accuracy)
pred_right
```
# The Perceptron
```
import mxnet as mx
from mxnet import nd, autograd
import matplotlib.pyplot as plt
import numpy as np
mx.random.seed(1)
```
## A Separable Classification Problem
```
# generate fake data that is linearly separable with a margin epsilon given the data
def getfake(samples, dimensions, epsilon):
wfake = nd.random_normal(shape=(dimensions)) # fake weight vector for separation
bfake = nd.random_normal(shape=(1)) # fake bias
wfake = wfake / nd.norm(wfake) # rescale to unit length
    # making some linearly separable data, simply by choosing the labels accordingly
X = nd.zeros(shape=(samples, dimensions))
Y = nd.zeros(shape=(samples))
i = 0
while (i < samples):
tmp = nd.random_normal(shape=(1,dimensions))
margin = nd.dot(tmp, wfake) + bfake
if (nd.norm(tmp).asscalar() < 3) & (abs(margin.asscalar()) > epsilon):
X[i,:] = tmp[0]
Y[i] = 1 if margin.asscalar() > 0 else -1
i += 1
return X, Y
# plot the data with colors chosen according to the labels
def plotdata(X,Y):
for (x,y) in zip(X,Y):
if (y.asscalar() == 1):
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='r')
else:
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='b')
# plot contour plots on a [-3,3] x [-3,3] grid
def plotscore(w,d):
xgrid = np.arange(-3, 3, 0.02)
ygrid = np.arange(-3, 3, 0.02)
xx, yy = np.meshgrid(xgrid, ygrid)
zz = nd.zeros(shape=(xgrid.size, ygrid.size, 2))
zz[:,:,0] = nd.array(xx)
zz[:,:,1] = nd.array(yy)
vv = nd.dot(zz,w) + d
CS = plt.contour(xgrid,ygrid,vv.asnumpy())
plt.clabel(CS, inline=1, fontsize=10)
X, Y = getfake(50, 2, 0.3)
plotdata(X,Y)
plt.show()
```
## Perceptron Implementation
```
def perceptron(w,b,x,y):
if (y * (nd.dot(w,x) + b)).asscalar() <= 0:
w += y * x
b += y
return 1
else:
return 0
w = nd.zeros(shape=(2))
b = nd.zeros(shape=(1))
for (x,y) in zip(X,Y):
res = perceptron(w,b,x,y)
if (res == 1):
print('Encountered an error and updated parameters')
print('data {}, label {}'.format(x.asnumpy(),y.asscalar()))
print('weight {}, bias {}'.format(w.asnumpy(),b.asscalar()))
plotscore(w,b)
plotdata(X,Y)
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='g')
plt.show()
```
## Perceptron Convergence in Action
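Before running the experiment, recall the classical perceptron convergence bound (stated here without proof): if every example satisfies $\|x\| \le R$ and the data are separable with margin $\epsilon$, the number of updates is at most on the order of
$$\frac{R^2}{\epsilon^2}$$
Since `getfake` keeps $\|x\| < 3$ and enforces a margin of at least $\epsilon$, we expect the average number of updates to drop quickly as $\epsilon$ grows.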
```
Eps = np.arange(0.025, 0.45, 0.025)
Err = np.zeros(shape=(Eps.size))
for j in range(10):
    for (i, epsilon) in enumerate(Eps):
        X, Y = getfake(1000, 2, epsilon)
        # reset to a fresh perceptron for each dataset so update counts are comparable across margins
        w = nd.zeros(shape=(2))
        b = nd.zeros(shape=(1))
        for (x, y) in zip(X, Y):
            Err[i] += perceptron(w, b, x, y)
Err = Err / 10.0
plt.plot(Eps, Err, label='average number of updates for training')
plt.legend()
plt.show()
```
<a href="https://colab.research.google.com/github/Serbeld/RX-COVID-19/blob/master/Detection5C_NormNew_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install lime
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import inception_v3
from tensorflow.keras.layers import Dense,Dropout,Flatten,Input,AveragePooling2D,BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import cv2
import os
import lime
from lime import lime_image
from skimage.segmentation import mark_boundaries
import pandas as pd
plt.rcParams["figure.figsize"] = (10,5)
#Loading the dataset
!pip install h5py
import h5py
from google.colab import drive,files
drive.mount('/content/drive')
hdf5_path = '/content/drive/My Drive/Dataset5C/Dataset5C.hdf5'
dataset = h5py.File(hdf5_path, "r")
import numpy as np
import matplotlib.pylab as plt
#train
train_img = dataset["train_img"]
xt = np.array(train_img)
yt = np.array(dataset["train_labels"])
#test
testX = np.array(dataset["test_img"])
testY = np.array(dataset["test_labels"])
#Validation
xval = np.array(dataset["val_img"])
yval = np.array(dataset["val_labels"])
print("Training Shape: "+ str(xt.shape))
print("Validation Shape: "+ str(xval.shape))
print("Testing Shape: "+ str(testX.shape))
#Categorical values or OneHot
import keras
num_classes = 5
yt = keras.utils.to_categorical(yt,num_classes)
testY = keras.utils.to_categorical(testY,num_classes)
yval = keras.utils.to_categorical(yval,num_classes)
#Image
num_image = 15
print()
print('Healthy: [1 0 0 0 0]')
print('Pneumonia & Covid-19: [0 1 0 0 0]')
print('Cardiomegaly: [0 0 1 0 0]')
print('Other respiratory disease: [0 0 0 1 0]')
print('Pleural Effusion: [0 0 0 0 1]')
print()
print("Output: "+ str(yt[num_image]))
imagen = train_img[num_image]
plt.imshow(imagen)
plt.show()
## global params
INIT_LR = 1e-5 # learning rate
EPOCHS = 10 # training epochs
BS = 4 # batch size
## build network
from tensorflow.keras.models import load_model
#Inputs
inputs = Input(shape=(512, 512, 3), name='images')
inputs2 = BatchNormalization()(inputs)
#Inception Model
output1 = inception_v3.InceptionV3(include_top=False,weights= "imagenet",
input_shape=(512, 512, 3),
classes = 5)(inputs2)
#AveragePooling2D
output = AveragePooling2D(pool_size=(2, 2), strides=None,
padding='valid',name='AvgPooling')(output1)
#Flattened
output = Flatten(name='Flatten')(output)
#Dropout
output = Dropout(0.2,name='Dropout')(output)
#ReLU layer
output = Dense(10, activation = 'relu',name='ReLU')(output)
#Dense layer
output = Dense(5, activation='softmax',name='softmax')(output)
# the actual model to train
model = Model(inputs=inputs, outputs=output)
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
model.summary()
from tensorflow.keras.callbacks import ModelCheckpoint
model_checkpoint = ModelCheckpoint(filepath="/content/drive/My Drive/Dataset5C/Model",
monitor='val_loss', save_best_only=True)
## train
print("[INFO] training head...")
H = model.fit({'images': xt},
{'softmax': yt},
batch_size = BS,
epochs = EPOCHS,
validation_data=(xval, yval),
callbacks=[model_checkpoint],
shuffle=True)
#Load the best model trained
model = load_model("/content/drive/My Drive/Dataset5C/Model")
## eval
print("[INFO] evaluating network...")
print()
print("Loss: "+ str(round(model.evaluate(testX,testY,verbose=0)[0],2))+ " Acc: "+ str(round(model.evaluate(testX,testY,verbose=1)[1],2)))
print()
predIdxs = model.predict(testX)
predIdxs = np.argmax(predIdxs, axis=1) # argmax for the predicted probability
#print(classification_report(testY.argmax(axis=1), predIdxs,target_names=lb.classes_))
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
#print(total) #60
acc = (cm[0, 0] + cm[1, 1] + cm[2, 2] + cm[3,3]+ cm[4,4]) / total
#sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
#specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
#print("sensitivity: {:.4f}".format(sensitivity))
#print("specificity: {:.4f}".format(specificity))
## explain
N = EPOCHS
plt.style.use("ggplot")
plt.figure(1)
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Precision of COVID-19 detection.")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
#plt.axis([0, EPOCHS, 0.3, 0.9])
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_cero_plot_Inception_2nd_time.png")
plt.show()
import cv2
plt.figure(2)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Normal"+str(ind)+".png")
plt.show()
plt.figure(3)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
plt.imshow((mask +imagen)/255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Light"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=3, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((50,50),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
end = cv2.addWeighted((imagen/255), 0.7, mask/255, 0.3, 0)
plt.imshow((end))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_purple"+str(ind)+".png")
plt.show()
plt.figure(4)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=2, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
plt.imshow((end))
cv2.imwrite("/content/drive/My Drive/Maps/Heat_map"+str(ind)+".png",end*255)
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map"+str(ind)+".png")
plt.show()
plt.figure(5)
for ind in range(1):
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(testX[-ind], model.predict,
hide_color=0, num_samples=42)
print("> label:", testY[ind].argmax(), "- predicted:", predIdxs[ind])
temp, mask = explanation.get_image_and_mask(
explanation.top_labels[0], positive_only=True, num_features=1, hide_rest=True)
mask = np.array(mark_boundaries(temp/2 +1, mask))
#print(mask.shape)
imagen = testX[ind]
imagen[:,:,0] = imagen[:,:,2]
imagen[:,:,1] = imagen[:,:,2]
mask[:,:,0] = mask[:,:,2]
mask[:,:,1] = mask[:,:,2]
kernel = np.ones((30,30),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 2)
mask = cv2.blur(mask,(30,30))
mask = cv2.blur(mask,(30,30))
mask = np.array(mask, dtype=np.uint8)
mask = cv2.medianBlur(mask,5)
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
mask2 = cv2.applyColorMap((mask), cv2.COLORMAP_JET) #heatmap
mask = cv2.blur(mask,(60,60))
mask = cv2.applyColorMap(mask, cv2.COLORMAP_HOT) #heatmap
mask = ((mask*1.1 + mask2*0.7)/255)*(3/2)
end = cv2.addWeighted(imagen/255, 0.8, mask2/255, 0.3, 0)
#end = cv2.addWeighted(end, 0.8, mask/255, 0.2, 0)
deep = np.reshape(end,newshape=(512,512,3),order='C')
CHANNEL1=deep[:,:,2]
CHANNEL2=deep[:,:,0]
deep[:,:,0] = CHANNEL1
#deep[:,:,2] = CHANNEL2
plt.imshow((deep))
plt.savefig("/content/drive/My Drive/Dataset5C/Model/trained_pulmons_inception_Heat_map_ma"+str(ind)+".png")
plt.show()
```
# RidgeRegression with Scale & Power Transformer
This code template performs regression analysis using Ridge Regression combined with the feature rescaling technique `scale` and the feature transformation technique `PowerTransformer` in a pipeline. Ridge Regression is also known as Tikhonov regularization.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline,make_pipeline
from sklearn.preprocessing import scale,PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values if any exist (using the mean for numeric columns and the mode for categorical ones) and one-hot encode string columns.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Rescaling
<Code>scale</Code> standardizes a dataset along any axis. It standardizes features by removing the mean and scaling to unit variance.
scale is similar to <Code>StandardScaler</Code> in terms of feature transformation, but unlike StandardScaler it lacks the Transformer API, i.e., it does not have <Code>fit_transform</Code>, <Code>transform</Code> and other related methods.
```
x_train =scale(x_train)
x_test = scale(x_test)
```
### Feature Transformation
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
##### For more information on PowerTransformer [ click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html)
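As a small illustration on toy data (not the dataset loaded above), `PowerTransformer` with its default Yeo-Johnson method reshapes a skewed feature towards a more Gaussian-like distribution:
```
# Toy example: Yeo-Johnson transform of a right-skewed feature
toy = np.random.exponential(scale=2.0, size=(1000, 1))
pt = PowerTransformer(method='yeo-johnson', standardize=True)
toy_t = pt.fit_transform(toy)
print("skewness before/after:", pd.Series(toy[:, 0]).skew(), pd.Series(toy_t[:, 0]).skew())
```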
### Model
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\begin{equation*}
\min_{w} || X w - y||_2^2 + \alpha ||w||_2^2
\end{equation*}
The complexity parameter $\alpha$ controls the amount of shrinkage: the larger the value of $\alpha$, the greater the amount of shrinkage, and thus the coefficients become more robust to collinearity.
This model solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
#### Model Tuning Parameters
> **alpha** -> Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization.
> **solver** -> Solver to use in the computational routines {‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}
```
model=make_pipeline(PowerTransformer(), Ridge(random_state=123))
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
y_pred=model.predict(x_test)
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination R², i.e., the proportion of the variance in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute difference between the actual and the predicted values.
> **mse**: The **mean squared error** function calculates the average squared difference between the actual and the predicted values, penalizing the model more heavily for large errors.
```
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual target values of the first 20 test records against their record number.
We then overlay the model's predictions for the same records so the predicted and true values can be compared.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
# Understanding Principal Component Analysis
**Outline**
* [Introduction](#intro)
* [Assumption and derivation](#derive)
* [PCA Example](#example)
* [PCA Usage](#usage)
```
%load_ext watermark
%matplotlib inline
# %config InlineBackend.figure_format='retina'
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import math
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
%watermark -a 'Johnny' -d -t -v -p numpy,pandas,matplotlib,sklearn
```
---
## <a id="intro">Introduction</a>
When we have two features that are highly correlated with each other, we may not want to include both of them in our model. In [Lasso and Ridge regression](http://nbviewer.jupyter.org/github/johnnychiuchiu/Machine-Learning/blob/master/LinearRegression/linearRegressionModelBuilding.ipynb#ridge), what they do is fit a model with all the predictors but add a penalty term, either the L1 or L2 norm of the regression coefficients, which shrinks the coefficient estimates towards zero. In other words, they try to pick some predictors out of all the predictors in order to reduce the dimension of our column space.
Principal Component Analysis (PCA) is another type of dimension reduction method. What PCA is all about is **finding the directions of maximum variance in high-dimensional data and projecting it onto a smaller dimensional subspace while retaining most of the information.** The main idea and motivation is that each of the $n$ observations lives in $p$-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible. The concept of *interesting* is measured by the amount that the observations vary along each dimension.
Note that PCA is just a linear transformation method. Compared to the original space, it projects our high-dimensional data onto a new set of directions, each chosen to capture the maximum remaining variance. In other words, the orthogonality of principal components implies that PCA finds the most uncorrelated components to explain as much variation in the data as possible. We can then pick the number of directions, i.e. components, we want to keep while retaining most of the information in the original data. The direction of the highest variance is called the first principal component, the second highest is called the second principal component, and so on.
In PCA, we will find that the first principal component is obtained by doing an eigendecomposition of the covariance matrix of X, and the eigenvector with the largest eigenvalue is our first principal component, in the sense that every vector in the span of this eigenvector is stretched out by the largest amount, since eigenvalues are the factors by which the eigenvectors stretch or squish during the transformation. Therefore, we can sort the top k components by the eigenvalues found from the eigendecomposition of the covariance matrix of X.
**Application of PCA**
* We can use PCA as a tool for data visualization. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.
* We can use principal components as predictors in a regression model in place of the original larger set of variables.
---
## <a id="derive">Assumption and derivation</a>
**Assumption** for PCA before we derive the whole process are
* Since we are only interested in variance, we assume that each of the variables in $X$ has been and should be centered to have mean zero, i.e. the column means of $X$ are zero.
**Method Derivation**
Assume we have n observations and a set of features $X1, X2, X3, \dots, Xp$. In other words, we have
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,p}
\end{pmatrix}
where
\begin{equation*}
X1 = \begin{bmatrix}
x_{1,1} \\
x_{2,1} \\
\vdots \\
x_{n,1}
\end{bmatrix}
\end{equation*}
PCA will try to find a low dimensional representation of a dataset that contains as much as possible of the variance. The idea is that each of the n observations lives in p-dimensional space, but not all of these dimensions are equally interesting. PCA seeks a small number of dimensions that are as interesting as possible. Let see how these dimensions, or *principal component* are found.
Given $n \times p$ data set $X$, how do we compute the first principal component? We look for the linear combination of the sample feature values of the form
$$z_{i,1} = \phi_{1,1}x_{i,1}+\phi_{2,1}x_{i,2}+\dots+\phi_{p,1}x_{i,p}$$
where
$1 \le i \le n$ and $\phi_1$ denotes the first principal component loading vector, which is
\begin{equation*}
\phi_1=\begin{pmatrix}
\phi_{1,1} \\
\phi_{2,1} \\
\vdots \\
\phi_{p,1}
\end{pmatrix}
\end{equation*}
We'll have n values of $z_1$, and we want to look for the linear combination that has the largest sample variance. More formally,
\begin{equation*}
Z_1
=
\begin{pmatrix}
z_{1,1} \\
z_{2,1} \\
\vdots \\
z_{n,1}
\end{pmatrix}
=
\begin{pmatrix}
\phi_{1,1}x_{1,1} + \phi_{2,1}x_{1,2} + \cdots + \phi_{p,1}x_{1,p} \\
\phi_{1,1}x_{2,1} + \phi_{2,1}x_{2,2} + \cdots + \phi_{p,1}x_{2,p} \\
\vdots \\
\phi_{1,1}x_{n,1} + \phi_{2,1}x_{n,2} + \cdots + \phi_{p,1}x_{n,p}
\end{pmatrix}
=
\begin{pmatrix}
\phi_{1,1}
\phi_{2,1}
\dots
\phi_{p,1}
\end{pmatrix}
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n,1} & x_{n,2} & \cdots & x_{n,p}
\end{pmatrix}
=
\phi_{1,1}X_{1}+\phi_{2,1}X_{2}+\dots+\phi_{p,1}X_{p}
=
\phi_1^T X
\end{equation*}
We assume that each of the variables in $X$ has been centered to have mean zero, i.e., the column means of $X$ are zero. Therefore, $E(X_i)=0$ for i in 1,...p. It's obvious to know that $E(Z_1)=E(\phi_{1,1}X_{1}+\phi_{2,1}X_{2}+\dots+\phi_{p,1}X_{p}) = 0$
Therefore, the variance of $Z_1$ is
$$Var(Z_1) = E\Big[[Z_1-E(Z_1)][Z_1-E(Z_1)]^T\Big] = E\Big[Z_1 Z_1^T \Big] = E\Big[(\phi_1^T X) (\phi_1^T X)^T \Big] = E\Big[\phi_1^T X X^T \phi_1\Big] = \phi_1^T E[X X^T] \phi_1$$
We also know that the [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) of X is
$$C = Cov(X) = E\Big[[X-E(X)][X-E(X)]^T\Big] = E[X X^T]$$
Hence, the $Var(Z_1)= \phi_1^T E[X X^T] \phi_1 = \phi_1^T C \phi_1$
Apart from finding the largest sample variance, we also constrain the loadings so that their sum of squares is equal to one, since otherwise setting these elements to be arbitrarily large in absolute value could result in an arbitrarily large variance. More formally,
$$\sum_{j=1}^{p}\phi_{j1}^2=1$$
In other words, the first principal component loading vector solves the optimization problem
$$\text{maximize}_\phi \quad \phi^TC\phi$$
$$\text{subject to} \sum_{j=1}^{p}\phi_{j1}^2 = \phi_1^T \phi_1 =1$$
This constrained optimization problem can be solved with a Lagrange multiplier, by forming the Lagrangian:
$$L = \phi_1^T C\phi_1 - \lambda(\phi_1^T \phi_1-1)$$
Next, to solve for $\phi_1$, we set the partial derivative of $L$ with respect to $\phi_1$ to 0.
$$\frac{\partial L}{\partial \phi_1} = 2C\phi_1 - 2\lambda \phi_1 = 0 $$
$$ C\phi_1 = \lambda \phi_1 $$
Surprisingly, we see that it is actually an eigendecomposition problem. To refresh our memory a little bit, here is a very good [youtube video](https://www.youtube.com/watch?v=PFDu9oVAE-g&index=14&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) explaining what eigenvalues and eigenvectors are in a very geometrical way.
Therefore, from the equation above, $\phi_1$ must be an eigenvector of $C$; and since $Var(Z_1) = \phi_1^T C \phi_1 = \lambda \phi_1^T \phi_1 = \lambda$, we pick the eigenvector associated with the largest eigenvalue.
Also, most data can’t be well-described by a single principal component. Typically, we compute multiple principal components by computing all eigenvectors of the covariance matrix of $X$ and ranking them by their eigenvalues. After sorting the eigenpairs, the next question is “how many principal components are we going to choose for our new feature subspace?” A useful measure is the so-called “explained variance,” which can be calculated from the eigenvalues. The explained variance tells us how much information (variance) can be attributed to each of the principal components.
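Concretely, if $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p$ are the sorted eigenvalues of $C$, the explained variance ratio of the $k$-th principal component is
$$\frac{\lambda_k}{\sum_{j=1}^{p}\lambda_j}$$
and the cumulative sum of these ratios tells us how much of the total variance the first $k$ components retain.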
To sum up, here are the **steps that we take to perform a PCA analysis**
1. Standardize the data.
2. Obtain the Eigenvectors and Eigenvalues from the covariance matrix (technically the correlation matrix after performing the standardization).
3. Sort eigenvalues in descending order and choose the k eigenvectors that correspond to the k largest eigenvalues where k is the number of dimensions of the new feature subspace.
4. Projection onto the new feature space. During this step we will take the top k eigenvectors and use it to transform the original dataset X to obtain a k-dimensional feature subspace X′.
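The short sketch below walks through these four steps with plain NumPy on placeholder data (the `X_demo` matrix here is random, just to show the mechanics); it mirrors what `sklearn.decomposition.PCA` will do for us below.
```
# Minimal NumPy version of the four steps above (X_demo is any n x p data matrix)
X_demo = np.random.randn(100, 4)                          # placeholder data
X_demo_std = (X_demo - X_demo.mean(axis=0)) / X_demo.std(axis=0)  # 1. standardize
C_demo = np.cov(X_demo_std, rowvar=False)                 # 2. covariance matrix
eig_vals, eig_vecs = np.linalg.eigh(C_demo)               #    eigendecomposition (C is symmetric)
order = np.argsort(eig_vals)[::-1]                        # 3. sort eigenvalues in descending order
k = 2
W = eig_vecs[:, order[:k]]                                #    top-k eigenvectors form the projection matrix
X_proj = X_demo_std @ W                                   # 4. project onto the new k-dimensional subspace
print(X_proj.shape)                                       # (100, 2)
```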
---
## <a id="process">PCA Analysis Example</a>
Let's use the classical IRIS data to illustrate the topics that we just covered, including
* What are the explained variance of each component? How many component should we pick?
* How will the scatter plot be if we plot in the dimension of first and second component?
```
# Read Data
df = pd.read_csv(
filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep=',')
df.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
df.dropna(how="all", inplace=True) # drops the empty line at file-end
df.tail()
# split data table into data X and class labels y
X = df.iloc[:,0:4].values
y = df.iloc[:,4].values
```
**EDA**
To get a feeling for how the 3 different flower classes are distributed along the 4 different features, let us visualize them via histograms.
```
def plot_iris():
label_dict = {1: 'Iris-Setosa',
2: 'Iris-Versicolor',
3: 'Iris-Virgnica'}
feature_dict = {0: 'sepal length [cm]',
1: 'sepal width [cm]',
2: 'petal length [cm]',
3: 'petal width [cm]'}
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(8, 6))
for cnt in range(4):
plt.subplot(2, 2, cnt+1)
for lab in ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'):
plt.hist(X[y==lab, cnt],
label=lab,
bins=10,
alpha=0.3,)
plt.xlabel(feature_dict[cnt])
plt.legend(loc='upper right', fancybox=True, fontsize=8)
plt.tight_layout()
plt.show()
plot_iris()
```
## Process
### 1. Standardize the data
```
# create a StandardScaler object
scaler = StandardScaler()
# fit and then transform to get the standardized dataset
scaler.fit(X)
X_std = scaler.transform(X)
```
### 2. Do eigendecomposition and sort eigenvalues in descending order
```
# n_components: Number of components to keep
# if n_components is not set all components are kept
my_pca = PCA(n_components=None)
my_pca.fit(X_std)
def plot_var_explained(var_exp, figsize=(6,4)):
"""variance explained per component plot"""
    # get cumulative variance explained
cum_var_exp = np.cumsum(var_exp)
# plot
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=figsize)
plt.bar(range(len(var_exp)), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(len(var_exp)), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
var_exp = my_pca.explained_variance_ratio_
plot_var_explained(var_exp, figsize=(6,4))
# plot a simpler version of the bar chart
pd.DataFrame(my_pca.explained_variance_ratio_).plot.bar()
```
The plot above clearly shows that most of the variance (72.77% of the variance to be precise) can be explained by the first principal component alone. The second principal component still bears some information (23.03%) while the third and fourth principal components can safely be dropped without losing too much information. Together, the first two principal components contain 95.8% of the information.
### 3. Check the scores within each principal component
```
PC_df = pd.DataFrame(my_pca.components_,columns=df.iloc[:,0:4].columns).transpose()
PC_df
import seaborn as sns
plt.figure(figsize=None) #(4,4)
sns.heatmap(PC_df,cmap="RdBu_r",annot=PC_df.values, linewidths=1, center=0)
```
From the above heatmap & table, we can see that the first component consists of all 4 features, with a smaller weight on sepal_wid.
### 4. Projection onto the new feature space
During this step we will take the top k eigenvectors and use them to transform the original dataset X to obtain a k-dimensional feature subspace X′.
```
sklearn_pca = PCA(n_components=2)
Y_sklearn = sklearn_pca.fit_transform(X_std)
Y_sklearn[1:10]
```
Each row in the array above shows the projected values of one observation onto the first two principal components. If we want to fit a model using the data projected onto the first 2 principal components, then `Y_sklearn` is the data we want to use.
## <a id="usage">PCA Usage</a>
### Data Visualization
We can use PCA as a tool for data visualization. For instance, if we can obtain a two-dimensional representation of the data that captures most of the information, then we can plot the observations in this low-dimensional space.
Let's see what this looks like for the IRIS data when we plot it in the first two principal components.
```
with plt.style.context('seaborn-whitegrid'):
plt.figure(figsize=(6, 4))
for lab, col in zip(('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'),
('blue', 'red', 'green')):
print(lab)
print(col)
plt.scatter(Y_sklearn[y==lab, 0],
Y_sklearn[y==lab, 1],
label=lab,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='lower center')
plt.tight_layout()
plt.show()
```
### Principal Component Regression
We can use principal components as predictors in a regression model in place of the original larger set of variables.
Let's compare the result of logistic regression using all the features with the one using only the first two components.
```
# the code is copied from Ethen's PCA blog post, which is listed in the reference.
# split 30% of the iris data into a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.3, random_state = 1)
# create the pipeline, where we'll
# standardize the data, perform PCA and
# fit the logistic regression
pipeline1 = Pipeline([
('standardize', StandardScaler()),
('pca', PCA(n_components = 2)),
('logistic', LogisticRegression(random_state = 1))
])
pipeline1.fit(X_train, y_train)
y_pred1 = pipeline1.predict(X_test)
# pipeline without PCA
pipeline2 = Pipeline([
('standardize', StandardScaler()),
('logistic', LogisticRegression(random_state = 1))
])
pipeline2.fit(X_train, y_train)
y_pred2 = pipeline2.predict(X_test)
# assess the prediction accuracy
print('PCA Accuracy %.3f' % accuracy_score(y_test, y_pred1))
print('Accuracy %.3f' % accuracy_score(y_test, y_pred2))
```
We see that by using only the first two components, the accuracy drops by just 0.022, which is about 2-3% relative to the original accuracy. In fact, by using the first three principal components, we can get the same accuracy as the original model with all the features.
### Reference
* [PCA in 3 steps](http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html)
* [Everything you did and didn't know about PCA
](http://alexhwilliams.info/itsneuronalblog/2016/03/27/pca/)
* [Ethen: Principal Component Analysis (PCA) from scratch](http://nbviewer.jupyter.org/github/ethen8181/machine-learning/blob/master/dim_reduct/PCA.ipynb)
* [Wiki: Matrix Multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication)
* [Sklearn: Pipelining: chaining a PCA and a logistic regression](http://scikit-learn.org/stable/auto_examples/plot_digits_pipe.html#sphx-glr-auto-examples-plot-digits-pipe-py)
# Chatbot using Seq2Seq LSTM models
In this notebook, we will assemble a seq2seq LSTM model using Keras Functional API to create a working Chatbot which would answer questions asked to it.
Chatbots have become applications themselves. You can choose a field or domain and gather data regarding various questions. For example, we could build a chatbot for an e-commerce website or a school website where parents could get information about the school.
Messaging platforms like Allo have implemented chatbot services to engage users. The famous [Google Assistant](https://assistant.google.com/), [Siri](https://www.apple.com/in/siri/), [Cortana](https://www.microsoft.com/en-in/windows/cortana) and [Alexa](https://www.alexa.com/) may have been built using similar models.
So, let's start building our Chatbot.
## 1) Importing the packages
We will import [TensorFlow](https://www.tensorflow.org) and our beloved [Keras](https://www.tensorflow.org/guide/keras). Also, we import other modules which help in defining model layers.
```
import numpy as np
import tensorflow as tf
import pickle
from tensorflow.keras import layers , activations , models , preprocessing
```
## 2) Preprocessing the data
### A) Download the data
The dataset hails from [chatterbot/english on Kaggle](https://www.kaggle.com/kausr25/chatterbotenglish) by [kausr25](https://www.kaggle.com/kausr25). It contains pairs of questions and answers based on a number of subjects like food, history, AI etc.
The raw data could be found from this repo -> https://github.com/shubham0204/Dataset_Archives
```
!wget https://github.com/shubham0204/Dataset_Archives/blob/master/chatbot_nlp.zip?raw=true -O chatbot_nlp.zip
!unzip chatbot_nlp.zip
```
### B) Reading the data from the files
We parse each of the `.yaml` files.
* Concatenate two or more sentences if the answer has two or more of them.
* Remove unwanted data types which are produced while parsing the data.
* Append `<START>` and `<END>` to all the `answers`.
* Create a `Tokenizer` and load the whole vocabulary ( `questions` + `answers` ) into it.
```
from tensorflow.keras import preprocessing , utils
import os
import yaml
dir_path = 'chatbot_nlp/data'
files_list = os.listdir(dir_path + os.sep)
questions = list()
answers = list()
for filepath in files_list:
stream = open( dir_path + os.sep + filepath , 'rb')
docs = yaml.safe_load(stream)
conversations = docs['conversations']
for con in conversations:
if len( con ) > 2 :
questions.append(con[0])
replies = con[ 1 : ]
ans = ''
for rep in replies:
ans += ' ' + rep
answers.append( ans )
elif len( con )> 1:
questions.append(con[0])
answers.append(con[1])
# keep only ( question , answer ) pairs whose parsed answer is a string,
# so that questions and answers stay aligned
filtered_questions = list()
answers_with_tags = list()
for question , answer in zip( questions , answers ):
    if type( answer ) == str:
        filtered_questions.append( question )
        answers_with_tags.append( answer )
questions = filtered_questions
answers = list()
for i in range( len( answers_with_tags ) ) :
answers.append( '<START> ' + answers_with_tags[i] + ' <END>' )
tokenizer = preprocessing.text.Tokenizer()
tokenizer.fit_on_texts( questions + answers )
VOCAB_SIZE = len( tokenizer.word_index )+1
print( 'VOCAB SIZE : {}'.format( VOCAB_SIZE ))
```
### C) Preparing data for Seq2Seq model
Our model requires three arrays namely `encoder_input_data`, `decoder_input_data` and `decoder_output_data`.
For `encoder_input_data` :
* Tokenize the `questions`. Pad them to their maximum length.
For `decoder_input_data` :
* Tokenize the `answers`. Pad them to their maximum length.
For `decoder_output_data` :
* Tokenize the `answers`. Remove the first element from all the `tokenized_answers`. This is the `<START>` element which we added earlier.
```
from gensim.models import Word2Vec
import re
vocab = []
for word in tokenizer.word_index:
vocab.append( word )
def tokenize( sentences ):
tokens_list = []
vocabulary = []
for sentence in sentences:
sentence = sentence.lower()
sentence = re.sub( '[^a-zA-Z]', ' ', sentence )
tokens = sentence.split()
vocabulary += tokens
tokens_list.append( tokens )
return tokens_list , vocabulary
#p = tokenize( questions + answers )
#model = Word2Vec( p[ 0 ] )
#embedding_matrix = np.zeros( ( VOCAB_SIZE , 100 ) )
#for i in range( len( tokenizer.word_index ) ):
#embedding_matrix[ i ] = model[ vocab[i] ]
# encoder_input_data
tokenized_questions = tokenizer.texts_to_sequences( questions )
maxlen_questions = max( [ len(x) for x in tokenized_questions ] )
padded_questions = preprocessing.sequence.pad_sequences( tokenized_questions , maxlen=maxlen_questions , padding='post' )
encoder_input_data = np.array( padded_questions )
print( encoder_input_data.shape , maxlen_questions )
# decoder_input_data
tokenized_answers = tokenizer.texts_to_sequences( answers )
maxlen_answers = max( [ len(x) for x in tokenized_answers ] )
padded_answers = preprocessing.sequence.pad_sequences( tokenized_answers , maxlen=maxlen_answers , padding='post' )
decoder_input_data = np.array( padded_answers )
print( decoder_input_data.shape , maxlen_answers )
# decoder_output_data
tokenized_answers = tokenizer.texts_to_sequences( answers )
for i in range(len(tokenized_answers)) :
tokenized_answers[i] = tokenized_answers[i][1:]
padded_answers = preprocessing.sequence.pad_sequences( tokenized_answers , maxlen=maxlen_answers , padding='post' )
onehot_answers = utils.to_categorical( padded_answers , VOCAB_SIZE )
decoder_output_data = np.array( onehot_answers )
print( decoder_output_data.shape )
```
## 3) Defining the Encoder-Decoder model
The model will have Embedding, LSTM and Dense layers. The basic configuration is as follows.
* 2 Input Layers : One for `encoder_input_data` and another for `decoder_input_data`.
* Embedding layer : For converting token vectors to fixed-size dense vectors. **( Note : Don't forget the `mask_zero=True` argument here )**
* LSTM layer : Provides access to Long Short-Term Memory cells.
Working :
1. The `encoder_input_data` comes into the Embedding layer (  `encoder_embedding` ).
2. The output of the Embedding layer goes to the LSTM cell which produces 2 state vectors ( `h` and `c` which are `encoder_states` )
3. These states are set in the LSTM cell of the decoder.
4. The `decoder_input_data` comes in through the Embedding layer.
5. The embeddings go into the LSTM cell ( which has the encoder states ) to produce sequences.
<center><img style="float: center;" src="https://cdn-images-1.medium.com/max/1600/1*bnRvZDDapHF8Gk8soACtCQ.gif"></center>
Image credits to [Hackernoon](https://hackernoon.com/tutorial-3-what-is-seq2seq-for-text-summarization-and-why-68ebaa644db0).
```
encoder_inputs = tf.keras.layers.Input(shape=( maxlen_questions , ))
encoder_embedding = tf.keras.layers.Embedding( VOCAB_SIZE, 200 , mask_zero=True ) (encoder_inputs)
encoder_outputs , state_h , state_c = tf.keras.layers.LSTM( 200 , return_state=True )( encoder_embedding )
encoder_states = [ state_h , state_c ]
decoder_inputs = tf.keras.layers.Input(shape=( maxlen_answers , ))
decoder_embedding = tf.keras.layers.Embedding( VOCAB_SIZE, 200 , mask_zero=True) (decoder_inputs)
decoder_lstm = tf.keras.layers.LSTM( 200 , return_state=True , return_sequences=True )
decoder_outputs , _ , _ = decoder_lstm ( decoder_embedding , initial_state=encoder_states )
decoder_dense = tf.keras.layers.Dense( VOCAB_SIZE , activation=tf.keras.activations.softmax )
output = decoder_dense ( decoder_outputs )
model = tf.keras.models.Model([encoder_inputs, decoder_inputs], output )
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='categorical_crossentropy')
model.summary()
```
## 4) Training the model
We train the model for a number of epochs with `RMSprop` optimizer and `categorical_crossentropy` loss function.
```
model.fit([encoder_input_data , decoder_input_data], decoder_output_data, batch_size=50, epochs=150 )
model.save( 'model.h5' )
```
## 5) Defining inference models
We create inference models which help in predicting answers.
**Encoder inference model** : Takes the question as input and outputs LSTM states ( `h` and `c` ).
**Decoder inference model** : Takes 2 inputs, the LSTM states ( the output of the encoder model ) and the answer input sequences ( the ones not having the `<start>` tag ). It will output the answers for the question which we fed to the encoder model and its state values.
```
def make_inference_models():
encoder_model = tf.keras.models.Model(encoder_inputs, encoder_states)
decoder_state_input_h = tf.keras.layers.Input(shape=( 200 ,))
decoder_state_input_c = tf.keras.layers.Input(shape=( 200 ,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_embedding , initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = tf.keras.models.Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
return encoder_model , decoder_model
```
## 6) Talking with our Chatbot
First, we define a method `str_to_tokens` which converts `str` questions to Integer tokens with padding.
```
def str_to_tokens( sentence : str ):
words = sentence.lower().split()
tokens_list = list()
for word in words:
tokens_list.append( tokenizer.word_index[ word ] )
return preprocessing.sequence.pad_sequences( [tokens_list] , maxlen=maxlen_questions , padding='post')
```
1. First, we take a question as input and predict the state values using `enc_model`.
2. We set the state values in the decoder's LSTM.
3. Then, we generate a sequence which contains the `<start>` element.
4. We input this sequence in the `dec_model`.
5. We replace the `<start>` element with the element which was predicted by the `dec_model` and update the state values.
6. We carry out the above steps iteratively till we hit the `<end>` tag or the maximum answer length.
```
enc_model , dec_model = make_inference_models()
for _ in range(10):
states_values = enc_model.predict( str_to_tokens( input( 'Enter question : ' ) ) )
empty_target_seq = np.zeros( ( 1 , 1 ) )
empty_target_seq[0, 0] = tokenizer.word_index['start']
stop_condition = False
decoded_translation = ''
while not stop_condition :
dec_outputs , h , c = dec_model.predict([ empty_target_seq ] + states_values )
sampled_word_index = np.argmax( dec_outputs[0, -1, :] )
sampled_word = None
for word , index in tokenizer.word_index.items() :
if sampled_word_index == index :
decoded_translation += ' {}'.format( word )
sampled_word = word
if sampled_word == 'end' or len(decoded_translation.split()) > maxlen_answers:
stop_condition = True
empty_target_seq = np.zeros( ( 1 , 1 ) )
empty_target_seq[ 0 , 0 ] = sampled_word_index
states_values = [ h , c ]
print( decoded_translation )
```
## 7) Conversion to TFLite ( Optional )
We can convert our seq2seq model to a TensorFlow Lite model so that we can use it on edge devices.
```
!pip install tf-nightly
converter = tf.lite.TFLiteConverter.from_keras_model( enc_model )
buffer = converter.convert()
open( 'enc_model.tflite' , 'wb' ).write( buffer )
converter = tf.lite.TFLiteConverter.from_keras_model( dec_model )
buffer = converter.convert()
open( 'dec_model.tflite' , 'wb' ).write( buffer )
```
# Introduction
A mass on a spring experiences a force described by Hooke's law.
For a displacement $x$, the force is
$$F=-kx,$$
where $k$ is the spring constant with units of N/m.
The equation of motion is
$$ F = ma $$
or
$$ -k x = m a .$$
Because acceleration is the second derivative of displacement, this is
a differential equation,
$$ \frac{d^2 x}{dt^2} = -\frac{k}{m} x.$$
The solution to this equation is harmonic motion, for example
$$ x(t) = A\sin\omega t,$$
where $A$ is some amplitude and $\omega = \sqrt{k/m}$.
This can be verified by plugging the solution into the differential equation.
The angular frequency $\omega$ is related to the frequency $f$ and the period $T$ by
$$f = \omega/2\pi$$ and $$T=2\pi/\omega$$
We can illustrate this rather trivial case with an interactive plot.
```
import matplotlib.pyplot as plt
%matplotlib inline
import ipywidgets as widgets
def make_plot(t):
fig, ax = plt.subplots()
x,y = 0,0
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * t, y, 'bo')
plt.xlim(-1,1)
plt.ylim(-1,1)
widgets.interact(make_plot, t=(-1,1,0.1))
```
We want to generalize this result to several masses connected by several springs.
# The spring constant as a second derivative of potential
The force is related to potential energy by
$$ F = -\frac{d}{dx}V(x).$$
This equation comes directly from the definition that work is force times distance.
Integrating this, we find the potential energy of a mass on a spring,
$$ V(x) = \frac{1}{2}kx^2. $$
In fact, the spring constant can be defined to be the second derivative of the potential,
$$ k = \frac{d^2}{dx^2} V(x).$$ We take the value of the second derivative at the minimum
of the potential, which assumes that the oscillations are not very far from equilibrium.
We see that Hooke's law is simply
$$F = -\frac{d^2 V(x)}{dx^2} x, $$
where the second derivative is evaluated at the minimum of the potential.
For a general potential, we can write the equation of motion as
$$ \frac{d^2}{dt^2} x = -\frac{1}{m}\frac{d^2V(x)}{dx^2} x.$$
The expression on the right hand side is known as the dynamical matrix,
though this is a trivial 1x1 matrix.
# Two masses connected by a spring
Now the potential depends on two coordinates,
$$ V(x_1, x_2) = \frac{1}{2} k (x_1 - x_2 - d)^2,$$
where $d$ is the equilibrium separation of the particles.
Now the force on each particle depends on the positions of both of the particles,
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
\frac{\partial^2 V}{\partial x_1^2} &
\frac{\partial^2 V}{\partial x_1\partial x_2} \\
\frac{\partial^2 V}{\partial x_1\partial x_2} &
\frac{\partial^2 V}{\partial x_2^2} \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
Performing the derivatives, we find
$$
\begin{pmatrix}F_1 \\ F_2\end{pmatrix}
= -
\begin{pmatrix}
k & -k \\
-k & k \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
The equations of motion are coupled,
$$
\begin{pmatrix}
\frac{d^2x_1}{dt^2} \\
\frac{d^2x_2}{dt^2} \\
\end{pmatrix}
= -
\begin{pmatrix}
k/m & -k/m \\
-k/m & k/m \\
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2\end{pmatrix}
$$
To decouple the equations, we find the eigenvalues and eigenvectors.
```
import numpy as np
a = np.array([[1, -1], [-1, 1]])
freq, vectors = np.linalg.eig(a)
vectors = vectors.transpose()
```
The frequencies of the two modes of vibration are (in multiples of $\sqrt{k/m}$)
```
freq
```
The first mode is a vibrational mode where the masses vibrate against each other (moving in opposite directions). This can be seen from the eigenvector.
```
vectors[0]
```
The second mode is a translation mode with zero frequency—both masses move in the same direction.
```
vectors[1]
```
We can interactively illustrate the vibrational mode.
```
import matplotlib.pyplot as plt
%matplotlib inline
import ipywidgets as widgets
def make_plot(t):
fig, ax = plt.subplots()
x,y = np.array([-1,1]), np.array([0,0])
plt.plot(x, y, 'k.')
plt.plot(x + 0.3 * vectors[0] * t, y, 'bo')
plt.xlim(-1.5,1.5)
plt.ylim(-1.5,1.5)
widgets.interact(make_plot, t=(-1,1,0.1))
```
# Finding the dynamical matrix with numerical derivatives
We start from a function $V(x)$. If we want to calculate a derivative,
we just use the difference formula but don't take the delta too small.
Using $\Delta x = 10^{-6}$ is safe.
$$
F = -\frac{dV(x)}{dx} \approx
-\frac{V(x+\Delta x) - V(x-\Delta x)}{2\Delta x}
$$
Note that it is more accurate to do this symmetric difference formula
than it would be to use the usual forward derivative from calculus class.
It's easy to see this formula is just calculating the slope of the function using points near $x$.
```
def V(x):
return 0.5 * x**2
deltax = 1e-6
def F_approx(x):
    return -( V(x + deltax) - V(x - deltax) ) / (2 * deltax)
[(x, F_approx(x)) for x in np.linspace(-2,2,9)]
```
Next, we can find the second derivative by using the difference formula twice.
We find the nice expression,
$$
\frac{d^2V}{dx^2} \approx \frac{V(x+\Delta x) - 2V(x) + V(x-\Delta x)}{(\Delta x)^2}.
$$
This formula has the nice interpretation of comparing the value of $V(x)$ to
the average of points on either side. If it is equal to the average, the line
is straight and the second derivative is zero.
If average of the outer values is larger than $V(x)$, then the ends curve upward,
and the second derivative is positive.
Likewise, if the average of the outer values is less than $V(x)$, then the ends curve downward,
and the second derivative is negative.
```
def dV2dx2_approx(x):
return ( V(x + deltax) - 2 * V(x) + V(x - deltax) ) / deltax**2
[(x, dV2dx2_approx(x)) for x in np.linspace(-2,2,9)]
```
Now we can use these derivative formulas to calculate the dynamical matrix
for the two masses on one spring. We'll use $k=1$ and $m=1$ for simplicity.
```
def V2(x1, x2):
return 0.5 * (x1 - x2)**2
x1, x2 = -1, 1
mat = np.array(
[[(V2(x1+deltax, x2) - 2 * V2(x1,x2) + V2(x1-deltax, x2)) / deltax**2 ,
      (V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
       - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2],
     [(V2(x1+deltax, x2+deltax) - V2(x1-deltax, x2+deltax)
       - V2(x1+deltax, x2-deltax) + V2(x1-deltax, x2-deltax)) / (2*deltax)**2,
(V2(x1, x2+deltax) - 2 * V2(x1,x2) + V2(x1, x2-deltax)) / deltax**2 ]]
)
mat
freq, vectors = np.linalg.eig(mat)
vectors = vectors.transpose()
for f,v in zip(freq, vectors):
print("freqency", f, ", eigenvector", v)
```
For practical calculations, we have to automate this matrix construction for an arbitrary potential.
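A possible sketch of that automation (this helper is only an illustration, assuming unit masses and a potential `V` that accepts a 1D array of coordinates):
```
import numpy as np

# Illustrative sketch: numerical dynamical matrix for an arbitrary potential.
# V takes a 1D numpy array of coordinates; masses are assumed to be 1.
def dynamical_matrix(V, x, deltax=1e-6):
    n = len(x)
    mat = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                xp, xm = x.copy(), x.copy()
                xp[i] += deltax
                xm[i] -= deltax
                mat[i, i] = (V(xp) - 2 * V(x) + V(xm)) / deltax**2
            else:
                xpp, xpm, xmp, xmm = x.copy(), x.copy(), x.copy(), x.copy()
                xpp[[i, j]] += deltax
                xmm[[i, j]] -= deltax
                xpm[i] += deltax
                xpm[j] -= deltax
                xmp[i] -= deltax
                xmp[j] += deltax
                mat[i, j] = (V(xpp) - V(xpm) - V(xmp) + V(xmm)) / (2 * deltax)**2
    return mat

# Check against the two-mass example, now with a vector-argument potential.
def V_vec(x):
    return 0.5 * (x[0] - x[1])**2

dynamical_matrix(V_vec, np.array([-1.0, 1.0]))
```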
A **Deep Q Network** implementation in tensorflow with target network & random
experience replay. The code is tested with Gym's discrete action space
environment, CartPole-v0 on Colab.
---
## Notations:
Model network = $Q_{\theta}$
Model parameter = $\theta$
Model network Q value = $Q_{\theta}$ (s, a)
Target network = $Q_{\phi}$
Target parameter = $\phi$
Target network Q value = $Q_{\phi}$ ($s^{'}$, $a^{'}$)
---
## Equations:
TD target = r (s, a) $+$ $\gamma$ $max_{a^{'}}$ $Q_{\phi}$ ($s^{'}$, $a^{'}$)
TD error = (TD target) $-$ (Model network Q value)
= [r (s, a) $+$ $\gamma$ $max_{a^{'}}$ $Q_{\phi}$ ($s^{'}$, $a^{'}$)] $-$ $Q_{\theta}$ (s, a)
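As a quick sanity check of these two equations, here is a standalone NumPy illustration with made-up rewards and Q values and $\gamma = 0.9$ (this is separate from the TensorFlow implementation below):
```
import numpy as np

gamma = 0.9
r = np.array([[1.0], [0.0]])                    # r(s, a) for a minibatch of 2
done = np.array([[0], [1]])                     # 1 if s' is terminal
target_Q_val = np.array([[0.5, 2.0],            # Q_phi(s', a') for each a'
                         [1.0, 0.3]])
model_Q_val = np.array([[1.5],                  # Q_theta(s, a) for the chosen a
                        [0.2]])

max_target_Q = target_Q_val.max(axis=1, keepdims=True)
td_target = r + (1 - done) * gamma * max_target_Q   # r + gamma * max_a' Q_phi(s', a')
td_error = td_target - model_Q_val                  # TD target - model network Q value
print(td_target)
print(td_error)
```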
---
## Key implementation details:
Update target parameter $\phi$ with model parameter $\theta$.
Copy $\theta$ to $\phi$ with *either* soft or hard parameter update.
Hard parameter update:
```
with tf.variable_scope('hard_replace'):
self.target_replace_hard = [t.assign(m) for t, m in zip(self.target_net_params, self.model_net_params)]
```
```
# hard params replacement
if self.learn_step % self.tau_step == 0:
self.sess.run(self.target_replace_hard)
self.learn_step += 1
```
Soft parameter update: polyak $\cdot$ $\theta$ + (1 $-$ polyak) $\cdot$ $\phi$
```
with tf.variable_scope('soft_replace'):
self.target_replace_soft = [t.assign(self.polyak * m + (1 - self.polyak) * t)
for t, m in zip(self.target_net_params, self.model_net_params)]
```
Stop TD target from contributing to gradient computation:
```
# exclude td_target in gradient computation
td_target = tf.stop_gradient(td_target)
```
---
## References:
[Human-level control through deep reinforcement learning
(Mnih et al., 2015)](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf)
---
<br>
```
import tensorflow as tf
import gym
import numpy as np
from matplotlib import pyplot as plt
# random sampling for learning from experience replay
class Exp():
def __init__(self, obs_size, max_size):
self.obs_size = obs_size
self.num_obs = 0
self.max_size = max_size
self.mem_full = False
# memory structure that stores samples from observations
self.mem = {'s' : np.zeros(self.max_size * self.obs_size, dtype=np.float32).reshape(self.max_size,self.obs_size),
'a' : np.zeros(self.max_size * 1, dtype=np.int32).reshape(self.max_size,1),
'r' : np.zeros(self.max_size * 1).reshape(self.max_size,1),
'done' : np.zeros(self.max_size * 1, dtype=np.int32).reshape(self.max_size,1)}
    # stores a sample observation at each time step in experience memory
def store(self, s, a, r, done):
i = self.num_obs % self.max_size
self.mem['s'][i,:] = s
self.mem['a'][i,:] = a
self.mem['r'][i,:] = r
self.mem['done'][i,:] = done
self.num_obs += 1
if self.num_obs == self.max_size:
self.num_obs = 0 # reset number of observation
self.mem_full = True
# returns a minibatch of experience
def minibatch(self, minibatch_size):
if self.mem_full == False:
max_i = min(self.num_obs, self.max_size) - 1
else:
max_i = self.max_size - 1
# randomly sample a minibatch of indexes
sampled_i = np.random.randint(max_i, size=minibatch_size)
s = self.mem['s'][sampled_i,:].reshape(minibatch_size, self.obs_size)
a = self.mem['a'][sampled_i].reshape(minibatch_size)
r = self.mem['r'][sampled_i].reshape((minibatch_size,1))
s_next = self.mem['s'][sampled_i + 1,:].reshape(minibatch_size, self.obs_size)
done = self.mem['done'][sampled_i].reshape((minibatch_size,1))
return (s, a, r, s_next, done)
# Evaluates behavior policy while improving target policy
class DQN_agent():
def __init__(self, num_actions, obs_size, nhidden,
epoch,
epsilon, gamma, learning_rate,
replace, polyak, tau_step,
mem_size, minibatch_size):
super(DQN_agent, self).__init__()
self.actions = range(num_actions)
self.num_actions = num_actions
self.obs_size = obs_size # number of features
self.nhidden = nhidden # hidden nodes
self.epoch = epoch # for epsilon decay & to decide when to start training
self.epsilon = epsilon # for eploration
self.gamma = gamma # discount factor
self.learning_rate = learning_rate # learning rate alpha
# for params replacement
self.replace = replace # type of replacement
self.polyak = polyak # for soft replacement
self.tau_step = tau_step # for hard replacement
self.learn_step = 0 # steps after learning
# for Experience replay
self.mem = Exp(self.obs_size, mem_size) # memory that holds experiences
self.minibatch_size = minibatch_size
self.step = 0 # each step in a episode
# for tensorflow ops
self.built_graph()
self.sess = tf.Session()
self.sess.run(tf.global_variables_initializer())
self.sess.run(self.target_replace_hard)
self.cum_loss_per_episode = 0 # for charting display
# decay epsilon after each epoch
def epsilon_decay(self):
if self.step % self.epoch == 0:
self.epsilon = max(.01, self.epsilon * .95)
# epsilon-greedy behaviour policy for action selection
def act(self, state):
if np.random.random() < self.epsilon:
i = np.random.randint(0,len(self.actions))
else:
# get Q(s,a) from model network
Q_val = self.sess.run(self.model_Q_val, feed_dict={self.s: np.reshape(state, (1,state.shape[0]))})
# get index of largest Q(s,a)
i = np.argmax(Q_val)
action = self.actions[i]
self.step += 1
self.epsilon_decay()
return action
def learn(self, s, a, r, done):
# stores observation in memory as experience at each time step
self.mem.store(s, a, r, done)
# starts training a minibatch from experience after 1st epoch
if self.step > self.epoch:
self.replay() # start training with experience replay
def td_target(self, r, done, target_Q_val):
# select max Q values from target network (greedy policy)
max_target_Q_val = tf.reduce_max(target_Q_val, axis=1, keepdims=True)
# if state = done, td_target = r
td_target = (1.0 - tf.cast(done, tf.float32)) * tf.math.multiply(self.gamma, max_target_Q_val) + r
# exclude td_target in gradient computation
td_target = tf.stop_gradient(td_target)
return td_target
# select Q(s,a) from actions using e-greedy as behaviour policy from model network
def predicted_Q_val(self, a, model_Q_val):
# create 1D tensor of length = number of rows in a
arr = tf.range(tf.shape(a)[0], dtype=tf.int32)
# stack by column to create indices for Q(s,a) selections based on a
indices = tf.stack([arr, a], axis=1)
# select Q(s,a) using indice from model_Q_val
Q_val = tf.gather_nd(model_Q_val, indices)
Q_val = tf.reshape(Q_val, (self.minibatch_size, 1))
return Q_val
    # construct neural network
def built_net(self, var_scope, w_init, b_init, features, num_hidden, num_output):
with tf.variable_scope(var_scope):
feature_layer = tf.contrib.layers.fully_connected(features, num_hidden,
activation_fn = tf.nn.relu,
weights_initializer = w_init,
biases_initializer = b_init)
Q_val = tf.contrib.layers.fully_connected(feature_layer, num_output,
activation_fn = None,
weights_initializer = w_init,
biases_initializer = b_init)
return Q_val
    # construct tensorflow graph
def built_graph(self):
tf.reset_default_graph()
self.s = tf.placeholder(tf.float32, [None,self.obs_size], name='s')
self.a = tf.placeholder(tf.int32, [None,], name='a')
self.r = tf.placeholder(tf.float32, [None,1], name='r')
self.s_next = tf.placeholder(tf.float32, [None,self.obs_size], name='s_next')
self.done = tf.placeholder(tf.int32, [None,1], name='done')
# weight, bias initialization
w_init = tf.initializers.lecun_uniform()
b_init = tf.initializers.he_uniform(1e-4)
self.model_Q_val = self.built_net('model_net', w_init, b_init, self.s, self.nhidden, self.num_actions)
self.target_Q_val = self.built_net('target_net', w_init, b_init, self.s_next, self.nhidden, self.num_actions)
with tf.variable_scope('td_target'):
td_target = self.td_target(self.r, self.done, self.target_Q_val)
with tf.variable_scope('predicted_Q_val'):
predicted_Q_val = self.predicted_Q_val(self.a, self.model_Q_val)
with tf.variable_scope('loss'):
self.loss = tf.losses.huber_loss(td_target, predicted_Q_val)
with tf.variable_scope('optimizer'):
self.optimizer = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.loss)
# get network params
with tf.variable_scope('params'):
self.target_net_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='target_net')
self.model_net_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='model_net')
# replace target net params with model net params
with tf.variable_scope('hard_replace'):
self.target_replace_hard = [t.assign(m) for t, m in zip(self.target_net_params, self.model_net_params)]
with tf.variable_scope('soft_replace'):
self.target_replace_soft = [t.assign(self.polyak * m + (1 - self.polyak) * t)
for t, m in zip(self.target_net_params, self.model_net_params)]
# decide soft or hard params replacement
def replace_params(self):
if self.replace == 'soft':
# soft params replacement
self.sess.run(self.target_replace_soft)
else:
# hard params replacement
if self.learn_step % self.tau_step == 0:
self.sess.run(self.target_replace_hard)
self.learn_step += 1
def replay(self):
# select minibatch of experiences from memory for training
(s, a, r, s_next, done) = self.mem.minibatch(self.minibatch_size)
# training
_, loss = self.sess.run([self.optimizer, self.loss], feed_dict = {self.s: s,
self.a: a,
self.r: r,
self.s_next: s_next,
self.done: done})
self.cum_loss_per_episode += loss
self.replace_params()
# compute stats
def stats(r_per_episode, R, cum_R, cum_R_episodes,
cum_loss_per_episode, cum_loss, cum_loss_episodes):
r_per_episode = np.append(r_per_episode, R) # store reward per episode
cum_R_episodes += R
cum_R = np.append(cum_R, cum_R_episodes) # store cumulative reward of all episodes
cum_loss_episodes += cum_loss_per_episode
cum_loss = np.append(cum_loss, cum_loss_episodes) # store cumulative loss of all episodes
return (r_per_episode, cum_R_episodes, cum_R, cum_loss_episodes, cum_loss)
# plot performance
def plot_charts(values, y_label):
fig = plt.figure(figsize=(10,5))
plt.title("DQN performance")
plt.xlabel("Episode")
plt.ylabel(y_label)
plt.plot(values)
plt.show(fig)
def display(r_per_episode, cum_R, cum_loss):
plot_charts(r_per_episode, "Reward")
plot_charts(cum_R, "cumulative_reward")
plot_charts(cum_loss, "cumulative_loss")
avg_r = np.sum(r_per_episode) / max_episodes
print("avg_r", avg_r)
avg_loss = np.sum(cum_loss) / max_episodes
print("avg_loss", avg_loss)
def run_episodes(env, agent, max_episodes):
r_per_episode = np.array([0])
cum_R = np.array([0])
cum_loss = np.array([0])
cum_R_episodes = 0
cum_loss_episodes = 0
# repeat each episode
for episode_number in range(max_episodes):
s = env.reset() # reset new episode
done = False
R = 0
# repeat each step
while not done:
# select action using behaviour policy(epsilon-greedy) from model network
a = agent.act(s)
# take action in environment
next_s, r, done, _ = env.step(a)
# agent learns
agent.learn(s, a, r, done)
s = next_s
R += r
(r_per_episode, cum_R_episodes, cum_R, cum_loss_episodes, cum_loss) = stats(r_per_episode, R, cum_R, cum_R_episodes,
agent.cum_loss_per_episode, cum_loss, cum_loss_episodes)
display(r_per_episode, cum_R, cum_loss)
env.close()
env = gym.make('CartPole-v0') # openai gym environment
#env = gym.make('Pong-v0') # openai gym environment
max_episodes = 500
epoch = 100
num_actions = env.action_space.n # number of possible actions
obs_size = env.observation_space.shape[0] # dimension of state space
nhidden = 128 # number of hidden nodes
epsilon = .9
gamma = .9
learning_rate = .3
replace = 'soft' # params replacement type, 'soft' for soft replacement or empty string '' for hard replacement
polyak = .001
tau_step = 300
mem_size = 30000
minibatch_size = 64
%matplotlib inline
agent = DQN_agent(num_actions, obs_size, nhidden,
epoch,
epsilon, gamma, learning_rate,
replace, polyak, tau_step,
mem_size, minibatch_size)
run_episodes(env, agent, max_episodes)
```
# Generating conditional probability tables subject to constraints
```
import os
from pathlib import Path
from itertools import product
import numpy as np
import pandas as pd
from fake_data_for_learning.fake_data_for_learning import (
BayesianNodeRV, FakeDataBayesianNetwork, SampleValue
)
from fake_data_for_learning.utils import RandomCpt
from fake_data_for_learning.probability_polytopes import (
MapMultidimIndexToLinear, ProbabilityPolytope, ExpectationConstraint
)
```
Suppose we want to generate data from a discrete Bayesian network, such as
Product -> Days <- Rating,
where e.g. Product is the (insurance) product name, Rating is rating strength (i.e. market price / technical price) for a submission, and Days is the number of days to generate a quote for the submission.
The number of entries in probability and conditional probability tables to define this Bayesian network is
$ | Product | + | Rating | + | Product | \times | Rating | \times | Days |$.
For example, let us define Product and Rating as follows
```
product_values = ['financial', 'liability', 'property']
product_type = BayesianNodeRV('product_type', np.array([0.2, 0.5, 0.3]), values=product_values)
rating_values = range(2)
rating = BayesianNodeRV('rating', np.array([0.3, 0.7]))
```
Suppose that Days is also discrete, e.g.
```
days_values = range(4)
```
Then if we choose the ordering of the conditional probability table axes as Product, Rating, Days, we can generate the entries of the conditional probability table for Days conditioned on Product and Rating with `utils.RandomCpt`:
```
random_cpt = RandomCpt(len(product_values), len(rating_values), len(days_values))
X = random_cpt()
X[0, 0, :].sum()
```
So the total number of probability table entries to specify is, as in the formula above,
```
f'Number of probability table entries: {len(product_values) + len(rating_values) + (len(product_values) * len(rating_values) * len(days_values))}'
```
It would be nice to specify certain properties of the matrix without having to change entries individually. For example, we may want to insist that
\begin{equation*}
E(D | P = property) = 3.5 \\
E(D | P = financial) = 1.0 \\
E(D | P= liability) = 2.0
\end{equation*}
Denote the entries of the conditional probability table as
$$(\rho_{p, r | d})$$
Then the above constraints become
\begin{equation*}
\frac{1}{|R|} \sum_{r, d} d \, \rho_{\mathrm{property},\, r\, | d} = 3.5 \\
\frac{1}{|R|} \sum_{r, d} d \, \rho_{\mathrm{financial},\, r\, | d} = 1.0\\
\frac{1}{|R|} \sum_{r, d} d \, \rho_{\mathrm{liability},\, r\, | d} = 2.0.
\end{equation*}
As $(\rho)$ is a conditional probability table, we also have the constraints
\begin{equation*}
0 \leq \rho_{p,\,r\,|d} \leq 1 \textrm{ for all }(p,\,r,\,d),\\
\sum_{d} \rho_{p,\,r,\,| d} = 1 \textrm{ for each pair } (p, \, r)
\end{equation*}
Together, these constraints define a convex polytope contained in the (probability) simplex $\Delta_{R-1} \subseteq \mathbb{R}^{R}$, where $R = |Product | \times | Rating | \times | Days|$ (see e.g. Chapter 1 of *Lectures on Algebraic Statistics*, Drton, Sturmfels, Sullivant). This polytope is defined as an intersection of half-spaces, i.e. using the so-called *H-representation* of the polytope; see *Lectures on Polytopes* by Ziegler, Chapters 0 and 1.
To generate a random (conditional) probability table subject to these constraints, the vertex-, or *V-representation*, of the probability polytope $P$ is much more useful, because given a vertex matrix $V$, whose columns are the vertices of $P$ in $\mathbb{R}^R$, all points in $P$ can be obtained as
$$
\begin{equation*}
x = V \cdot t
\end{equation*}
$$
where $t \in \mathbb{R}^N$, with $N$ being the number of vertices for $P$, and $t$ satisfying $0 \leq t_i \leq 1$, $\sum t_i = 1$.
Once we have determined the V-representation $V$, the problem of generating conditional probability tables subject to our given expectation value constraints reduces to the much simpler problem of generating convex-combination weight vectors $t$ in the unit simplex in $\mathbb{R}^N$.
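As an illustration of that last step (a sketch of the idea only, not the library's actual implementation), one simple way to draw such a weight vector $t$ is to sample it from a flat Dirichlet distribution, which automatically satisfies $0 \leq t_i \leq 1$ and $\sum_i t_i = 1$:
```
import numpy as np

# Sketch: sample a random point of a polytope from its vertex matrix V,
# whose columns are the vertices, by drawing convex weights t ~ Dirichlet(1,...,1).
def random_point_from_vertices(V, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    t = rng.dirichlet(np.ones(V.shape[1]))   # 0 <= t_i <= 1, sum(t_i) = 1
    return V @ t

# Example: the two vertices of the Bernoulli probability polytope in R^2
V = np.array([[1.0, 0.0],
              [0.0, 1.0]])
random_point_from_vertices(V)
```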
Before we get to our goal of generating these probability tables for our hit ratio problem, let's look at elementary examples.
## (Conditional) Probability Polytopes
The simplest example of a probability polytope is that of a Bernoulli random variable.
```
bernoulli = ProbabilityPolytope(('outcome',), dict(outcome=range(2)))
A, b = bernoulli.get_probability_half_planes()
print(A, '\n', b)
```
We convert the formulation A x <= b to the V-description
```
bernoulli.get_vertex_representation()
tertiary = ProbabilityPolytope(('outcome',), dict(outcome=range(3)))
tertiary.get_vertex_representation()
conditional_bernoullis = ProbabilityPolytope(
('input', 'output'), dict(input=range(2), output=range(2))
)
conditional_bernoullis.get_vertex_representation()
```
The benefit of having the vertex-representation (V-representation) of the probability polytope is that generating random (conditional) probability tables is straightforward, namely, we can get all elements of the probability polytope by taking combinations of the vertex (column) vectors.
In the flattened coordinates, we have, e.g.
```
conditional_bernoullis.generate_flat_random_cpt()
```
In the multidimensional coordinates for conditional probability tables here, we have e.g.
```
conditional_bernoullis.generate_random_cpt()
```
## Adding constraints on conditional expectation values
```
conditional_bernoullis.set_expectation_constraints(
[ExpectationConstraint(equation=dict(input=1), moment=1, value=0.5)]
)
conditional_bernoullis.get_expect_equations_col_indices(conditional_bernoullis.expect_constraints[0].equation)
conditional_bernoullis.get_vertex_representation()
conditional_bernoullis.generate_random_cpt()
two_input_constrained_polytope = ProbabilityPolytope(
('input', 'more_input', 'output'),
dict(input=['hi', 'low'], more_input=range(2), output=range(2))
)
two_input_constrained_polytope.set_expectation_constraints(
[ExpectationConstraint(equation=dict(more_input=0), moment=1, value=0.25)]
)
two_input_constrained_polytope.get_vertex_representation()
```
## Hit rate polytope again
```
days_polytope = ProbabilityPolytope(
('product', 'rating', 'days'),
coords = {
'product': product_values,
'rating': rating_values,
'days': days_values
}
)
days_polytope.set_expectation_constraints(
[
ExpectationConstraint(equation=dict(product='financial'), moment=1, value=0.2),
ExpectationConstraint(equation=dict(product='liability'), moment=1, value=0.9),
ExpectationConstraint(equation=dict(product='property'), moment=1, value=0.5),
]
)
days_cpt = days_polytope.generate_random_cpt()
days_cpt
```
Now we create our Bayesian network with desired constraints on some expectation values
```
days = BayesianNodeRV('days', days_cpt, parent_names=['product_type', 'rating'])
bn = FakeDataBayesianNetwork(product_type, rating)#, days)
bn = FakeDataBayesianNetwork(product_type, rating, days)
bn.rvs(10)
```
# Feldman and Cousins intervals with asymptotics.
This is a copy of `FC_interval_freq.ipynb` using the asymptotic formulae instead of toys.
```
import numpy as np
import matplotlib.pyplot as plt
import os
import time
import zfit
from zfit.loss import UnbinnedNLL
from zfit.minimize import Minuit
zfit.settings.set_seed(10)
from hepstats.hypotests.calculators import AsymptoticCalculator
from hepstats.hypotests import ConfidenceInterval
from hepstats.hypotests.parameters import POIarray
from hepstats.hypotests.exceptions import POIRangeError
from utils import one_minus_cl_plot, pltdist, plotfitresult
```
In this example we consider an experiment where the observable $x$ is simply the measured value of $\mu$ in an experiment with a Gaussian resolution with known width $\sigma = 1$. We will compute the confidence belt at the 90 % confidence level for the mean of the Gaussian, $\mu$.
We define a sampler below for a Gaussian pdf with $\sigma = 1$ using the `zfit` library, the sampler allows to generate samples for different values of $\mu$. 1000 entries are generated for each sample.
```
bounds = (-10, 10)
obs = zfit.Space('x', limits=bounds)
mean = zfit.Parameter("mean", 0)
sigma = zfit.Parameter("sigma", 1.0)
model = zfit.pdf.Gauss(obs=obs, mu=mean, sigma=sigma)
data = model.create_sampler(1000)
data.resample()
```
Below we define the negative log-likelihood function, which is needed to compute Feldman and Cousins intervals as described in [arXiv:1109.0714](https://arxiv.org/abs/1109.0714). The negative log-likelihood is minimised to compute the measured mean $x$ and its uncertainty $\sigma_x$.
```
# Create the negative log likelihood
nll = UnbinnedNLL(model=model, data=data)
# Instantiate a minuit minimizer
minimizer = Minuit(verbosity=0)
# minimisation of the loss function
minimum = minimizer.minimize(loss=nll)
minimum.hesse();
print(minimum)
x_err = minimum.params[mean]["minuit_hesse"]["error"]
```
To compute the confidence belt on $\mu$, 90 % CL intervals have to be computed for several values of the measured mean $x$. Samples are generated for $\mu = n \times \sigma_x$ with $n = -6, -5, -4, ..., 3, 4, 5, 6$, and fitted to measure the mean $x_n$.
90 % CL intervals are evaluated for each $x_n$ for the two following cases, $\mu > 0$ and $\mu$ unbounded.
With `hepstats`, The intervals are obtained with `ConfidenceInterval` object using a calculator. Here the calculator is the `AsymptoticCalculator` which computes the intervals using asymptotic formulae (see [Asymptotic formulae for likelihood-based tests of new physics](https://arxiv.org/pdf/1007.1727.pdf)), an example of a 68 % CL interval with the `AsymptoticCalculator` can be found [here](https://github.com/scikit-hep/hepstats/blob/master/notebooks/hypotests/confidenceinterval_asy_zfit.ipynb).
The option `qtilde = True` should be used if $\mu > 0$.
```
results = {}
for n in np.arange(-6, 7, 1.0):
x = n * x_err
if n not in results:
zfit.settings.set_seed(5)
data.resample(param_values={mean: x})
minimum = minimizer.minimize(loss=nll)
minimum.hesse();
results_n = {}
results_n["x"] = minimum.params[mean]["value"]
results_n["x_err"] = minimum.params[mean]["minuit_hesse"]["error"]
calculator = AsymptoticCalculator(minimum, minimizer)
x_min = results_n["x"] - results_n["x_err"]*3
x_max = results_n["x"] + results_n["x_err"]*3
if n < -1:
x_max = max(0.5 * results_n["x_err"], x_max)
poinull = POIarray(mean, np.linspace(x_min, x_max, 50))
results_n["calculator"] = calculator
results_n["poinull"] = poinull
else:
results_n = results[n]
calculator = results_n["calculator"]
poinull = results_n["poinull"]
if "mu_lower" not in results_n:
for qtilde in [True, False]:
while True:
try:
ci = ConfidenceInterval(calculator, poinull, qtilde=qtilde)
interval = ci.interval(alpha=0.05, printlevel=0)
break
except POIRangeError:
values = poinull.values
poinull = POIarray(mean, np.concatenate([values, [values[-1] + np.diff(values)[0]]]))
results_n["poinull"] = poinull
if qtilde:
results_n["mu_lower"] = interval["lower"]
results_n["mu_upper"] = interval["upper"]
else:
results_n["mu_lower_unbound"] = interval["lower"]
results_n["mu_upper_unbound"] = interval["upper"]
results[n] = results_n
```
The plot of the confidence belt of $\mu$ at 90 % CL as function of the measured mean values $x$ (in unit of $\sigma_x$), for the bounded and unbounded case are shown below.
```
f = plt.figure(figsize=(9, 8))
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_upper_unbound"]/v["x_err"] for v in results.values()], color="black", label="90 % CL, no boundaries")
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_lower_unbound"]/v["x_err"] for v in results.values()], color="black")
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_upper"]/v["x_err"] for v in results.values()], "--", color="crimson", label="90 % CL, $\mu > 0$")
plt.plot([v["x"]/v["x_err"] for v in results.values()],
[v["mu_lower"]/v["x_err"] for v in results.values()], "--", color="crimson")
plt.ylim(0.)
plt.legend(fontsize=15)
plt.ylabel("Mean $\mu$", fontsize=15)
plt.xlabel("Measured mean $x$", fontsize=15);
```
For the unbounded and the $\mu > 0$ cases the plot reproduces the figure 3 and 10, respectively, of [A Unified Approach to the Classical Statistical Analysis of Small Signals, Gary J. Feldman, Robert D. Cousins](https://arxiv.org/pdf/physics/9711021.pdf).
```
%load_ext watermark
%watermark -p torch,pytorch_lightning,torchvision,torchmetrics,matplotlib
%load_ext pycodestyle_magic
%flake8_on --ignore W291,W293,E703
```
<a href="https://pytorch.org"><img src="https://raw.githubusercontent.com/pytorch/pytorch/master/docs/source/_static/img/pytorch-logo-dark.svg" width="90"/></a> <a href="https://www.pytorchlightning.ai"><img src="https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/docs/source/_static/images/logo.svg" width="150"/></a>
# Model Zoo -- VGG16 Trained on CIFAR-10
This notebook implements the VGG16 convolutional network [1] and applies it to CIFAR-10 image classification.

### References
- [1] Simonyan, K., & Zisserman, A. (2014). [Very deep convolutional networks for large-scale image recognition](https://arxiv.org/abs/1409.1556). arXiv preprint arXiv:1409.1556.
## General settings and hyperparameters
- Here, we specify some general hyperparameter values and general settings
- Note that for small datasets, it is not necessary, and better not, to use multiple workers as it can sometimes cause issues with too many open files in PyTorch. So, if you have problems with the data loader later, try setting `NUM_WORKERS = 0` instead.
```
BATCH_SIZE = 256
NUM_EPOCHS = 25
LEARNING_RATE = 0.001
NUM_WORKERS = 4
```
## Implementing a Neural Network using PyTorch Lightning's `LightningModule`
- In this section, we set up the main model architecture using the `LightningModule` from PyTorch Lightning.
- When using PyTorch Lightning, we can start with defining our neural network model in pure PyTorch, and then we use it in the `LightningModule` to get all the extra benefits that PyTorch Lightning provides.
- In this case, we define the VGG16 architecture ourselves as a plain PyTorch `nn.Module`:
```
import torch.nn as nn
class PyTorchVGG16(nn.Module):
def __init__(self, num_classes):
super().__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
self.block_1 = nn.Sequential(
nn.Conv2d(in_channels=3,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
# (1(32-1)- 32 + 3)/2 = 1
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=64,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_2 = nn.Sequential(
nn.Conv2d(in_channels=64,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=128,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_3 = nn.Sequential(
nn.Conv2d(in_channels=128,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_4 = nn.Sequential(
nn.Conv2d(in_channels=256,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_5 = nn.Sequential(
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.features = nn.Sequential(
self.block_1, self.block_2,
self.block_3, self.block_4,
self.block_5
)
self.classifier = nn.Sequential(
nn.Linear(512, 4096),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(4096, num_classes),
)
# self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
for m in self.modules():
if isinstance(m, torch.nn.Conv2d):
#n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
#m.weight.data.normal_(0, np.sqrt(2. / n))
m.weight.detach().normal_(0, 0.05)
if m.bias is not None:
m.bias.detach().zero_()
elif isinstance(m, torch.nn.Linear):
m.weight.detach().normal_(0, 0.05)
m.bias.detach().detach().zero_()
def forward(self, x):
x = self.features(x)
# x = self.avgpool(x)
x = x.view(x.size(0), -1)
logits = self.classifier(x)
return logits
```
- Next, we can define our `LightningModule` as a wrapper around our PyTorch model:
```
import pytorch_lightning as pl
import torchmetrics
# LightningModule that receives a PyTorch model as input
class LightningModel(pl.LightningModule):
def __init__(self, model, learning_rate):
super().__init__()
self.learning_rate = learning_rate
# The inherited PyTorch module
self.model = model
# Save settings and hyperparameters to the log directory
# but skip the model parameters
self.save_hyperparameters(ignore=['model'])
# Set up attributes for computing the accuracy
self.train_acc = torchmetrics.Accuracy()
self.valid_acc = torchmetrics.Accuracy()
self.test_acc = torchmetrics.Accuracy()
# Defining the forward method is only necessary
# if you want to use a Trainer's .predict() method (optional)
def forward(self, x):
return self.model(x)
# A common forward step to compute the loss and labels
# this is used for training, validation, and testing below
def _shared_step(self, batch):
features, true_labels = batch
logits = self(features)
loss = torch.nn.functional.cross_entropy(logits, true_labels)
predicted_labels = torch.argmax(logits, dim=1)
return loss, true_labels, predicted_labels
def training_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("train_loss", loss)
# To account for Dropout behavior during evaluation
self.model.eval()
with torch.no_grad():
_, true_labels, predicted_labels = self._shared_step(batch)
self.train_acc.update(predicted_labels, true_labels)
self.log("train_acc", self.train_acc, on_epoch=True, on_step=False)
self.model.train()
return loss # this is passed to the optimzer for training
def validation_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.log("valid_loss", loss)
self.valid_acc(predicted_labels, true_labels)
self.log("valid_acc", self.valid_acc,
on_epoch=True, on_step=False, prog_bar=True)
def test_step(self, batch, batch_idx):
loss, true_labels, predicted_labels = self._shared_step(batch)
self.test_acc(predicted_labels, true_labels)
self.log("test_acc", self.test_acc, on_epoch=True, on_step=False)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
```
## Setting up the dataset
- In this section, we are going to set up our dataset.
### Inspecting the dataset
```
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
train_dataset = datasets.CIFAR10(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
train_loader = DataLoader(dataset=train_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=True,
shuffle=True)
test_dataset = datasets.CIFAR10(root='./data',
train=False,
transform=transforms.ToTensor())
test_loader = DataLoader(dataset=test_dataset,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
drop_last=False,
shuffle=False)
from collections import Counter
train_counter = Counter()
for images, labels in train_loader:
train_counter.update(labels.tolist())
print('\nTraining label distribution:')
sorted(train_counter.items(), key=lambda pair: pair[0])
test_counter = Counter()
for images, labels in test_loader:
test_counter.update(labels.tolist())
print('\nTest label distribution:')
sorted(test_counter.items(), key=lambda pair: pair[0])
```
### A quick visual check
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torchvision
for images, labels in train_loader:
break
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(torchvision.utils.make_grid(
images[:64],
padding=2,
normalize=True),
(1, 2, 0)))
plt.show()
```
### Performance baseline
- Especially for imbalanced datasets, it's quite useful to compute a performance baseline.
- In classification contexts, a useful baseline is to compute the accuracy for a scenario where the model always predicts the majority class -- you want your model to be better than that!
```
majority_class = test_counter.most_common(1)[0]
majority_class
```
- (To be fair, the classes in the test set are perfectly evenly distributed, so the majority class is an arbitrary choice in this case)
```
baseline_acc = majority_class[1] / sum(test_counter.values())
print('Accuracy when always predicting the majority class:')
print(f'{baseline_acc:.2f} ({baseline_acc*100:.2f}%)')
```
### Setting up a `DataModule`
- There are three main ways we can prepare the dataset for Lightning. We can
1. make the dataset part of the model;
2. set up the data loaders as usual and feed them to the fit method of a Lightning Trainer -- the Trainer is introduced in the next subsection;
3. create a LightningDataModule.
- Here, we are going to use approach 3, which is the most organized approach. The `LightningDataModule` consists of several self-explanatory methods as we can see below:
```
import os
from torch.utils.data.dataset import random_split
from torch.utils.data import DataLoader
from torchvision import transforms
class DataModule(pl.LightningDataModule):
def __init__(self, data_path='./'):
super().__init__()
self.data_path = data_path
def prepare_data(self):
datasets.CIFAR10(root=self.data_path,
download=True)
self.train_transform = torchvision.transforms.Compose([
# torchvision.transforms.Resize((70, 70)),
# torchvision.transforms.RandomCrop((64, 64)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
self.test_transform = torchvision.transforms.Compose([
# torchvision.transforms.Resize((70, 70)),
# torchvision.transforms.CenterCrop((64, 64)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
return
def setup(self, stage=None):
train = datasets.CIFAR10(root=self.data_path,
train=True,
transform=self.train_transform,
download=False)
self.test = datasets.CIFAR10(root=self.data_path,
train=False,
transform=self.test_transform,
download=False)
self.train, self.valid = random_split(train, lengths=[45000, 5000])
def train_dataloader(self):
train_loader = DataLoader(dataset=self.train,
batch_size=BATCH_SIZE,
drop_last=True,
shuffle=True,
num_workers=NUM_WORKERS)
return train_loader
def val_dataloader(self):
valid_loader = DataLoader(dataset=self.valid,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return valid_loader
def test_dataloader(self):
test_loader = DataLoader(dataset=self.test,
batch_size=BATCH_SIZE,
drop_last=False,
shuffle=False,
num_workers=NUM_WORKERS)
return test_loader
```
- Note that the `prepare_data` method is usually used for steps that only need to be executed once, for example, downloading the dataset; the `setup` method defines the dataset loading -- if you run your code in a distributed setting, this will be called on each node / GPU.
- Next, let's initialize the `DataModule`; we use a random seed for reproducibility (so that the dataset is shuffled the same way when we re-execute this code):
```
import torch
torch.manual_seed(1)
data_module = DataModule(data_path='./data')
```
## Training the model using the PyTorch Lightning Trainer class
- Next, we initialize our model.
- Also, we define a callback so that we can obtain the model with the best validation set performance after training.
- PyTorch Lightning offers [many advanced logging services](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) like Weights & Biases. Here, we will keep things simple and use the `CSVLogger`:
```
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger
pytorch_model = PyTorchVGG16(num_classes=10)
lightning_model = LightningModel(
pytorch_model, learning_rate=LEARNING_RATE)
callbacks = [ModelCheckpoint(
save_top_k=1, mode='max', monitor="valid_acc")] # save top 1 model
logger = CSVLogger(save_dir="logs/", name="my-model")
```
- Now it's time to train our model:
```
import time
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
callbacks=callbacks,
progress_bar_refresh_rate=50, # recommended for notebooks
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
logger=logger,
log_every_n_steps=100)
start_time = time.time()
trainer.fit(model=lightning_model, datamodule=data_module)
runtime = (time.time() - start_time)/60
print(f"Training took {runtime:.2f} min in total.")
```
## Evaluating the model
- After training, let's plot our training ACC and validation ACC using pandas, which, in turn, uses matplotlib for plotting (you may want to consider a [more advanced logger](https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html) that does that for you):
```
import pandas as pd
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
aggreg_metrics = []
agg_col = "epoch"
for i, dfg in metrics.groupby(agg_col):
agg = dict(dfg.mean())
agg[agg_col] = i
aggreg_metrics.append(agg)
df_metrics = pd.DataFrame(aggreg_metrics)
df_metrics[["train_loss", "valid_loss"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='Loss')
df_metrics[["train_acc", "valid_acc"]].plot(
grid=True, legend=True, xlabel='Epoch', ylabel='ACC')
```
- The `trainer` automatically saves the model with the best validation accuracy for us, which we can load from the checkpoint via the `ckpt_path='best'` argument; below we use the `trainer` instance to evaluate the best model on the test set:
```
trainer.test(model=lightning_model, datamodule=data_module, ckpt_path='best')
```
## Predicting labels of new data
- You can use the `trainer.predict` method on a new `DataLoader` or `DataModule` to apply the model to new data.
- Alternatively, you can also manually load the best model from a checkpoint as shown below:
```
path = trainer.checkpoint_callback.best_model_path
print(path)
lightning_model = LightningModel.load_from_checkpoint(
path, model=pytorch_model)
lightning_model.eval();
```
- Note that our PyTorch model, which is passed to the Lightning model, requires input arguments. However, this is automatically taken care of since we used `self.save_hyperparameters()` in our PyTorch model's `__init__` method.
- Now, below is an example applying the model manually. Here, pretend that the `test_dataloader` is a new data loader.
```
test_dataloader = data_module.test_dataloader()
all_true_labels = []
all_predicted_labels = []
for batch in test_dataloader:
features, labels = batch
with torch.no_grad():
logits = lightning_model(features)
predicted_labels = torch.argmax(logits, dim=1)
all_predicted_labels.append(predicted_labels)
all_true_labels.append(labels)
all_predicted_labels = torch.cat(all_predicted_labels)
all_true_labels = torch.cat(all_true_labels)
all_predicted_labels[:5]
```
Just as an internal check, if the model was loaded correctly, the test accuracy below should be identical to the test accuracy we saw earlier in the previous section.
```
test_acc = torch.mean((all_predicted_labels == all_true_labels).float())
print(f'Test accuracy: {test_acc:.4f} ({test_acc*100:.2f}%)')
```
## Inspecting Failure Cases
- In practice, it is often informative to look at failure cases like wrong predictions for particular training instances as it can give us some insights into the model behavior and dataset.
- Inspecting failure cases can sometimes reveal interesting patterns and even highlight dataset and labeling issues.
```
# Append the folder that contains the
# helper_data.py, helper_plotting.py, and helper_evaluate.py
# files so we can import from them
import sys
sys.path.append('../pytorch_ipynb')
from helper_data import UnNormalize
from helper_plotting import show_examples
class_dict = {0: 'airplane',
1: 'automobile',
2: 'bird',
3: 'cat',
4: 'deer',
5: 'dog',
6: 'frog',
7: 'horse',
8: 'ship',
9: 'truck'}
# We normalized each channel during training; here
# we are reverting the normalization so that we
# can plot them as images
unnormalizer = UnNormalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
show_examples(
model=lightning_model,
data_loader=test_dataloader,
unnormalizer=unnormalizer,
class_dict=class_dict)
from torchmetrics import ConfusionMatrix
cmat = ConfusionMatrix(num_classes=len(class_dict))
for x, y in test_dataloader:
pred = lightning_model(x)
cmat(pred, y)
cmat_tensor = cmat.compute()
from helper_plotting import plot_confusion_matrix
plot_confusion_matrix(
cmat_tensor.numpy(),
class_names=class_dict.values())
plt.show()
```
## Single-image usage
```
%matplotlib inline
import matplotlib.pyplot as plt
```
- Assume we have a single image as shown below:
```
from PIL import Image
image = Image.open('data/cifar10_pngs/90_airplane.png')
plt.imshow(image, cmap='Greys')
plt.show()
```
- Note that we have to use the same image transformation that we used earlier in the `DataModule`.
- While we didn't apply any image augmentation, we could use the `to_tensor` function from the torchvision library; however, as a general template that provides flexibility for more complex transformation chains, let's use the `Compose` class for this:
```
transform = data_module.train_transform
image_chw = transform(image)
```
- Note that `ToTensor` returns the image in the CHW format. CHW refers to the dimensions and stands for channel, height, and width.
```
print(image_chw.shape)
```
- However, the PyTorch / PyTorch Lightning model expects images in NCHW format, where N stands for the number of images (e.g., in a batch).
- We can add the additional batch dimension via `unsqueeze` as shown below:
```
image_nchw = image_chw.unsqueeze(0)
print(image_nchw.shape)
```
- Now that we have the image in the right format, we can feed it to our classifier:
```
with torch.no_grad(): # since we don't need to backprop
logits = lightning_model(image_nchw)
probas = torch.softmax(logits, axis=1)
predicted_label = torch.argmax(probas)
int_to_str = {
0: 'airplane',
1: 'automobile',
2: 'bird',
3: 'cat',
4: 'deer',
5: 'dog',
6: 'frog',
7: 'horse',
8: 'ship',
9: 'truck'}
print(f'Predicted label: {int_to_str[predicted_label.item()]}')
print(f'Class-membership probability {probas[0][predicted_label]*100:.2f}%')
```
### Abstract
This is an example showing how to use the basic (low-level) API of TensorFlow to construct a linear regression model.
This notebook is an exercise adapted from [the Medium.com blog](https://medium.com/@saxenarohan97/intro-to-tensorflow-solving-a-simple-regression-problem-e87b42fd4845).
Note that recent versions of TensorFlow also ship more advanced APIs, such as LinearClassifier, that provide a scikit-learn-like machine learning interface.
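As a rough sketch of what that higher-level route looks like (assuming the TF 1.x-era `tf.estimator` API; the `LinearRegressor` estimator and the random stand-in arrays below are illustrative and not part of this notebook):
```
# Sketch only: high-level estimator API (TF 1.x style), with stand-in data.
import numpy as np
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('x', shape=[12])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': np.random.rand(300, 12)},  # stand-in features
    y=np.random.rand(300, 1),          # stand-in targets
    batch_size=32, num_epochs=None, shuffle=True)

estimator.train(input_fn=input_fn, steps=1000)
```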
```
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_boston
from sklearn.preprocessing import scale
from matplotlib import pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15,6
```
Split the data into training, validation and test sets.
```
# Retrieve the data
bunch = load_boston()
print('total data shape:', bunch.data.shape)
total_features = bunch.data[:, range(12)]
total_prices = bunch.data[:, [12]]
print('features shape:', total_features.shape, 'target shape:', total_prices.shape)
# new in 0.18 version
# total_features, total_prices = load_boston(True)
# Keep 300 samples for training
train_features = scale(total_features[:300])
train_prices = total_prices[:300]
print('training dataset:', len(train_features))
print('feature example:', train_features[0:1])
print('mean of feature 0:', np.asarray(train_features[:, 0]).mean())
# Keep 100 samples for validation
valid_features = scale(total_features[300:400])
valid_prices = total_prices[300:400]
print('validation dataset:', len(valid_features))
# Keep remaining samples as test set
test_features = scale(total_features[400:])
test_prices = total_prices[400:]
print('test dataset:', len(test_features))
```
#### Linear Regression Model
```
w = tf.Variable(tf.truncated_normal([12, 1], mean=0.0, stddev=1.0, dtype=tf.float64))
b = tf.Variable(tf.zeros(1, dtype = tf.float64))
def calc(x, y):
'''
linear regression model that return (prediction, L2_error)
'''
# Returns predictions and error
predictions = tf.add(b, tf.matmul(x, w))
error = tf.reduce_mean(tf.square(y - predictions))
return [ predictions, error ]
y, cost = calc(train_features, train_prices)
# augment the model with the regularisation
L1_regu_cost = tf.add(cost, tf.reduce_mean(tf.abs(w)))
L2_regu_cost = tf.add(cost, tf.reduce_mean(tf.square(w)))
def train(cost, learning_rate=0.025, epochs=300):
'''
run the cost computation graph with gradient descent optimizer.
'''
errors = [[], []]
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
with sess:
sess.run(init)
for i in range(epochs):
sess.run(optimizer)
errors[0].append(i+1)
errors[1].append(sess.run(cost))
# Get the parameters of the linear regression model.
print('weights:\n', sess.run(w))
print('bias:', sess.run(b))
valid_cost = calc(valid_features, valid_prices)[1]
print('Validation error =', sess.run(valid_cost), '\n')
test_cost = calc(test_features, test_prices)[1]
print('Test error =', sess.run(test_cost), '\n')
return errors
# with L1 regularisation, the testing error is slightly improved, i.e. 75 vs. 76
# similarly with L1 regularisation, the L2 regularisation improves the testing error to 75 as well.
epochs = 500
errors_lr_005 = train(cost, learning_rate=0.005, epochs=epochs)
errors_lr_025 = train(cost, learning_rate=0.025, epochs=epochs)
ax = plt.subplot(111)
plt.plot(errors_lr_005[1], color='green', label='learning rate 0.005')
plt.plot(errors_lr_025[1], color='red', label='learning rate 0.025')
#ax = plt.plot(errors[0], errors[1], 'r--')
plt.axis([0, epochs, 0, 200])
plt.title('Evolution of L2 errors along each epoch')
plt.xlabel('epoch')
plt.ylabel('L2 error')
_ = plt.legend(loc='best')
plt.show()
```
The **higher** the learning rate, the **faster** the model converges. But if the learning rate is too large, it can also prevent the model from converging.
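The regularised costs defined above (`L1_regu_cost`, `L2_regu_cost`) can be trained with the same `train` helper, since the optimizer simply minimises whatever graph node it is given. A minimal sketch, reusing only names already defined in this notebook:
```
# Train with the L1- and L2-regularised objectives and compare the error curves.
errors_l1 = train(L1_regu_cost, learning_rate=0.025, epochs=epochs)
errors_l2 = train(L2_regu_cost, learning_rate=0.025, epochs=epochs)

plt.plot(errors_l1[1], color='green', label='L1 regularised')
plt.plot(errors_l2[1], color='red', label='L2 regularised')
plt.axis([0, epochs, 0, 200])
plt.xlabel('epoch')
plt.ylabel('L2 error')
_ = plt.legend(loc='best')
plt.show()
```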
**This notebook is an exercise in the [AI Ethics](https://www.kaggle.com/learn/ai-ethics) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/ai-fairness).**
---
In the tutorial, you learned about different ways of measuring fairness of a machine learning model. In this exercise, you'll train a few models to approve (or deny) credit card applications and analyze fairness. Don't worry if you're new to coding: this exercise assumes no programming knowledge.
# Introduction
We work with a **synthetic** dataset of information submitted by credit card applicants.
To load and preview the data, run the next code cell. When the code finishes running, you should see a message saying the data was successfully loaded, along with a preview of the first five rows of the data.
```
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex4 import *
import pandas as pd
from sklearn.model_selection import train_test_split
# Load the data, separate features from target
data = pd.read_csv("../input/synthetic-credit-card-approval/synthetic_credit_card_approval.csv")
X = data.drop(["Target"], axis=1)
y = data["Target"]
# Break into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
# Preview the data
print("Data successfully loaded!\n")
X_train.head()
```
The dataset contains, for each applicant:
- income (in the `Income` column),
- the number of children (in the `Num_Children` column),
- whether the applicant owns a car (in the `Own_Car` column, the value is `1` if the applicant owns a car, and `0` otherwise), and
- whether the applicant owns a home (in the `Own_Housing` column, the value is `1` if the applicant owns a home, and `0` otherwise)
When evaluating fairness, we'll check how the model performs for users in different groups, as identified by the `Group` column:
- The `Group` column breaks the users into two groups (where each group corresponds to either `0` or `1`).
- For instance, you can think of the column as breaking the users into two different races, ethnicities, or gender groupings. If the column breaks users into different ethnicities, `0` could correspond to a non-Hispanic user, while `1` corresponds to a Hispanic user.
Run the next code cell without changes to train a simple model to approve or deny individuals for a credit card. The output shows the performance of the model.
```
from sklearn import tree
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt
# Train a model and make predictions
model_baseline = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_baseline.fit(X_train, y_train)
preds_baseline = model_baseline.predict(X_test)
# Function to plot confusion matrix
def plot_confusion_matrix(estimator, X, y_true, y_pred, display_labels=["Deny", "Approve"],
include_values=True, xticks_rotation='horizontal', values_format='',
normalize=None, cmap=plt.cm.Blues):
cm = confusion_matrix(y_true, y_pred, normalize=normalize)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=display_labels)
return cm, disp.plot(include_values=include_values, cmap=cmap, xticks_rotation=xticks_rotation,
values_format=values_format)
# Function to evaluate the fairness of the model
def get_stats(X, y, model, group_one, preds):
y_zero, preds_zero, X_zero = y[group_one==False], preds[group_one==False], X[group_one==False]
y_one, preds_one, X_one = y[group_one], preds[group_one], X[group_one]
print("Total approvals:", preds.sum())
print("Group A:", preds_zero.sum(), "({}% of approvals)".format(round(preds_zero.sum()/sum(preds)*100, 2)))
print("Group B:", preds_one.sum(), "({}% of approvals)".format(round(preds_one.sum()/sum(preds)*100, 2)))
print("\nOverall accuracy: {}%".format(round((preds==y).sum()/len(y)*100, 2)))
print("Group A: {}%".format(round((preds_zero==y_zero).sum()/len(y_zero)*100, 2)))
print("Group B: {}%".format(round((preds_one==y_one).sum()/len(y_one)*100, 2)))
cm_zero, disp_zero = plot_confusion_matrix(model, X_zero, y_zero, preds_zero)
disp_zero.ax_.set_title("Group A")
cm_one, disp_one = plot_confusion_matrix(model, X_one, y_one, preds_one)
disp_one.ax_.set_title("Group B")
print("\nSensitivity / True positive rate:")
print("Group A: {}%".format(round(cm_zero[1,1] / cm_zero[1].sum()*100, 2)))
print("Group B: {}%".format(round(cm_one[1,1] / cm_one[1].sum()*100, 2)))
# Evaluate the model
get_stats(X_test, y_test, model_baseline, X_test["Group"]==1, preds_baseline)
```
The confusion matrices above show how the model performs on some test data. We also print additional information (calculated from the confusion matrices) to assess fairness of the model. For instance,
- The model approved 38246 people for a credit card. Of these individuals, 8028 belonged to Group A, and 30218 belonged to Group B.
- The model is 94.56% accurate for Group A, and 95.02% accurate for Group B. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the accuracy is (39723+7528)/(39723+500+2219+7528).
- The true positive rate (TPR) for Group A is 77.23%, and the TPR for Group B is 98.03%. These percentages can be calculated directly from the confusion matrix; for instance, for Group A, the TPR is 7528/(7528+2219).
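As a small sanity check on where these numbers come from, the accuracy and TPR can be recomputed directly from a confusion matrix; the sketch below plugs in the Group A values quoted above:
```
import numpy as np

# Group A confusion matrix (rows = true label, columns = predicted label)
cm_group_a = np.array([[39723, 500],
                       [2219, 7528]])

accuracy = cm_group_a.diagonal().sum() / cm_group_a.sum()   # -> ~0.9456
tpr = cm_group_a[1, 1] / cm_group_a[1].sum()                # -> ~0.7723

print("Accuracy: {:.2f}%".format(accuracy * 100))
print("TPR: {:.2f}%".format(tpr * 100))
```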
# 1) Varieties of fairness
Consider three different types of fairness covered in the tutorial:
- **Demographic parity**: Which group has an unfair advantage, with more representation in the group of approved applicants? (Roughly 50% of applicants are from Group A, and 50% of applicants are from Group B.)
- **Equal accuracy**: Which group has an unfair advantage, where applicants are more likely to be correctly classified?
- **Equal opportunity**: Which group has an unfair advantage, with a higher true positive rate?
```
# Check your answer (Run this code cell to get credit!)
q_1.check()
```
Run the next code cell without changes to visualize the model.
```
def visualize_model(model, feature_names, class_names=["Deny", "Approve"], impurity=False):
plot_list = tree.plot_tree(model, feature_names=feature_names, class_names=class_names, impurity=impurity)
[process_plot_item(item) for item in plot_list]
def process_plot_item(item):
split_string = item.get_text().split("\n")
if split_string[0].startswith("samples"):
item.set_text(split_string[-1])
else:
item.set_text(split_string[0])
plt.figure(figsize=(20, 6))
plot_list = visualize_model(model_baseline, feature_names=X_train.columns)
```
The flowchart shows how the model makes decisions:
- `Group <= 0.5` checks what group the applicant belongs to: if the applicant belongs to Group A, then `Group <= 0.5` is true.
- Entries like `Income <= 80210.5` check the applicant's income.
To follow the flow chart, we start at the top and trace a path depending on the details of the applicant. If the condition is true at a split, then we move down and to the left branch. If it is false, then we move to the right branch.
For instance, consider an applicant in Group B, who has an income of 75k. Then,
- We start at the top of the flow chart. The applicant has an income of 75k, so `Income <= 80210.5` is true, and we move to the left.
- Next, we check the income again. Since `Income <= 71909.5` is false, we move to the right.
- The last thing to check is what group the applicant belongs to. The applicant belongs to Group B, so `Group <= 0.5` is false, and we move to the right, where the model has decided to approve the applicant.
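To see concretely how group membership alone can change the outcome, the sketch below picks a Group B applicant from the test set whose income falls in the 71909.5-80210.5 branch discussed above, flips only the `Group` value, and compares the predictions (it assumes at least one such applicant exists in the test set):
```
import pandas as pd

# Group B applicant in the income branch where the tree checks Group
example_b = X_test[(X_test["Group"] == 1) &
                   (X_test["Income"].between(72000, 80000))].iloc[[0]]
example_a = example_b.copy()
example_a["Group"] = 0  # same applicant, but now in Group A

pair = pd.concat([example_b, example_a])
print(model_baseline.predict(pair))  # 1 = approve, 0 = deny
```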
# 2) Understand the baseline model
Based on the visualization, how can you explain one source of unfairness in the model?
**Hint**: Consider the example applicant, but change the group membership from Group B to Group A (leaving all other characteristics the same). Is this slightly different applicant approved or denied by the model?
```
# Check your answer (Run this code cell to get credit!)
q_2.check()
```
Next, you decide to remove group membership from the training data and train a new model. Do you think this will make the model treat the groups more equally?
Run the next code cell to see how this new **group unaware** model performs.
```
# Create new dataset with group membership removed
X_train_unaware = X_train.drop(["Group"],axis=1)
X_test_unaware = X_test.drop(["Group"],axis=1)
# Train new model on new dataset
model_unaware = tree.DecisionTreeClassifier(random_state=0, max_depth=3)
model_unaware.fit(X_train_unaware, y_train)
# Evaluate the model
preds_unaware = model_unaware.predict(X_test_unaware)
get_stats(X_test_unaware, y_test, model_unaware, X_test["Group"]==1, preds_unaware)
```
# 3) Varieties of fairness, part 2
How does this model compare to the first model you trained, when you consider **demographic parity**, **equal accuracy**, and **equal opportunity**? Once you have an answer, run the next code cell.
```
# Check your answer (Run this code cell to get credit!)
q_3.check()
```
You decide to train a third potential model, this time with the goal of having each group have even representation in the group of approved applicants. (This is an implementation of group thresholds, which you can optionally read more about [here](https://pair-code.github.io/what-if-tool/ai-fairness.html).)
Run the next code cell without changes to evaluate this new model.
```
# Change the value of zero_threshold to hit the objective
zero_threshold = 0.11
one_threshold = 0.99
# Evaluate the model
test_probs = model_unaware.predict_proba(X_test_unaware)[:,1]
preds_approval = (((test_probs>zero_threshold)*1)*[X_test["Group"]==0] + ((test_probs>one_threshold)*1)*[X_test["Group"]==1])[0]
get_stats(X_test, y_test, model_unaware, X_test["Group"]==1, preds_approval)
```
# 4) Varieties of fairness, part 3
How does this final model compare to the previous models, when you consider **demographic parity**, **equal accuracy**, and **equal opportunity**?
```
# Check your answer (Run this code cell to get credit!)
q_4.check()
```
This is only a short exercise to explore different types of fairness, and to illustrate the tradeoff that can occur when you optimize for one type of fairness over another. We have focused on model training here, but in practice, to really mitigate bias, or to make ML systems fair, we need to take a close look at every step in the process, from data collection to releasing a final product to users.
For instance, if you take a close look at the data, you'll notice that on average, individuals from Group B tend to have higher income than individuals from Group A, and are also more likely to own a home or a car. Knowing this will prove invaluable to deciding what fairness criterion you should use, and to inform ways to achieve fairness. (*For instance, it would likely be a bad approach to train the model to get equal accuracy for each group without first removing the historical bias in the data.*)
In this course, we intentionally avoid taking an opinionated stance on how exactly to minimize bias and ensure fairness in specific projects. This is because the correct answers continue to evolve, since AI fairness is an active area of research. This lesson was a hands-on introduction to the topic, and you can continue your learning by reading blog posts from the [Partnership on AI](https://www.partnershiponai.org/research-lander/) or by following conferences like the [ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)](https://facctconference.org/).
# Keep going
Continue to **[learn how to use model cards](https://www.kaggle.com/var0101/model-cards)** to make machine learning models transparent to large audiences.
<a href="https://colab.research.google.com/github/yukinaga/lecture_pytorch/blob/master/lecture4/cnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Implementing a CNN
We implement a convolutional neural network (CNN) with PyTorch.
A CNN itself can be built simply by adding convolutional layers, but this time we also implement data augmentation and dropout.
## CIFAR-10
We load CIFAR-10 using torchvision.datasets.
CIFAR-10 is a dataset of roughly 60,000 labeled images.
The code below loads CIFAR-10 and displays 25 randomly chosen images.
```
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
import matplotlib.pyplot as plt
cifar10_data = CIFAR10(root="./data",
train=False,download=True,
transform=transforms.ToTensor())
cifar10_classes = np.array(["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"])
print("データの数:", len(cifar10_data))
n_image = 25 # 表示する画像の数
cifar10_loader = DataLoader(cifar10_data, batch_size=n_image, shuffle=True)
dataiter = iter(cifar10_loader) # イテレータ
images, labels = dataiter.next() # 最初のバッチを取り出す
plt.figure(figsize=(10,10)) # 画像の表示サイズ
for i in range(n_image):
plt.subplot(5,5,i+1)
plt.imshow(np.transpose(images[i], (1, 2, 0))) # チャンネルを一番後ろに
label = cifar10_classes[labels[i]]
plt.title(label)
plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False) # ラベルとメモリを非表示に
plt.show()
```
## Data augmentation
We perform data augmentation with torchvision.transforms.
Here, we apply random rotations of -30 to 30 degrees and random rescaling by a factor of 0.8 to 1.2 to the CIFAR-10 images.
These transformations are applied randomly to the original images each time a batch is drawn.
```
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
import matplotlib.pyplot as plt
transform = transforms.Compose([transforms.RandomAffine([-30, 30], scale=(0.8, 1.2)), # rotation and rescaling
transforms.ToTensor()])
cifar10_data = CIFAR10(root="./data",
train=False,download=True,
transform=transform)
cifar10_classes = np.array(["airplane", "automobile", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"])
print("データの数:", len(cifar10_data))
n_image = 25 # 表示する画像の数
cifar10_loader = DataLoader(cifar10_data, batch_size=n_image, shuffle=True)
dataiter = iter(cifar10_loader) # イテレータ
images, labels = dataiter.next() # 最初のバッチを取り出す
plt.figure(figsize=(10,10)) # 画像の表示サイズ
for i in range(n_image):
plt.subplot(5,5,i+1)
plt.imshow(np.transpose(images[i], (1, 2, 0))) # チャンネルを一番後ろに
label = cifar10_classes[labels[i]]
plt.title(label)
plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False) # ラベルとメモリを非表示に
plt.show()
```
## Data preprocessing
From here on we implement the CNN.
For data augmentation we apply rotation, rescaling, and horizontal flips.
We also standardize the inputs to mean 0 and standard deviation 1 so that training proceeds efficiently.
We set up separate DataLoaders for the training and test data; since we do not use mini-batches for the test data, its batch size is set to the number of samples in the test set.
```
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
affine = transforms.RandomAffine([-15, 15], scale=(0.8, 1.2)) # rotation and rescaling
flip = transforms.RandomHorizontalFlip(p=0.5) # random horizontal flip
normalize = transforms.Normalize((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)) # normalize to mean 0, standard deviation 1
to_tensor = transforms.ToTensor()
transform_train = transforms.Compose([affine, flip, to_tensor, normalize])
transform_test = transforms.Compose([to_tensor, normalize])
cifar10_train = CIFAR10("./data", train=True, download=True, transform=transform_train)
cifar10_test = CIFAR10("./data", train=False, download=True, transform=transform_test)
# DataLoader setup
batch_size = 64
train_loader = DataLoader(cifar10_train, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(cifar10_test, batch_size=len(cifar10_test), shuffle=False)
```
## Building the model
We build the model as a class that inherits from the `nn.Module` module.
This time we introduce dropout to suppress overfitting.
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # convolutional layer: (input channels, number of filters, filter size)
self.pool = nn.MaxPool2d(2, 2) # pooling layer: (kernel size, stride)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 256) # fully connected layer
self.dropout = nn.Dropout(p=0.5) # dropout: (p = dropout rate)
self.fc2 = nn.Linear(256, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16*5*5)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = self.fc2(x)
return x
net = Net()
net.cuda() # move to GPU
print(net)
```
## Training
We train the model.
Using the DataLoaders, we draw mini-batches for training and evaluation.
Here we do not use mini-batches at evaluation time; instead we compute the loss on the entire test set at once.
Training takes some time, so select GPU as the hardware accelerator under Edit -> Notebook settings.
```
from torch import optim
# cross-entropy loss function
loss_fnc = nn.CrossEntropyLoss()
# optimization algorithm
optimizer = optim.Adam(net.parameters())
# loss logs
record_loss_train = []
record_loss_test = []
# training
x_test, t_test = next(iter(test_loader))
x_test, t_test = x_test.cuda(), t_test.cuda()
for i in range(20): # train for 20 epochs
net.train() # training mode
loss_train = 0
for j, (x, t) in enumerate(train_loader): # fetch a mini-batch (x, t)
x, t = x.cuda(), t.cuda() # move to GPU
y = net(x)
loss = loss_fnc(y, t)
loss_train += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_train /= j+1
record_loss_train.append(loss_train)
net.eval() # evaluation mode
y_test = net(x_test)
loss_test = loss_fnc(y_test, t_test).item()
record_loss_test.append(loss_test)
if i%1 == 0:
print("Epoch:", i, "Loss_Train:", loss_train, "Loss_Test:", loss_test)
```
## Loss curves
We plot the evolution of the loss on the training and test data.
```
import matplotlib.pyplot as plt
plt.plot(range(len(record_loss_train)), record_loss_train, label="Train")
plt.plot(range(len(record_loss_test)), record_loss_test, label="Test")
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Error")
plt.show()
```
## Accuracy
To gauge the model's performance, we measure the accuracy on the test data.
```
correct = 0
total = 0
net.eval() # evaluation mode
for i, (x, t) in enumerate(test_loader):
x, t = x.cuda(), t.cuda() # move to GPU
y = net(x)
correct += (y.argmax(1) == t).sum().item()
total += len(x)
print("Accuracy:", str(correct/total*100) + "%")
```
## Prediction with the trained model
Let's use the trained model.
We feed it an image and confirm that the model works.
```
cifar10_loader = DataLoader(cifar10_test, batch_size=1, shuffle=True)
dataiter = iter(cifar10_loader)
images, labels = next(dataiter) # fetch a single sample
plt.imshow(np.transpose(images[0], (1, 2, 0))) # move the channel axis to the end
plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False) # hide labels and ticks
plt.show()
net.eval() # evaluation mode
x, t = images.cuda(), labels.cuda() # move to GPU
y = net(x)
print("Ground truth:", cifar10_classes[labels[0]],
"Prediction:", cifar10_classes[y.argmax().item()])
```
Week 5 Notebook: Building a Deep Learning Model
===============================================================
Now, we'll look at a deep learning model based on low-level track features.
```
import tensorflow.keras as keras
import numpy as np
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
import uproot
import tensorflow
import yaml
with open('definitions.yml') as file:
# The FullLoader parameter handles the conversion from YAML
# scalar values to Python the dictionary format
definitions = yaml.load(file, Loader=yaml.FullLoader)
features = definitions['features']
spectators = definitions['spectators']
labels = definitions['labels']
nfeatures = definitions['nfeatures']
nspectators = definitions['nspectators']
nlabels = definitions['nlabels']
ntracks = definitions['ntracks']
```
## Data Generators
A quick aside on data generators. Training on large datasets is a key component of many deep learning approaches (and especially in high energy physics), and these datasets often no longer fit in memory, so it is important to write a data generator that can automatically fetch data.
Here we modify one from: https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
```
from DataGenerator import DataGenerator
help(DataGenerator)
# load training and validation generators
train_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/train/ntuple_merged_10.root']
val_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/train/ntuple_merged_11.root']
train_generator = DataGenerator(train_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=False,
remove_unlabeled=True, max_entry=8000)
val_generator = DataGenerator(val_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=False,
remove_unlabeled=True, max_entry=2000)
```
## Test Data Generator
Note that the track array has a different "shape." There are also fewer samples than the requested `batch_size=1024` because we remove unlabeled samples.
```
X, y = train_generator[1]
print(X.shape)
print(y.shape)
```
Note this generator can be optimized further (storing the data file locally, etc.). It's important to note that I/O is often a bottleneck for training big networks.
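For reference, the general pattern such a generator follows is a `keras.utils.Sequence` subclass; the sketch below is a simplified, generic version (not the actual `DataGenerator` used here, which additionally handles remote ROOT files, label cleaning, and spectators):
```
import numpy as np
import tensorflow.keras as keras

class SimpleGenerator(keras.utils.Sequence):
    """Minimal batch generator: serves (X, y) slices so the full arrays never need to be loaded as one batch."""

    def __init__(self, X, y, batch_size=1024):
        self.X, self.y = X, y
        self.batch_size = batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.X) / self.batch_size))

    def __getitem__(self, index):
        # slice out one batch on demand
        sl = slice(index * self.batch_size, (index + 1) * self.batch_size)
        return self.X[sl], self.y[sl]
```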
## Fully Connected Neural Network Classifier
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Flatten
import tensorflow.keras.backend as K
# define dense keras model
inputs = Input(shape=(ntracks, nfeatures,), name='input')
x = BatchNormalization(name='bn_1')(inputs)
x = Flatten(name='flatten_1')(x)
x = Dense(64, name='dense_1', activation='relu')(x)
x = Dense(32, name='dense_2', activation='relu')(x)
x = Dense(32, name='dense_3', activation='relu')(x)
outputs = Dense(nlabels, name='output', activation='softmax')(x)
keras_model_dense = Model(inputs=inputs, outputs=outputs)
keras_model_dense.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(keras_model_dense.summary())
# define callbacks
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
early_stopping = EarlyStopping(monitor='val_loss', patience=5)
reduce_lr = ReduceLROnPlateau(patience=5, factor=0.5)
model_checkpoint = ModelCheckpoint('keras_model_dense_best.h5', monitor='val_loss', save_best_only=True)
callbacks = [early_stopping, model_checkpoint, reduce_lr]
# fit keras model
history_dense = keras_model_dense.fit(train_generator,
validation_data=val_generator,
steps_per_epoch=len(train_generator),
validation_steps=len(val_generator),
max_queue_size=5,
epochs=20,
shuffle=False,
callbacks=callbacks,
verbose=0)
# reload best weights
keras_model_dense.load_weights('keras_model_dense_best.h5')
plt.figure()
plt.plot(history_dense.history['loss'], label='Loss')
plt.plot(history_dense.history['val_loss'], label='Val. loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
```
## Deep Sets Classifier
This model uses the `Dense` layer of Keras, but really it's more like the Deep Sets architecture applied to jets, the so-called Particle-flow network approach{cite:p}`Komiske:2018cqr,NIPS2017_6931`.
We are applying the same fully connected neural network to each track.
Then the `GlobalAveragePooling1D` layer sums over the tracks (actually it takes the mean).
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, BatchNormalization, GlobalAveragePooling1D
import tensorflow.keras.backend as K
# define Deep Sets model with Dense Keras layer
inputs = Input(shape=(ntracks, nfeatures,), name='input')
x = BatchNormalization(name='bn_1')(inputs)
x = Dense(64, name='dense_1', activation='relu')(x)
x = Dense(32, name='dense_2', activation='relu')(x)
x = Dense(32, name='dense_3', activation='relu')(x)
# sum over tracks
x = GlobalAveragePooling1D(name='pool_1')(x)
x = Dense(100, name='dense_4', activation='relu')(x)
outputs = Dense(nlabels, name='output', activation='softmax')(x)
keras_model_deepset = Model(inputs=inputs, outputs=outputs)
keras_model_deepset.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(keras_model_deepset.summary())
# define callbacks
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
early_stopping = EarlyStopping(monitor='val_loss', patience=5)
reduce_lr = ReduceLROnPlateau(patience=5, factor=0.5)
model_checkpoint = ModelCheckpoint('keras_model_deepset_best.h5', monitor='val_loss', save_best_only=True)
callbacks = [early_stopping, model_checkpoint, reduce_lr]
# fit keras model
history_deepset = keras_model_deepset.fit(train_generator,
validation_data=val_generator,
steps_per_epoch=len(train_generator),
validation_steps=len(val_generator),
max_queue_size=5,
epochs=20,
shuffle=False,
callbacks=callbacks,
verbose=0)
# reload best weights
keras_model_deepset.load_weights('keras_model_deepset_best.h5')
plt.figure()
plt.plot(history_deepset.history['loss'], label='Loss')
plt.plot(history_deepset.history['val_loss'], label='Val. loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
# load testing file
test_files = ['root://eospublic.cern.ch//eos/opendata/cms/datascience/HiggsToBBNtupleProducerTool/HiggsToBBNTuple_HiggsToBB_QCD_RunII_13TeV_MC/test/ntuple_merged_0.root']
test_generator = DataGenerator(test_files, features, labels, spectators, batch_size=1024, n_dim=ntracks,
remove_mass_pt_window=True,
remove_unlabeled=True)
# run model inference on test data set
predict_array_dense = []
predict_array_deepset = []
label_array_test = []
for t in test_generator:
label_array_test.append(t[1])
predict_array_dense.append(keras_model_dense.predict(t[0]))
predict_array_deepset.append(keras_model_deepset.predict(t[0]))
predict_array_dense = np.concatenate(predict_array_dense, axis=0)
predict_array_deepset = np.concatenate(predict_array_deepset, axis=0)
label_array_test = np.concatenate(label_array_test, axis=0)
# create ROC curves
fpr_dense, tpr_dense, threshold_dense = roc_curve(label_array_test[:,1], predict_array_dense[:,1])
fpr_deepset, tpr_deepset, threshold_deepset = roc_curve(label_array_test[:,1], predict_array_deepset[:,1])
# plot ROC curves
plt.figure()
plt.plot(tpr_dense, fpr_dense, lw=2.5, label="Dense, AUC = {:.1f}%".format(auc(fpr_dense, tpr_dense)*100))
plt.plot(tpr_deepset, fpr_deepset, lw=2.5, label="Deep Sets, AUC = {:.1f}%".format(auc(fpr_deepset, tpr_deepset)*100))
plt.xlabel(r'True positive rate')
plt.ylabel(r'False positive rate')
plt.semilogy()
plt.ylim(0.001, 1)
plt.xlim(0, 1)
plt.grid(True)
plt.legend(loc='upper left')
plt.show()
```
We see that the more structurally aware Deep Sets model does better than a simple fully connected neural network approach.
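Beyond the AUC values in the legend, a common working-point comparison is the background mistag rate (false positive rate) at a fixed signal efficiency; a quick sketch using the ROC arrays computed above:
```
# False positive rate at a 50% true positive rate working point
wp = 0.5
fpr_dense_wp = fpr_dense[np.searchsorted(tpr_dense, wp)]
fpr_deepset_wp = fpr_deepset[np.searchsorted(tpr_deepset, wp)]
print("FPR at TPR=50%: dense = {:.4f}, deep sets = {:.4f}".format(fpr_dense_wp, fpr_deepset_wp))
```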
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm
from astropy.table import Table
import astropy.units as u
import os
# Using `batman` to create & fit fake transit
import batman
# Using astropy BLS and scipy curve_fit to fit transit
from astropy.timeseries import BoxLeastSquares
# Using emcee & corner to find and plot (e, w) distribution with MCMC
import emcee
import corner
# Using dynesty to do the same with nested sampling
import dynesty
import scipy.constants as c
# And importing `photoeccentric`
import photoeccentric as ph
%load_ext autoreload
%autoreload 2
# pandas display option
pd.set_option('display.float_format', lambda x: '%.5f' % x)
```
1. Choose 3 planets with e drawn from a Gaussian distribution with (0.5, 0.2)
2. Fit them using photoeccentric, and find the fit e and w
3. Implement the Van Eylen equation to find the underlying e distribution on a (mean e, sigma) surface
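Step 3 amounts to a hierarchical inference over the population parameters $(\mu, \sigma)$ of a Gaussian eccentricity distribution. Schematically, the (unnormalized) likelihood evaluated on the grid later in this notebook is

$$\mathcal{L}(\mu, \sigma) \propto \prod_{j=1}^{N_{\rm planets}} \frac{1}{N_{\rm samp}} \sum_{i=1}^{N_{\rm samp}} \mathcal{N}\left(e_{j,i} \mid \mu, \sigma\right),$$

where $e_{j,i}$ are the posterior eccentricity samples for planet $j$; the grid code below drops the constant normalization factors.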
```
true_es = np.random.normal(loc=0.5, scale=0.2, size=3)
true_ws = np.random.uniform(low=-90, high=270, size=3)
true_es
nwalk = 64
nsteps = 1000
ndiscard = 500
arrlen = (nsteps-ndiscard)*nwalk
smass_kg = 1.9885e30 # Solar mass (kg)
srad_m = 696.34e6 # Solar radius (m)
muirhead_data = pd.read_csv("datafiles/Muirhead2013_isochrones/muirhead_data_incmissing.txt", sep=" ")
# ALL Kepler planets from exo archive
planets = pd.read_csv('datafiles/exoplanetarchive/cumulative_kois.csv')
# Take the Kepler planet archive entries for the planets in Muirhead et al. 2013 sample
spectplanets = pd.read_csv('spectplanets.csv')
# Kepler-Gaia Data
kpgaia = Table.read('datafiles/Kepler-Gaia/kepler_dr2_4arcsec.fits', format='fits').to_pandas();
# Kepler-Gaia data for only the objects in our sample
muirhead_gaia = pd.read_csv("muirhead_gaia.csv")
# Combined spectroscopy data + Gaia/Kepler data for our sample
muirhead_comb = pd.read_csv('muirhead_comb.csv')
# Only targets from table above with published luminosities from Gaia
muirhead_comb_lums = pd.read_csv('muirhead_comb_lums.csv')
# Kepler ID for Kepler-1582 b
kepid = 9710326
kepname = spectplanets.loc[spectplanets['kepid'] == kepid].kepler_name.values[0]
kp737b = muirhead_comb.loc[muirhead_comb['KIC'] == kepid]
KOI = 947
isodf = pd.read_csv("datafiles/isochrones/iso_lums_" + str(kepid) + ".csv")
mstar = isodf["mstar"].mean()
mstar_err = isodf["mstar"].std()
rstar = isodf["radius"].mean()
rstar_err = isodf["radius"].std()
rho_star, mass, radius = ph.find_density_dist_symmetric(mstar, mstar_err, rstar, rstar_err, arrlen)
period, period_uerr, period_lerr, rprs, rprs_uerr, rprs_lerr, a_arc, a_uerr_arc, a_lerr_arc, i, e_arc, w_arc = ph.planet_params_from_archive(spectplanets, kepname)
# We calculate a_rs to ensure that it's consistent with the spec/Gaia stellar density.
a_rs = ph.calc_a(period*86400.0, mstar*smass_kg, rstar*srad_m)
a_rs_err = np.mean((a_uerr_arc, a_lerr_arc))
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('Period (Days): ', period, 'Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
inc = 89.99
```
## First Planet
```
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-300, 300, cadence)
# Define e and w, calculate flux from transit model
e = true_es[0]
w = true_ws[0]
e
w
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w, 0.0)
# Adding some gaussian noise on the order of Kepler noise (by eyeball)
noise = np.random.normal(0,0.0001,len(time))
nflux = flux+noise
flux_err = np.array([0.0001]*len(nflux))
plt.errorbar(time, nflux, yerr=flux_err, fmt='o')
plt.xlabel('Time')
plt.ylabel('Flux')
plt.xlim(-0.5, 0.5)
plt.axvline(0.0, c='r', label='Transit midpoint')
plt.legend()
transitmpt = 0
midpoints = np.unique(np.sort(np.concatenate((np.arange(transitmpt, time[0], -period), np.arange(transitmpt, time[-1], period)))))
```
## Fitting the transit
```
# Remove Out of Transit Data
ttime = []
tflux = []
tflux_err = []
for idx in range(len(midpoints)): # use idx so the inclination i is not overwritten
m, b, t1bjd, t1, fnorm, fe1 = ph.do_linfit(time, nflux, flux_err, midpoints[idx], 11, 5)
ttime.append(t1bjd)
tflux.append(fnorm)
tflux_err.append(fe1)
ttime = np.array(ttime).flatten()
tflux = np.array(tflux).flatten()
tflux_err = np.array(tflux_err).flatten()
tflux = np.nan_to_num(tflux, nan=1.0)
tflux_err = np.nan_to_num(tflux_err, nan=np.nanmedian(tflux_err))
priortransform = [3., 27., 1., 0., 15., 64., 2., 88., 0.1, transitmpt]
nbuffer = 11
dres, perDists, rpDists, arsDists, incDists, t0Dist = ph.fit_keplc_dynesty(KOI, midpoints, ttime, tflux, tflux_err, priortransform, arrlen, nbuffer, spectplanets, muirhead_comb)
#perDists
np.savetxt('S1periods.csv', perDists, delimiter=',')
np.savetxt('S1rprs.csv', rpDists, delimiter=',')
np.savetxt('S1ars.csv', arsDists, delimiter=',')
np.savetxt('S1inc.csv', incDists, delimiter=',')
np.savetxt('S1t0.csv', t0Dist, delimiter=',')
t0Dists = t0Dist
per_f = ph.mode(perDists)
rprs_f = ph.mode(rpDists)
a_f = ph.mode(arsDists)
i_f = ph.mode(incDists)
t0_f = ph.mode(t0Dists)
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(ttime, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(ttime, tflux, yerr=tflux_err, c='blue', alpha=0.5, label='Original LC')
plt.plot(ttime, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
```
### Determining T14 and T23
```
pdist = perDists
rdist = rpDists
adist = arsDists
idist = incDists
t0dist = t0Dists
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
```
# Get $g$
```
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
g_mean
g_sigma
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
```
## Second Planet
```
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-300, 300, cadence)
i
# Define e and w, calculate flux from transit model
e = true_es[1]
w = true_ws[1]
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w, 0.0)
inc
# Adding some gaussian noise on the order of Kepler noise (by eyeball)
noise = np.random.normal(0,0.0001,len(time))
nflux = flux+noise
flux_err = np.array([0.0001]*len(nflux))
plt.errorbar(time, flux, yerr=flux_err, fmt='o')
plt.xlabel('Time')
plt.ylabel('Flux')
plt.xlim(-0.5, 0.5)
plt.axvline(0.0, c='r', label='Transit midpoint')
plt.legend()
transitmpt = 0
midpoints = np.unique(np.sort(np.concatenate((np.arange(transitmpt, time[0], -period), np.arange(transitmpt, time[-1], period)))))
```
## Fitting the transit
```
# Remove Out of Transit Data
ttime = []
tflux = []
tflux_err = []
for idx in range(len(midpoints)): # use idx so the inclination i is not overwritten
m, b, t1bjd, t1, fnorm, fe1 = ph.do_linfit(time, nflux, flux_err, midpoints[idx], 11, 5)
ttime.append(t1bjd)
tflux.append(fnorm)
tflux_err.append(fe1)
ttime = np.array(ttime).flatten()
tflux = np.array(tflux).flatten()
tflux_err = np.array(tflux_err).flatten()
tflux = np.nan_to_num(tflux, nan=1.0)
tflux_err = np.nan_to_num(tflux_err, nan=np.nanmedian(tflux_err))
priortransform = [3., 27., 1., 0., 15., 64., 2., 88., 0.1, transitmpt]
nbuffer = 11
ms, bs, timesBJD, timesPhase, fluxNorm, fluxErrs, perDists, rpDists, arsDists, incDists, t0Dist = ph.fit_keplc_dynesty(KOI, midpoints, ttime, tflux, tflux_err, priortransform, arrlen, nbuffer, spectplanets, muirhead_comb)
perDists
np.savetxt('Speriods.csv', perDists, delimiter=',')
np.savetxt('Srprs.csv', rpDists, delimiter=',')
np.savetxt('Sars.csv', arsDists, delimiter=',')
np.savetxt('Sinc.csv', incDists, delimiter=',')
np.savetxt('St0.csv', t0Dist, delimiter=',')
t0Dists = t0Dist
per_f = ph.mode(perDists)
rprs_f = ph.mode(rpDists)
a_f = ph.mode(arsDists)
i_f = ph.mode(incDists)
t0_f = ph.mode(t0Dists)
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(ttime, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(ttime, tflux, yerr=tflux_err, c='blue', alpha=0.5, label='Original LC')
plt.plot(ttime, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
```
### Determining T14 and T23
```
pdist = perDists
rdist = rpDists
adist = arsDists
idist = incDists
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
```
# Get $g$
```
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
g_mean
g_sigma
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
```
## Third Planet
```
# 30 minute cadence
cadence = 0.02142857142857143
time = np.arange(-300, 300, cadence)
# Define e and w, calculate flux from transit model
e = true_es[2]
w = true_ws[2]
flux = ph.integratedlc(time, period, rprs, a_rs, e, i, w, 0.0)
# Adding some gaussian noise on the order of Kepler noise (by eyeball)
noise = np.random.normal(0,0.0001,len(time))
nflux = flux+noise
flux_err = np.array([0.0001]*len(nflux))
plt.errorbar(time, nflux, yerr=flux_err, fmt='o')
plt.xlabel('Time')
plt.ylabel('Flux')
plt.xlim(-0.5, 0.5)
plt.axvline(0.0, c='r', label='Transit midpoint')
plt.legend()
transitmpt = 0
midpoints = np.unique(np.sort(np.concatenate((np.arange(transitmpt, time[0], -period), np.arange(transitmpt, time[-1], period)))))
```
## Fitting the transit
```
# Remove Out of Transit Data
ttime = []
tflux = []
tflux_err = []
for idx in range(len(midpoints)): # use idx so the inclination i is not overwritten
m, b, t1bjd, t1, fnorm, fe1 = ph.do_linfit(time, nflux, flux_err, midpoints[idx], 11, 5)
ttime.append(t1bjd)
tflux.append(fnorm)
tflux_err.append(fe1)
ttime = np.array(ttime).flatten()
tflux = np.array(tflux).flatten()
tflux_err = np.array(tflux_err).flatten()
tflux = np.nan_to_num(tflux, nan=1.0)
tflux_err = np.nan_to_num(tflux_err, nan=np.nanmedian(tflux_err))
priortransform = [3., 27., 1., 0., 15., 64., 2., 88., 0.1, transitmpt]
nbuffer = 11
ms, bs, timesBJD, timesPhase, fluxNorm, fluxErrs, perDists, rpDists, arsDists, incDists, t0Dist = ph.fit_keplc_dynesty(KOI, midpoints, ttime, tflux, tflux_err, priortransform, arrlen, nbuffer, spectplanets, muirhead_comb)
perDists
np.savetxt('Speriods.csv', perDists, delimiter=',')
np.savetxt('Srprs.csv', rpDists, delimiter=',')
np.savetxt('Sars.csv', arsDists, delimiter=',')
np.savetxt('Sinc.csv', incDists, delimiter=',')
np.savetxt('St0.csv', t0Dist, delimiter=',')
t0Dists = t0Dist
per_f = ph.mode(perDists)
rprs_f = ph.mode(rpDists)
a_f = ph.mode(arsDists)
i_f = ph.mode(incDists)
t0_f = ph.mode(t0Dists)
# Create a light curve with the fit parameters
fit1 = ph.integratedlc_fitter(ttime, per_f, rprs_f, a_f, i_f, t0_f)
plt.errorbar(ttime, tflux, yerr=tflux_err, c='blue', alpha=0.5, label='Original LC')
plt.plot(ttime, fit1, c='red', alpha=1.0, label='Fit LC')
#plt.xlim(-0.1, 0.1)
plt.legend()
print('Stellar mass (Msun): ', mstar, 'Stellar radius (Rsun): ', rstar)
print('\n')
print('Input params:')
print('Rp/Rs: ', rprs)
print('a/Rs: ', a_rs)
print('i (deg): ', i)
print('\n')
print('Fit params:')
print('Rp/Rs: ', rprs_f)
print('a/Rs: ', a_f)
print('i (deg): ', i_f)
```
### Determining T14 and T23
```
pdist = perDists
rdist = rpDists
adist = arsDists
idist = incDists
T14dist = ph.get_T14(pdist, rdist, adist, idist)
T14errs = ph.get_sigmas(T14dist)
T23dist = ph.get_T23(pdist, rdist, adist, idist)
T23errs = ph.get_sigmas(T23dist)
```
# Get $g$
```
gs, rho_c = ph.get_g_distribution(rho_star, pdist, rdist, T14dist, T23dist)
g_mean = ph.mode(gs)
g_sigma = np.mean(np.abs(ph.get_sigmas(gs)))
g_mean
g_sigma
#Guesses
w_guess = 0.0
e_guess = 0.0
solnx = (w_guess, e_guess)
pos = solnx + 1e-4 * np.random.randn(32, 2)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, ph.log_probability, args=(g_mean, g_sigma), threads=4)
sampler.run_mcmc(pos, 5000, progress=True);
labels = ["w", "e"]
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
fig = corner.corner(flat_samples, labels=labels, title_kwargs={"fontsize": 12}, truths=[w, e], plot_contours=True)
```
Probability Grid
```
mumesh = np.linspace(0, 1, 100)
sigmesh = np.linspace(0.01, 0.3, 100)
mus, sigmas = np.meshgrid(mumesh, sigmesh)
# Vet 100 values from each e distribution
fit_es = [np.random.normal(loc=0.2, scale=0.05, size=100), np.random.normal(loc=0.3, scale=0.05, size=100), np.random.normal(loc=0.4, scale=0.05, size=100)]
fit_ws = [np.random.normal(loc=90, scale=10, size=100), np.random.normal(loc=-90, scale=10, size=100), np.random.normal(loc=0.0, scale=10, size=100)]
import scipy
# for each planet
# Planet 1: true_es[0]
pethetasum1 = np.zeros((100,100))
# Calculating p(obs|theta) for 10,000 grid points, for N posterior values for 1 planet
for n1 in tqdm(range(len(mus))): # For each grid point x
for n2 in range(len(mus[0])): # For each grid point y
mu_test = mus[n1][n2]
sig_test = sigmas[n1][n2] # x, y of grid point
for N in range(len(fit_es[0])): # For each posterior value (out of 100)
pethetasum1[n1][n2] += scipy.stats.norm.pdf(fit_es[0][N], loc=mu_test, scale=sig_test)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(pethetasum1, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
# Planet 2: true_es[1]
pethetasum2 = np.zeros((100,100))
# Calculating p(obs|theta) for 10,000 grid points, for N posterior values for 1 planet
for n1 in tqdm(range(len(mus))): # For each grid point x
for n2 in range(len(mus[0])): # For each grid point y
mu_test = mus[n1][n2]
sig_test = sigmas[n1][n2] # x, y of grid point
for N in range(len(fit_es[1])): # For each posterior value (out of 100)
pethetasum2[n1][n2] += scipy.stats.norm.pdf(fit_es[1][N], loc=mu_test, scale=sig_test)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(pethetasum2, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
# Planet 3: true_es[2]
pethetasum3 = np.zeros((100,100))
# Calculating p(obs|theta) for 10,000 grid points, for N posterior values for 1 planet
for n1 in tqdm(range(len(mus))): # For each grid point x
for n2 in range(len(mus[0])): # For each grid point y
mu_test = mus[n1][n2]
sig_test = sigmas[n1][n2] # x, y of grid point
for N in range(len(fit_es[2])): # For each posterior value (out of 100)
pethetasum3[n1][n2] += scipy.stats.norm.pdf(fit_es[2][N], loc=mu_test, scale=sig_test)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(pethetasum3, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
P = pethetasum1*pethetasum2*pethetasum3
P = P/np.sqrt(100*100)
fig, ax = plt.subplots(figsize=(6,6))
ax.imshow(P, extent=[0, 1, 0.01, 0.3], aspect = 'auto')
ax.set_xlabel('mean eccentricity')
ax.set_ylabel('sigma')
```
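To read off the most probable population parameters, one can simply take the grid point that maximizes `P`; a small sketch using the `mus`/`sigmas` meshgrid defined above:
```
# Map the maximum of the joint grid posterior back to (mean e, sigma)
best_idx = np.unravel_index(np.argmax(P), P.shape)
print("MAP estimate: mean e = {:.3f}, sigma = {:.3f}".format(mus[best_idx], sigmas[best_idx]))
```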
# Import Dependencies
```
import warnings
warnings.filterwarnings('ignore')
import keras
import matplotlib.pyplot as plt
```
## Define Types
```
from typing import Tuple
ImageShape = Tuple[int, int]
GrayScaleImageShape = Tuple[int, int, int]
```
# MNIST Sandbox Baseline Example
This sandbox example is meant mostly to establish a few baselines for model performance to compare against, and also to get the basic Keras neural network architecture set up. I split the training and testing data and then one-hot encode the targets (one column per target, so ten columns after encoding).
```
from keras.datasets import mnist
import matplotlib.pyplot as plt
from typing import Tuple
import numpy as np
Dataset = Tuple[np.ndarray, np.ndarray]
#download mnist data and split into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(f"The shape of X_train is {X_train.shape}")
print(f"The shape of y_train is {y_train.shape}")
print(f"The shape of X_test is {X_test.shape}")
print(f"The shape of y_test is {y_test.shape} - some example targets: {y_test[:5]}")
mnist_image_shape: ImageShape = X_train.shape[1:]
print(mnist_image_shape)
from keras.utils import to_categorical
OneHotEncodedTarget = np.ndarray
Categories = int
encoded_y_train: OneHotEncodedTarget = to_categorical(y_train)
encoded_y_test: OneHotEncodedTarget = to_categorical(y_test)
print(f"One-hot encoding y_train {y_train.shape} -> {encoded_y_train.shape}")
print(f"One-hot encoding y_test {y_test.shape} -> {encoded_y_test.shape}")
K: Categories = encoded_y_test.shape[1]
```
# Vanilla CNN Implementation
Build a vanilla CNN implementation, with two convolutional layers, 64 and 32 filters each, with kernel size of `3 x 3`. Then the values are flattened and fed into the final softmax classification dense layer for predictions.
```
from keras.models import Sequential, Model
from keras.layers import Dense, Conv2D, Flatten, Input
from tensorflow.python.framework.ops import Tensor
import warnings
warnings.filterwarnings('ignore')
# define model architecture and hyperparameters
NUM_FILTERS_L1 = 64
NUM_FILTERS_L2 = 32
KERNEL_SIZE = 3
# the images are 28 x 28 (pixel size) x 1 (grayscale - if RGB, then 3)
input_dims: GrayScaleImageShape = (28,28,1)
def build_vanilla_cnn(filters_layer1:int, filters_layer2:int, kernel_size:int, input_dims: GrayScaleImageShape)-> Model:
inputs: Tensor = Input(shape=input_dims)
x: Tensor = Conv2D(filters=filters_layer1, kernel_size=kernel_size, activation='relu')(inputs)
x: Tensor = Conv2D(filters=filters_layer2, kernel_size=kernel_size, activation='relu')(x)
x: Tensor = Flatten()(x)
predictions = Dense(K, activation="softmax")(x)
print(predictions)
#compile model using accuracy to measure model performance
model: Model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
return model
model: Model = build_vanilla_cnn(NUM_FILTERS_L1, NUM_FILTERS_L2, KERNEL_SIZE, input_dims)
```
## Helper Function to Expand Tensor Dimensions By 1
```
X_train.reshape((60000,1,28,28))
def expand_tensor_shape(X_train: np.ndarray)-> np.ndarray:
new_shape: Tuple = X_train.shape + (1,)
# new_tensor = X_train.reshape(new_shape).reshape((-1,1,28,28))
new_tensor = X_train.reshape(new_shape)
print(f"Expanding shape from {X_train.shape} to {new_tensor.shape}")
return new_tensor
X_train_expanded: np.ndarray = expand_tensor_shape(X_train)
X_test_expanded: np.ndarray = expand_tensor_shape(X_test)
# train model and retrieve history
# from keras.callbacks import History
# history: History = model.fit(X_train_expanded, encoded_y_train,
# validation_data=(X_test_expanded, encoded_y_test), epochs=2, batch_size=2058)
```
## Global Average Pooling Layer
The output of a convolutional layer is a 4-D tensor: `batch size x height x width x number of filters` with Keras' channels-last default (or `batch size x number of filters x height x width` with channels-first). The GAP layer takes the average over the height/width axes and returns a vector of length equal to the number of filters.
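As an aside, Keras also ships a built-in layer for this; a minimal sketch (equivalent in spirit to the hand-rolled `Lambda` used below, shapes assumed channels-last) could be:
```
from keras.models import Model
from keras.layers import Input, Conv2D, Dense, GlobalAveragePooling2D

# GlobalAveragePooling2D reduces (batch, height, width, filters) to (batch, filters)
inp = Input(shape=(28, 28, 1))
feat = Conv2D(filters=32, kernel_size=5, activation='relu')(inp)   # (batch, 24, 24, 32)
gap = GlobalAveragePooling2D()(feat)                               # (batch, 32)
out = Dense(10, activation="softmax")(gap)
gap_sketch_model = Model(inputs=inp, outputs=out)
gap_sketch_model.summary()
```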
```
from keras import backend as K
np.reshape(X_train_expanded, (-1,1,28,28)).shape
from keras.models import Sequential, Model
from keras.layers import Dense, Conv2D, Flatten, Input, MaxPool2D, Layer, Lambda
from tensorflow.python.framework.ops import Tensor
def global_average_pooling(x: Layer):
    # average over the spatial axes (channels-first convention: batch, filters, height, width)
    return K.mean(x, axis = (2,3))
def global_average_pooling_shape(input_shape):
    # return the dimensions corresponding to batch size and number of filters
    return (input_shape[0], input_shape[-1])
def build_global_average_pooling_layer(pooling_function, output_shape):
    return Lambda(pooling_function, output_shape=output_shape)
inputs: Tensor = Input(shape=(28,28,1))
x: Tensor = Conv2D(filters=32, kernel_size=5, activation='relu')(inputs)
# x: Tensor = MaxPool2D()(x)
# x: Tensor = Conv2D(filters=64, kernel_size=5, activation='relu')(x)
x: Tensor = Lambda(lambda x: K.mean(x, axis=(1,2)), output_shape=global_average_pooling_shape)(x)
# x: Tensor = Dense(128, activation="relu")(x)
predictions: Tensor = Dense(10, activation="softmax")(x)
model: Model = Model(inputs=inputs, outputs=predictions)
model.summary()
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
from keras.callbacks import History
history: History = model.fit(X_train_expanded, encoded_y_train,
validation_data=(X_test_expanded, encoded_y_test), epochs=100, batch_size=5126)
```
## Save the Class Activation Model Weights
```
import cv2
from keras.layers import Layer, Lambda
def global_average_pooling(x: Layer):
return K.mean(x, axis = (2,3))
def global_average_pooling_shape(input_shape):
# return only the first two dimensions (batch size and number of filters)
return input_shape[0:2]
def build_global_average_pooling_layer(pooling_function, output_shape):
    return Lambda(pooling_function, output_shape=output_shape)
def get_output_layer(model, layer_name):
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])
layer = layer_dict[layer_name]
return layer
# persist model
save_filepath: str = "basic_cam.h5"
model.save(save_filepath)
first_image = X_train[5]
first_image = first_image.reshape(28,28,1)
img = np.array(first_image).reshape(1, 28, 28, 1)
img.shape
plt.imshow(img.reshape((28,28)))
#img = np.array([np.transpose(np.float32(first_image), (2, 0, 1))])
```
## Load Basic Model
(since the model files are so large, they cannot be pushed to Github- just email me for a copy of the `.h5` model files)
```
from keras.models import load_model
model = load_model("basic_cam.h5")
dense_10_layer: Layer = model.layers[-1]
dense_10_weights = dense_10_layer.get_weights()[0]
print(f"Dense 10 weights: {dense_10_weights.shape}")
dense_128_layer: Layer = model.layers[-2]
dense_128_weights = dense_128_layer.get_weights()[0]
print(f"Dense 128 weights: {dense_128_weights.shape}")
```
## Map the Final Class Activation Map Back to the Original Input Shapes and Visualize
```
import keras.backend as K
class_weights = model.layers[-1].get_weights()[0]
final_conv_layer = get_output_layer(model, "conv2d_1")
get_output = K.function([model.layers[0].input], [final_conv_layer.output, model.layers[-1].output])
[conv_outputs, predictions] = get_output([img])
conv_outputs = conv_outputs[0,:,:,:]
print(conv_outputs.shape)
print(class_weights.shape)
def make_cam(conv_outputs, class_weights, original_shape, target_class):
cam = np.zeros(dtype=np.float32, shape = conv_outputs.shape[0:2])
for i, w in enumerate(class_weights[:, target_class]):
cam += w * conv_outputs[:,:,i]
cam /= np.max(cam)
    return cv2.resize(cam, original_shape)
def make_heatmap(cam):
heatmap = cv2.applyColorMap(np.uint8(255*cam), cv2.COLORMAP_JET)
heatmap[np.where(cam < 0.1)] = 0
return heatmap
cam = make_cam(conv_outputs, class_weights, original_shape=(28,28), target_class=2)
false_cam = make_cam(conv_outputs, class_weights, original_shape=(28,28), target_class=4)
false2_cam = make_cam(conv_outputs, class_weights, original_shape=(28,28), target_class=5)
heatmap = make_heatmap(cam)
false_heatmap = make_heatmap(false_cam)
false2_heatmap = make_heatmap(false2_cam)
new_img = heatmap*0.5 + img
final_img = new_img.reshape((28,28,3))
# f, axarr = plt.subplots(2,1)
# axarr[0,0].imshow(heatmap)
# axarr[0,1].imshow(img.reshape(28,28))
imgs = [heatmap, img.reshape(28,28)]
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=(15, 15))
axes[0].imshow(heatmap)
axes[0].set_title("Activation Map for 2")
axes[1].imshow(false_heatmap)
axes[1].set_title("Activation Map for 4")
axes[2].imshow(false2_heatmap)
axes[2].set_title("Activation Map for 5")
axes[3].imshow(img.reshape((28,28)))
axes[3].set_title("True Image")
import matplotlib.pyplot as plt
plt.imshow(img.reshape((28,28)))
cam /= np.max(cam)
import keras.backend as K
from tensorflow.python.framework.ops import Tensor
dense_weights = model.layers[-2].get_weights()[0]
softmax_weights = model.layers[-1].get_weights()[0]
dense_weights.shape
softmax_weights.shape
final_conv_layer = get_output_layer(model, "conv2d_28")
final_conv_layer.output
import keras.backend as K
from tensorflow.python.framework.ops import Tensor
class_weights: np.ndarray = model.layers[-1].get_weights()[0] # class weights is of shape 32 x 10 (number of filter outputs x classes)
print(f"Class weights is shape {class_weights.shape}")
final_conv_layer: Conv2D = get_output_layer(model, "conv2d_28")
input_tensor: Tensor = model.layers[0].input
final_conv_layer_output: Tensor = final_conv_layer.output
model_class_weights: Tensor = model.layers[-1].output
# K.function is a function factory that accepts arbitrary input layers and outputs arbitrary output layers
get_output = K.function([input_tensor], [final_conv_layer_output, model_class_weights])
[conv_outputs, predictions] = get_output([img])
print("Conv2D output shape:", conv_outputs.shape) # should match the shape of the outputs from the Conv2D layer
print("Predictions:", predictions.shape)
np.argmax(predictions)
conv_outputs = conv_outputs[0,:,:,:]
# [conv_outputs, predictions] = get_output([img])
# conv_outputs = conv_outputs[0, :, :, :]
class_weights.shape
# Create the class activation map
class_activation_map = np.zeros(dtype=np.float32, shape=conv_outputs.shape[1:3])
class_activation_map.shape
#Reshape to the network input shape (3, w, h).
img = np.array([np.transpose(np.float32(original_img), (2, 0, 1))])
#Get the 512 input weights to the softmax.
class_weights = model.layers[-1].get_weights()[0]
final_conv_layer = get_output_layer(model, "conv5_3")
get_output = K.function([model.layers[0].input], \
[final_conv_layer.output,
model.layers[-1].output])
[conv_outputs, predictions] = get_output([img])
conv_outputs = conv_outputs[0, :, :, :]
#Create the class activation map.
cam = np.zeros(dtype = np.float32, shape = conv_outputs.shape[1:3])
target_class = 1
for i, w in enumerate(class_weights[:, target_class]):
cam += w * conv_outputs[i, :, :]
```
# Everything Below This Section Is Doodling
```
image_path =
original_img = cv2.imread(image_path, 1)
width, height, _ = original_image.shape
def build_vanilla_cnn(filters_layer1:int, filters_layer2:int, kernel_size:int, input_dims: GrayScaleImageShape)-> Model:
inputs: Tensor = Input(shape=input_dims)
x: Tensor = Conv2D(filters=filters_layer1, kernel_size=kernel_size, activation='relu')(inputs)
x: Tensor = Conv2D(filters=filters_layer2, kernel_size=kernel_size, activation='relu')(x)
x: Tensor = build_global_average_pooling_layer(global_average_pooling, )
predictions = Dense(K, activation="softmax")(x)
print(predictions)
#compile model using accuracy to measure model performance
model: Model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=['accuracy'])
return model
from keras.layers import merge
def build_model(input_dim):
inputs = Input(shape=input_dim)
# ATTENTION PART STARTS HERE
attention_probs = Dense(input_dim, activation='softmax', name='attention_vec')(inputs)
attention_mul = merge([inputs, attention_probs], output_shape=32, name='attention_mul', mode='mul')
# ATTENTION PART FINISHES HERE
attention_mul = Dense(64)(attention_mul)
output = Dense(1, activation='sigmoid')(attention_mul)
model = Model(input=[inputs], output=output)
return model
inputs = Input(shape=input_dims)
attention_probs = Dense(input_dims, activation='softmax', name='attention_vec')(inputs)
```
## Compile and Fit Model
```
X_train.reshape((60000,1,28,28))
def expand_tensor_shape(X_train: np.ndarray)-> np.ndarray:
new_shape: Tuple = X_train.shape + (1,)
print(f"Expanding shape from {X_train.shape} to {new_shape}")
return X_train.reshape(new_shape)
X_train_expanded: np.ndarray = expand_tensor_shape(X_train)
X_test_expanded: np.ndarray = expand_tensor_shape(X_test)
```
# FEI Face Dataset
```
from PIL.JpegImagePlugin import JpegImageFile
image: JpegImageFile = load_img('1-01.jpg')
```
\title{Digital Latches with myHDL}
\author{Steven K Armour}
\maketitle
# Refs
@book{brown_vranesic_2014, place={New York, NY}, edition={3}, title={Fundamentals of digital logic with Verilog design}, publisher={McGraw-Hill}, author={Brown, Stephen and Vranesic, Zvonko G}, year={2014} },
@book{lameres_2017, title={Introduction to logic circuits & logic design with Verilog}, publisher={springer}, author={LaMeres, Brock J}, year={2017} }
# Acknowledgments
Author of **myHDL** [Jan Decaluwe](http://www.myhdl.org/users/jandecaluwe.html) and the author of the **myHDL Peeker** [XESS Corp.](https://github.com/xesscorp/myhdlpeek)
[**Draw.io**](https://www.draw.io/)
**Xilinx**
# Python Libraries Utilized
```
import numpy as np
import pandas as pd
from sympy import *
init_printing()
from myhdl import *
from myhdlpeek import *
import random
#python file of convenience tools. Should be located with this notebook
from sympy_myhdl_tools import *
```
# Latches vs Flip-Flops
Latches and Flip-Flops are both bistable logic circuit topologies: once loaded with a state, they hold that state until it is upset by a new state or a reset command. The difference between the two is that Flip-Flops are clock-controlled devices built upon Latches, whereas Latches are not clock dependent.
# SR-Latch
## Symbol and Internals
The Symbol for a SR-Latch and one representation of it's internals is shown below
<img style="float: center;" src="SRLatchSymbolInternal.jpg">
## Definition
## State Diagram
## myHDL SR-Latch Gate and Testing
Need help getting this latch working via combinational circuits: an AlwaysCombError is raised when an output signal is used as an argument in that same output signal's next-state assignment.
## myHDL SR-Latch Behavioral and Testing
```
def SRLatch(S_in, rst, Q_out, Qn_out):
@always_comb
def logic():
if S_in and rst==0:
Q_out.next=1
Qn_out.next=0
elif S_in==0 and rst:
Q_out.next=0
Qn_out.next=1
elif S_in and rst:
Q_out.next=0
Qn_out.next=0
return logic
S_in, rst, Q_out, Qn_out=[Signal(bool(0)) for _ in range(4)]
Peeker.clear()
Peeker(S_in, 'S_in'); Peeker(rst, 'rst')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=SRLatch(S_in=S_in, rst=rst, Q_out=Q_out, Qn_out=Qn_out)
inputs=[S_in, rst]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='SRLatch Behavioral simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
## myHDL SR-Latch Behavioral HDL Synthesis
```
toVerilog(SRLatch, S_in, rst, Q_out, Qn_out)
#toVHDL(SRLatch, S_in, rst, Q_out, Qn_out)
_=VerilogTextReader('SRLatch')
```
The following shows **Xilinx**'s _Vivado 2016.1_ RTL-generated schematic of our behavioral SRLatch from the synthesized Verilog code. We can see that the synthesized version is quite abstract compared to the gate-level internals shown earlier.
<img style="float: center;" src="SRLatchBehaviroalRTLSch.PNG">
# Gated SR-Latch
## myHDL SR-Latch Behavioral and Testing
```
def GSRLatch(S_in, rst, ena, Q_out, Qn_out):
@always_comb
def logic():
if ena:
if S_in and rst==0:
Q_out.next=1
Qn_out.next=0
elif S_in==0 and rst:
Q_out.next=0
Qn_out.next=1
elif S_in and rst:
Q_out.next=0
Qn_out.next=0
else:
pass
return logic
S_in, rst, ena, Q_out, Qn_out=[Signal(bool(0)) for _ in range(5)]
Peeker.clear()
Peeker(S_in, 'S_in'); Peeker(rst, 'rst'); Peeker(ena, 'ena')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=GSRLatch(S_in=S_in, rst=rst, ena=ena, Q_out=Q_out, Qn_out=Qn_out)
inputs=[S_in, rst, ena]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='GSRLatch Behavioral simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
## myHDL SR-Latch Behavioral HDL Synthesis
```
toVerilog(GSRLatch, S_in, rst, ena, Q_out, Qn_out)
#toVHDL(GSRLatch, S_in, rst,ena, Q_out, Qn_out)
_=VerilogTextReader('GSRLatch')
```
The following shows **Xilinx**'s _Vivado 2016.1_ RTL-generated schematic of our behavioral Gated SRLatch from the synthesized Verilog code. Again, the synthesized version is quite abstract compared to a gate-level view.
<img style="float: center;" src="GSRLatchBehaviroalRTLSch.PNG">
# D-Latch
## myHDL Behavioral D-Latch and Testing
```
def DLatch(D_in, ena, Q_out, Qn_out):
    #Normally Qn_out is not specified, since a NOT gate is so easily implemented
@always_comb
def logic():
if ena:
Q_out.next=D_in
Qn_out.next=not D_in
return logic
D_in, ena, Q_out, Qn_out=[Signal(bool(0)) for _ in range(4)]
Peeker.clear()
Peeker(D_in, 'D_in'); Peeker(ena, 'ena')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=DLatch(D_in=D_in, ena=ena, Q_out=Q_out, Qn_out=Qn_out)
inputs=[D_in, ena]
sim=Simulation(DUT, Combo_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=2*2**len(inputs), tock=True,
title='DLatch Behavioral simulation',
caption=f'after clock cycle {2**len(inputs)-1} ->random input')
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0, stop_time=2**len(inputs) -1))
```
## myHDL DLatch Behavioral HDL Synthesis
```
toVerilog(DLatch, D_in, ena, Q_out, Qn_out)
#toVHDL(DLatch,D_in, ena, Q_out, Qn_out)
_=VerilogTextReader('DLatch')
```
The following shows **Xilinx**'s _Vivado 2016.1_ RTL-generated schematic of our myHDL DLatch with an explicit $\bar{Q}$ output in the Verilog code. Note that because $\bar{Q}$ is not normally declared in HDL code, Vivado produced two RTL DLatches and used a NOT gate to account for the negated output.
<img style="float: center;" src="DLatchBehavioralRTLSch.PNG">
# Examples
# Multi-linear regression: how many variables?
[](https://github.com/eabarnes1010/course_objective_analysis/tree/main/code)
[](https://colab.research.google.com/github/eabarnes1010/course_objective_analysis/blob/main/code/minimum_corr_for_added_value.ipynb)
If I have two predictors $x_1$ and $x_2$, under what circumstances is the second one useful for predicting $y$?
```
#.............................................
# IMPORT STATEMENTS
#.............................................
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import importlib
from sklearn import linear_model
from sklearn import metrics
mpl.rcParams['figure.facecolor'] = 'white'
mpl.rcParams['figure.dpi']= 150
dpiFig = 300.
np.random.seed(300)
```
Let's start by creating two predictors, x1 and x2, and predictand y. x1 will be totally random, and the others will build upon that.
```
x1 = np.random.normal(0.,1.,size=100,)
print(np.shape(x1))
```
Now we create x2 as a weighted mix of x1 and independent noise. With $b = \sqrt{1-a^2}$, $x_2$ has unit variance and $\mathrm{corr}(x_1, x_2) \approx a = 0.8$.
```
a = 0.8
b = np.sqrt(1. - a**2)
x2 = []
# create x2 point by point as a mix of x1 and independent noise
for it in np.arange(0,100,1):
x2.append(a*x1[it] + b*np.random.normal(size=1))
x2 = np.asarray(x2)[:,0]
print(np.shape(x2))
```
Now let's make $y$, which is composed of pieces of x1, x2 and noise.
```
a = 0.3
b = np.sqrt(1. - a**2)
y = []
# create red-noise time series iteratively
for it in np.arange(0,100,1):
y.append(a*x1[it] + (.05)*x2[it] + b*np.random.normal(size=1))
y = np.asarray(y)[:,0]
print(np.shape(y))
```
We can calculate the correlations of the predictors and predictands just to confirm that they all have some relationship with one another.
```
c12 = np.corrcoef(x1,x2)[0,1]
c1y = np.corrcoef(x1,y)[0,1]
c2y = np.corrcoef(y,x2)[0,1]
print('corr(x1,x2) = ' + str(np.round(c12,3)))
print('corr(x1,y) = ' + str(np.round(c1y,3)))
print('corr(x2,y) = ' + str(np.round(c2y,3)))
```
### Theory
Based on theory, the minimum useful correlation between $x_2$ and $y$ is the following...
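In symbols: adding $x_2$ is only expected to help when $|\rho_{x_2 y}| > |\rho_{x_1 y}\,\rho_{x_1 x_2}|$, i.e. its correlation with $y$ must exceed the product computed in the cell below.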
```
minUseful = np.abs(c1y*c12)
print('minimum useful corr(x2,y) = ' + str(np.round(minUseful,3)))
```
Furthermore, we can show analytically that the variance explained when using x1 alone versus using x1 and x2 together is practically identical, since x2 doesn't appear to add additional information (i.e. |c2y| < minUseful).
```
#just using x1
R2 = c1y**2
print('theory: y variance explained by x1 = ' + str(np.round(R2,3)))
#using x1 and x2
R2 = (c1y**2 + c2y**2 - 2*c1y*c2y*c12)/(1-c12**2)
print('theory: y variance explained by x1 & x2 = ' + str(np.round(R2,3)))
```
### Actual fits
We can confirm the theory now through some fun examples where we actually fit y using x1 and x2. In fact, we see that the fits indeed give us exactly what is expected by theory.
```
# only x1 predictor
X = np.swapaxes([x1],1,0)
Y = np.swapaxes([y],1,0)
# with sklearn
regr = linear_model.LinearRegression()
regr.fit(X, Y)
R2_x1 = metrics.r2_score(Y,regr.predict(X))
print('y variance explained by x1 fit = ' + str(np.round(R2_x1,5)))
#---------------------------------------------
# both x1 and x2 predictors
X = np.swapaxes([x1,x2],1,0)
Y = np.swapaxes([y],1,0)
# with sklearn
regr = linear_model.LinearRegression()
regr.fit(X, Y)
R2_x12 = metrics.r2_score(Y,regr.predict(X))
print('y variance explained by x1 & x2 fit = ' + str(np.round(R2_x12,5)))
```
But what is going on here? Why is the $R^2$ slightly higher when we added x2? I thought theory said it shouldn't improve my variance explained _at all_? The catch is that an in-sample least-squares fit can never explain *less* variance when a predictor is added: x2 soaks up a little of the noise by chance, which is exactly the overfitting issue explored next.
## What about more predictors? (aka _overfitting_)
```
X = np.random.normal(0.,1.,size=(100,40))
Y = np.random.normal(0.,1.,size=100,)
rval = []
for n in np.arange(0,np.shape(X)[1]):
# with sklearn
regr = linear_model.LinearRegression()
regr.fit(X[:,0:n+1], Y)
R2 = metrics.r2_score(Y,regr.predict(X[:,0:n+1]))
rval.append(R2)
plt.figure(figsize=(8,6))
plt.plot(np.arange(0,np.shape(X)[1]),rval,'o-')
plt.xlabel('number of random predictors')
plt.ylabel('fraction variance explained')
plt.title('Variance Explained')
plt.show()
```
### Adjusted R$^2$
There is a great solution to this - known as the _adjusted $R^2$_. It is a measure of explained variance, but you are penalized (the number decreases) when too many predictors are used. The adjusted $R^2$ increases only if the new term improves the model more than would be expected by chance.
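In symbols (this is exactly what the helper function below implements), with $n$ samples and $p$ predictors:

$$ \bar{R}^2 = 1 - (1-R^2)\,\frac{n-1}{n-p-1} $$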
```
def adjustRsquared(r2,n,p):
adjustR2 = 1 - (1-r2)*(n-1)/(n-p-1)
return adjustR2
# only fitting with x1
p=1
n = len(x1)
adjustR2 = adjustRsquared(R2_x1,n,p)
print('fit with x1 only')
print(' R-squared = ' + str(np.round(R2_x1,3)) + ', Adjusted R-squared = ' + str(np.round(adjustR2,3)))
# fitting with x1 and x2
p = 2
n = len(x1)
adjustR2 = adjustRsquared(R2_x12,n,p)
print('fit with x1 and x2 only')
print(' R-squared = ' + str(np.round(R2_x12,3)) + ', Adjusted R-squared = ' + str(np.round(adjustR2,3)))
```
In our silly example above with 40 predictors, the adjusted R2 is the following...
```
n = len(Y)
p = np.arange(0,np.shape(X)[1]) + 1
adjustR2 = adjustRsquared(np.asarray(rval),n,p)
plt.figure(figsize=(8,6))
plt.axhline(y=0,color='gray')
plt.plot(np.arange(1,np.shape(X)[1]+1),rval,'o-', label='R2')
plt.plot(np.arange(1,np.shape(X)[1]+1),adjustR2,'o-',color='red', label='adjusted R2')
plt.xlabel('number of predictors')
plt.ylabel('fraction variance explained')
plt.legend()
plt.title('Adjusted R-squared')
plt.show()
```
### Significance of Adjusted $R^2$
To end, let's compute the adjusted R-squared many times for a lot of random data to get a feeling of the spread of possible adjusted R-squared values by chance alone.
```
rVec = np.zeros(shape=(40,500))
for nvar in (np.arange(1,np.shape(rVec)[0]+1)):
r = []
for n in np.arange(0,500):
X = np.random.normal(0.,1.,size=(100,nvar))
Y = np.random.normal(0.,1.,size=100,)
# with sklearn
regr = linear_model.LinearRegression()
        regr.fit(X, Y)  # use all nvar predictors (n is just the repetition index here)
        R2 = metrics.r2_score(Y, regr.predict(X))
r.append(R2)
rVec[nvar-1,:] = adjustRsquared(np.asarray(r),100,nvar)
pTop = np.percentile(rVec,97.5,axis=1)
pBot = np.percentile(rVec,2.5,axis=1)
plt.figure(figsize=(8,6))
plt.axhline(y=0,color='gray')
plt.plot(np.arange(1,np.shape(X)[1]+1),adjustR2,'o-',color='red', label='adjusted R2')
plt.fill_between(np.arange(1,len(p)+1), pBot, pTop,color='lightgray', label='confidence bounds')
plt.xlabel('number of predictors')
plt.ylabel('fraction variance explained')
plt.legend()
plt.title('Adjusted R2')
plt.ylim(-1,1)
plt.show()
```
# Workshop: Deep Learning 3
Outline
1. Regularization
2. Hand-Written Digits with Convolutional Neural Networks
3. Advanced Image Classification with Convolutional Neural Networks
Source: Deep Learning With Python, Part 1 - Chapter 4
## 1. Regularization
To prevent a model from learning misleading or irrelevant patterns found in the training data, the best solution is to get more training data. However, this is often out of our control.
Another approach is called - by now you should know that - regularization.
### 1.1. Reducing the network’s size
The simplest way to prevent overfitting is to reduce the size of the model: the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). Put another way: a network with more parameters can memorize more.
```
# Unfortunately, there is no closed form solution which gives us the best network size...
# So, we need to try out different models (or use grid search)
# Original Model
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(28 * 28,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
# Simpler Model
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
# Bigger Model
model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
#### You need to load data, compile the network and then train it (with validation/hold out set)
#### Then you plot the validation loss for all these combinations
```
<img src="res/img1.png"></img>
<img src="res/img2.png"></img>
```
# This shows us that the bigger model starts to overfit immediately..
```
Instead of manually searching for the best model architecture (i.e., hyperparameters) you can use a method called grid-search. However, we will not cover this in this lecture - but you can find a tutorial here:
https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/
Basically, the author combines Keras with scikit-learn's grid search module.
### 1.2. Adding weight regularization
1. L1 regularization
2. L2 regularization
#### 1.2.1 Adding L2 Regularization to the model
```
from keras import regularizers
model = models.Sequential()
# kernel_regularizer=regularizers.l2(0.001): adds 0.001 * sum(weight**2) of this layer's kernel to the loss
# you could use also: regularizers.l1(0.001) for L1 regularization
# Documentation: https://keras.io/api/layers/regularizers/
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
```
<img src="res/img3.png"></img>
### 1.2.3 Adding Dropout
Idea: Randomly drop out a number of (activation) nodes during training.
**Assume**: [0.2, 0.5, 1.3, 0.8, 1.1] is the output of a layer (after activation function).
Dropout sets randomly some of these weights to 0. For example: [0, 0.5, 1.3, 0, 1.1].
The *dropout rate* is the fraction of features that are zeroed out (usually between 0.2 and 0.5)
```
# Example Code
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
# Pass dropout rate!!!
model.add(layers.Dropout(0.5))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))
# Compile..
# Fit..
# Evaluate...
# Doc: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout
```
<img src="res/img4.png"></img>
### To recap, these are the most common ways to prevent overfitting in neural networks:
1. Get more training data.
2. Reduce the capacity of the network.
3. Add weight regularization.
4. Add dropout.
5. Data Augmentation (for image classification tasks; see the sketch below)
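Data augmentation is listed above but not demonstrated in this workshop. A minimal sketch using Keras' `ImageDataGenerator` (assuming an image tensor `X_train` of shape `(n, 28, 28, 1)` and one-hot labels `Y_train`, as prepared later for MNIST) could look like this:
```
from keras.preprocessing.image import ImageDataGenerator

# Randomly rotate, shift and zoom the training images on the fly
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.1)
datagen.fit(X_train)

# The generator is then passed to the training loop, e.g.:
# model.fit_generator(datagen.flow(X_train, Y_train, batch_size=128),
#                     steps_per_epoch=len(X_train) // 128, epochs=10)
```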
## 2 Gradient Descent
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.metrics import mean_squared_error
housing_data = fetch_california_housing()
features = pd.DataFrame(housing_data.data, columns=housing_data.feature_names)
target = pd.DataFrame(housing_data.target, columns=['Target'])
df = features.join(target)
X = df.MedInc
Y = df.Target
def gradient_descent(X, y, lr=0.05, iterations=10):
'''
Gradient Descent for a single feature
'''
m, b = 0.2, 0.2 # initial random parameters
log, mse = [], [] # lists to store learning process
N = len(X) # number of samples
    # MSE = 1/N * SUM (y_i - (m*x_i + b))^2
    # dMSE/dm = 1/N * SUM(-2 * x_i * (y_i - (m*x_i + b)))
    # dMSE/db = 1/N * SUM(-2 * (y_i - (m*x_i + b)))
for _ in range(iterations):
f = y - (m*X + b)
# Updating m and b
m -= lr * (-2 * X.dot(f).sum() / N)
b -= lr * (-2 * f.sum() / N)
log.append((m, b))
mse.append(mean_squared_error(y, (m*X + b)))
return m, b, log, mse
m, b, log, mse = gradient_descent(X, Y, lr=0.01, iterations=10)
(m, b)
# Analytical solution (compared to the gradient descent estimates above)
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(features["MedInc"].to_numpy().reshape(-1, 1), Y)
(reg.coef_, reg.intercept_)
```
##### Stochastic Gradient Descent
```
def stochastic_gradient_descent(X, y, lr=0.05, iterations=10, batch_size=10):
'''
Stochastic Gradient Descent for a single feature
'''
m, b = 0.5, 0.5 # initial parameters
log, mse = [], [] # lists to store learning process
for _ in range(iterations):
indexes = np.random.randint(0, len(X), batch_size) # random sample
Xs = np.take(X, indexes)
ys = np.take(y, indexes)
N = len(Xs)
f = ys - (m*Xs + b)
# Updating parameters m and b
m -= lr * (-2 * Xs.dot(f).sum() / N)
b -= lr * (-2 * f.sum() / N)
log.append((m, b))
mse.append(mean_squared_error(y, m*X+b))
return m, b, log, mse
m, b, log, mse = stochastic_gradient_descent(X, Y, lr=0.01, iterations=1000)
(m,b)
```
## 2. Using CNNs to Classify Hand-written Digits on MNIST Dataset
<img src="res/img5.png"></img>
```
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPool2D
from keras.utils import np_utils
# Load Data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Shape of data
print("X_train shape", X_train.shape)
print("y_train shape", y_train.shape)
print("X_test shape", X_test.shape)
print("y_test shape", y_test.shape)
# Flattening the images from the 28x28 pixels to 1D 784 pixels
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# normalizing the data to help with the training
X_train /= 255
X_test /= 255
# To Categorical (One-Hot Encoding)
n_classes = 10
print("Shape before one-hot encoding: ", y_train.shape)
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)
print("Shape after one-hot encoding: ", Y_train.shape)
# Let's build again a very boring neural network
model = Sequential()
# hidden layer
model.add(Dense(100, input_shape=(784,), activation='relu'))
# output layer
model.add(Dense(10, activation='softmax'))
# looking at the model summary
model.summary()
# Compile
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# Train (#### caution: the test set is used here as validation data - I was just lazy...)
model.fit(X_train, Y_train, batch_size=128, epochs=10, validation_data=(X_test, Y_test))
# new imports needed
from keras.layers import Conv2D, MaxPool2D, Flatten
# And now with a convolutional neural network
# Doc: https://keras.io/api/layers/convolution_layers/
# Load again data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# DONT Vectorize - keep grid structure
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# normalize
X_train /= 255
X_test /= 255
# Sequential Model
model = Sequential()
# Convolutional layer
# 2D convolutional data
# filters: number of kernels
# kernel size: (3, 3) pixel filter
# stride: (1,1) => the filter moves one pixel to the right, and one pixel down at the end of each row
# padding: "valid" => no padding => feature map is reduced
model.add(Conv2D(filters=25, kernel_size=(3,3), strides=(1,1), padding='valid', activation='relu', input_shape=(28,28,1)))
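# note: pool_size=(1,1) leaves the feature map unchanged; (2,2) is the usual choice to halve it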
model.add(MaxPool2D(pool_size=(1,1)))
# flatten output such that the "densly" connected network can be attached
model.add(Flatten())
# hidden layer
model.add(Dense(100, activation='relu'))
# output layer
model.add(Dense(10, activation='softmax'))
# compiling the sequential model
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# training the model for 10 epochs
model.fit(X_train, Y_train, batch_size=128, epochs=10, validation_data=(X_test, Y_test))
# More on Classification with CNNs
```
## 3. Advanced Image Classification with Deep Convolutional Neural Networks
<img src="res/img6.png">
```
# Imports
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPool2D, Flatten
from keras.utils import np_utils
# Load Data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# Keep grid structure with 32x32 pixels (times 3, due to the color channels)
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3)
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# Normalize
X_train /= 255
X_test /= 255
# One-Hot Encoding
n_classes = 10
print("Shape before one-hot encoding: ", y_train.shape)
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)
print("Shape after one-hot encoding: ", Y_train.shape)
# Create Model Object
model = Sequential()
# Add Conv. Layer
model.add(Conv2D(50, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', input_shape=(32, 32, 3)))
## What happens here?
# Stack 2. Conv. Layer
model.add(Conv2D(75, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# Stack 3. Conv. Layer
model.add(Conv2D(125, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# Flatten Output of Conv. Part such that we can add a densly connected network
model.add(Flatten())
# Add Hidden Layer and Dropout Reg.
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.3))
# Output Layer
model.add(Dense(10, activation='softmax'))
# Compile
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# Train
model.fit(X_train, Y_train, batch_size=128, epochs=2, validation_data=(X_test, Y_test))
```
# Inter-annotator agreement between the first 10 annotators of WS-353
Measured in Kappa and Rho:
- against the gold standard which is the mean of all annotators, as described in Hill et al 2014 (footnote 6)
- against each other
Using Cohen's kappa, which is defined for pairs of annotators, so I average across pairs of annotators.
```
%cd ~/NetBeansProjects/ExpLosion/
from notebooks.common_imports import *
from skll.metrics import kappa
from scipy.stats import spearmanr
from itertools import combinations
sns.timeseries.algo.bootstrap = my_bootstrap
sns.categorical.bootstrap = my_bootstrap
columns = 'Word 1,Word 2,Human (mean),1,2,3,4,5,6,7,8,9,10,11,12,13'.split(',')
df1 = pd.read_csv('../thesisgenerator/similarity-data/wordsim353/set1.csv')[columns]
df2 = pd.read_csv('../thesisgenerator/similarity-data/wordsim353/set2.csv')[columns]
df = pd.concat([df1, df2], ignore_index=True)
df_gold = pd.read_csv('../thesisgenerator/similarity-data/wordsim353/combined.csv',
names='w1 w2 sim'.split())
# had to remove trailing space from their files to make it parse with pandas
marco = pd.read_csv('../thesisgenerator/similarity-data/MEN/agreement/marcos-men-ratings.txt',
sep='\t', index_col=[0,1], names=['w1', 'w2', 'sim']).sort_index().convert_objects(convert_numeric=True)
elia = pd.read_csv('../thesisgenerator/similarity-data/MEN/agreement/elias-men-ratings.txt',
sep='\t', index_col=[0,1], names=['w1', 'w2', 'sim']).sort_index().convert_objects(convert_numeric=True)
df.head()
# Each index ``i`` returned is such that ``bins[i-1] <= x < bins[i]``
def bin(arr, nbins=2, debug=False):
bins = np.linspace(arr.min(), arr.max(), nbins+1)
if debug:
print('bins are', bins)
return np.digitize(arr, bins[1:-1])
bin(df['1'], nbins=5, debug=True)[:10]
bin(np.array([0, 2.1, 5.8, 7.9, 10]), debug=True) # 0 and 10 are needed to define the range of values
bin(np.array([0, 2.1, 5.8, 7.9, 10]), nbins=3, debug=True)
df.describe()
elia.describe()
```
# WS353: Kappa against each other/ against mean
```
bin_counts = range(2, 6)
# pair, bin count, kappa
kappas_pair = []
for name1, name2 in combinations(range(1,14), 2):
for b in bin_counts:
kappas_pair.append(['%d-%d'%(name1, name2),
b,
kappa(bin(df[str(name1)], b), bin(df[str(name2)], b))])
kappas_mean = []
for name in range(1, 14):
for b in bin_counts:
kappas_mean.append(['%d-m'%name,
b,
kappa(bin(df[str(name)], b), bin(df_gold.sim, b))])
kappas_men = [] # MEN data set- marco vs elia
for b in bin_counts:
kappas_men.append(['marco-elia',
b,
kappa(bin(marco.sim.values, b), bin(elia.sim.values, b))])
kappas1 = pd.DataFrame(kappas_pair, columns=['pair', 'bins', 'kappa'])
kappas1['kind'] = 'WS353-pairwise'
kappas2 = pd.DataFrame(kappas_mean, columns=['pair', 'bins', 'kappa'])
kappas2['kind'] = 'WS353-to mean'
kappas3 = pd.DataFrame(kappas_men, columns=['pair', 'bins', 'kappa'])
kappas3['kind'] = 'MEN'
kappas = pd.concat([kappas1, kappas2, kappas3], ignore_index=True)
kappas.head(3)
with sns.color_palette("cubehelix", 3):
ax = sns.tsplot(kappas, time='bins', unit='pair', condition='kind', value='kappa',
marker='s', linewidth=4);
ax.set_xticklabels(np.arange(kappas.bins.min(), kappas.bins.max() + 0.01, 0.5).astype(np.int))
sparsify_axis_labels(ax)
plt.savefig('plot-intrinsic-ws353-kappas.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
# sns.tsplot(kappas, time='bins', unit='pair', condition='kind', value='kappa',
sns.factorplot(data=kappas, x='bins', y='kappa', hue='kind', kind='box')
kappas.groupby(['bins', 'kind']).mean()
rhos_pair = []
for name1, name2 in combinations(range(1,14), 2):
rhos_pair.append(spearmanr(bin(df[str(name1)], b), bin(df[str(name2)], b))[0])
rhos_mean = []
for name in range(1,14):
rhos_mean.append(spearmanr(bin(df[str(name)], b), bin(df_gold.sim, b))[0])
sns.distplot(rhos_pair, label='pairwise');
# plt.axvline(np.mean(rhos_pair));
sns.distplot(rhos_mean, label='to mean');
# plt.axvline(np.mean(rhos_mean), color='g');
plt.legend(loc='upper left');
plt.savefig('plot-intrinsic-ws353-rhos.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
print(np.mean(rhos_pair), np.mean(rhos_mean))
# Fisher transform: http://stats.stackexchange.com/a/19825 and wiki article therein
np.tanh(np.arctanh(rhos_pair).mean()), np.tanh(np.arctanh(rhos_mean).mean())
from nltk.metrics.agreement import AnnotationTask
AnnotationTask(data=[
('coder1', 'obj1', 'label1'),
('coder1', 'obj2', 'label2'),
('coder2', 'obj1', 'label1'),
('coder2', 'obj2', 'label2'),
('coder3', 'obj1', 'label1'),
('coder3', 'obj2', 'label1'),
]).multi_kappa()
multikappas = []
for name in range(1, 14):
for b in bin_counts:
labels = bin(df[str(name)], b)
# gold_labels = bin(df_gold.sim, b)
for i, label in enumerate(labels):
multikappas.append(('coder%d'%name, 'wordpair%d'%i, label))
AnnotationTask(multikappas).multi_kappa()
# WTF nltk, you are great
```
# The same thing for the MEN dataset
Annotations by Marco and Elia
```
spearmanr(marco.sim, elia.sim) # they report .6845
```
This Jupyter notebook describes the architecture and mechanism of the Convolutional Neural Network (ConvNet) step by step from a theoretical point of view. Then, we implement the CNN code for a multi-class classification task using PyTorch. <br>
The notebook was implemented by <i>Nada Chaari</i>, PhD student at Istanbul Technical University (ITU). <br>
# Table of Contents:
1)Convolution layer
1-1) Input image
1-2) Filter
1-3) Output image
1-4) Multiple filters
1-5) One-layer of a convolutional neural network
2)Pooling layer
3)Fully connected layer
4)Softmax
5)Application of CNN using CIFAR dataset
5-1) Dataset
5-2) Load and normalize the CIFAR10 training and test datasets
5-3) Define a Convolutional Neural Network
5-4) Define a Loss function and optimizer
5-5) Train the CNN
5-6) Test the network on the test data
Sources used to build this Jupyter Notebook:
* https://towardsdatascience.com/understanding-images-with-skimage-python-b94d210afd23
* https://gombru.github.io/2018/05/23/cross_entropy_loss/
* https://medium.com/@toprak.mhmt/activation-functions-for-deep-learning-13d8b9b20e
* https://github.com/python-engineer/pytorchTutorial/blob/master/14_cnn.py
* https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524
* https://towardsdatascience.com/stochastic-gradient-descent-clearly-explained-53d239905d31
# CNN (ConvNet) definition
Convolutional Neural Network is a sequence of layers made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. CNNs have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer.
* There are 3 types of layers to build the ConvNet architectures:
* Convolution (CONV)
* Pooling (POOL)
* Fully connected (FC)
# 1) Convolution layer
## 1-1) Input image
* Image with color has three channels: red, green and blue, which can be represented as three 2d-matrices stacked over each other (one for each color), each having pixel values in the range 0 to 255.
<img src='https://miro.medium.com/max/1400/1*icINeO4H7UKe3NlU1fXqlA.jpeg' width='400' align="center">
## 1-2) Filter
<img src='https://miro.medium.com/max/933/1*7S266Kq-UCExS25iX_I_AQ.png' width='500' align="center">
* In the filter, the value '1' allows filtering for brightness,
* while '-1' highlights darkness,
* and '0' highlights grey.
* The convolution layer in the case of a ConvNet extracts features from the input image:
* choose a filter (kernel) of a certain dimension
* slide the filter from the top left to the right until we reach the bottom of the image.
* The convolution operation is an element-wise multiplication between the two matrices (filter and the part of the image) and an addition of the multiplication outputs.
* The final integer of this computation forms a single element of the output matrix.
* Stride: the number of pixels the filter moves horizontally and vertically at each step.
In the above example, the stride is equal to 1 (a small NumPy sketch of this sliding-window computation follows the note below).
Because the pixels on the edges are “touched” less by the filter than the pixels within the image, we apply padding.
* Padding: padding the image with zeros all around its border allows the filter to slide over the edges and keeps the output size equal to the input.
<img src='https://miro.medium.com/max/684/1*PBnmjdDqn-OF8JEyRgKm9Q.png' width='200' align="center">
<font color='red'> Important </font>: The goal of a convolutional neural network is to learn the values of filters. They are treated as parameters, which the network learns using backpropagation.
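To make the element-wise multiply-and-sum concrete, here is a minimal NumPy sketch of the sliding-window operation described above (the image and filter values are made up for illustration; like CNN libraries, it implements cross-correlation, i.e. the filter is not flipped):
```
import numpy as np

image = np.array([[1, 0, 2, 3],
                  [4, 6, 6, 8],
                  [3, 1, 1, 0],
                  [1, 2, 2, 4]])
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])   # simple vertical-edge filter

n, f, s = image.shape[0], kernel.shape[0], 1      # input size, filter size, stride
out_size = (n - f) // s + 1                       # no padding: (4 - 3)//1 + 1 = 2
output = np.zeros((out_size, out_size))
for i in range(out_size):
    for j in range(out_size):
        patch = image[i*s:i*s+f, j*s:j*s+f]       # region currently covered by the filter
        output[i, j] = np.sum(patch * kernel)     # element-wise multiply, then sum
print(output)                                     # [[-1. -4.]
                                                  #  [-1. -3.]]
```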
## 1-3) Output image
The size of the output image after applying the filter, knowing the filter size (f), stride (s), pad (p), and input size (n) is given as:
<img src='https://miro.medium.com/max/933/1*rOyHQ2teFXX5rIIFHwYDsg.png' width='400' align="center">
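In text form, for a square input of size $n$, filter size $f$, padding $p$ and stride $s$, the formula shown above is:

$$ n_{out} = \left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1 $$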
<img src='https://miro.medium.com/max/933/1*IBWQJSnW19WIYsObZcMTNg.png' width='500' align="center">
## 1-4) Multiple filters
We can generalize the application of one filter at a time to multiple filters to detect several different features. This is the concept for building convolutional neural networks. Each filter brings its own output and we stack them all together and create an output volume, such as:
<img src='https://miro.medium.com/max/933/1*ySaRmKSilLahyK2WxXC1bA.png' width='500' align="center">
The general formula of the output image can be written as:
<img src='https://miro.medium.com/max/933/1*pN09gs3rXeTh_EwED1d76Q.png' width='500' align="center">
where nc is the number of filters
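Written out, the output volume for $n_c$ filters is:

$$ \left(\left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1\right) \times \left(\left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1\right) \times n_c $$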
## 1-5) One-layer of a convolutional neural network
The final step that takes us to a convolutional neural layer is to add the bias and a non-linear function.
The goal of the activation function is to add non-linearity to the network so that it can model non-linear relationships. The most used is the Rectified Linear Unit (ReLU), defined as max(0, z), i.e. thresholding at zero: it sets all negative inputs to zero and keeps positive inputs unchanged. This leaves the size of the output volume unchanged ([4x4x1]).
<img src='https://miro.medium.com/max/933/1*LiBZo_FcnKWqoU7M3GRKbA.png' width='300' align="center">
<img src='https://miro.medium.com/max/933/1*EpeM8rTf5RFKYphZwYItkg.png' width='500' align="center">
The parameters involved in one layer are the elements forming the filters and the bias.
Example: if we have 10 filters that are of size 3x3x3 in one layer of a neural network. Each filter has 27 (3x3x3) + 1 bias => 28 parameters. Therefore, the total amount of parameters in the layer is 280 (10x28).
## Deep Convolutional Network
<img src='https://miro.medium.com/max/933/1*PT1sP_kCvdFEiJEsoKU88Q.png' width='600' align="center">
# 2) Pooling layer
Pooling layer performs a downsampling operation by progressively reducing the spatial size of the representation (input volume) to reduce the amount of learnable parameters and thus the computational cost; and to avoid overfitting by providing an abstracted form of the input. The Pooling Layer operates independently on every depth slice of the input and resizes it.
There are two types of pooling layers: max and average pooling.
* Max pooling: a filter which takes the largest element within the region it covers.
* Average pooling: a filter which retains the average of the values encountered within the region it covers.
Note: pooling layer does not have any parameters to learn.
<img src='https://miro.medium.com/max/933/1*voEBfjohEDVRK7RpNvxd-Q.png' width='300' align="center">
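For intuition, a small NumPy sketch of 2x2 max and average pooling on a made-up 4x4 feature map:
```
import numpy as np

feature_map = np.array([[1, 3, 2, 1],
                        [4, 6, 6, 8],
                        [3, 1, 1, 0],
                        [1, 2, 2, 4]])

# split the map into a 2x2 grid of 2x2 windows, then reduce each window
blocks = feature_map.reshape(2, 2, 2, 2).swapaxes(1, 2)
max_pooled = blocks.max(axis=(2, 3))    # [[6, 8], [3, 4]]
avg_pooled = blocks.mean(axis=(2, 3))   # [[3.5, 4.25], [1.75, 1.75]]
print(max_pooled)
print(avg_pooled)
```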
# 3) Fully connected layer
Fully connected layer (FC) is a layer where all the layer inputs are connected to all layer outputs. In a classification task, the FC part combines the features extracted by the convolutional layers and computes the class scores used to classify the data. In general, the FC layer makes the model end-to-end trainable by learning a function of the high-level features given as output from the convolutional layers.
<img src='https://miro.medium.com/max/933/1*_l-0PeSh3oL2Wc2ri2sVWA.png' width='600' align="center">
It’s common that, as we go deeper into the network, the sizes (nh, nw) decrease, while the number of channels (nc) increases.
# 4) Softmax
The softmax function is a generalization of the sigmoid (logistic) function (it is an activation, not a loss) used in classification problems. The softmax function is ideally used in the output layer of the classifier, where we want the probabilities that define the class of each input.
The Softmax function cannot be applied independently to each $s_i$, since it depends on all elements of $s$. For a given class $s_i$, the Softmax function can be computed as:
$$ f(s)_{i} = \frac{e^{s_{i}}}{\sum_{j}^{C} e^{s_{j}}} $$
Where $s_j$ are the scores inferred by the net for each class in C. Note that the Softmax activation for a class $s_i$ depends on all the scores in $s$.
So, if a network with 3 neurons in the output layer outputs [1.6, 0.55, 0.98], then with a softmax activation function the outputs get converted to approximately [0.53, 0.19, 0.28]. This way, it is easier for us to classify a given data point and determine to which category it belongs.
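A quick NumPy check of this example:
```
import numpy as np

scores = np.array([1.6, 0.55, 0.98])
probs = np.exp(scores) / np.exp(scores).sum()
print(probs.round(2))   # approximately [0.53 0.19 0.28]
```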
<img src='https://gombru.github.io/assets/cross_entropy_loss/intro.png' width='400' align="center">
# 5) Application of CNN using CIFAR dataset
## 5-1) Dataset
For the CNN application, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
<img src='https://cs231n.github.io/assets/cnn/convnet.jpeg' width='600' align="center">
## 5-2) Load and normalize the CIFAR10 training and test datasets using torchvision
```
import torch
import torchvision # torchvision is for loading the dataset (CIFAR10)
import torchvision.transforms as transforms # torchvision.transforms is for data transformers for images
import numpy as np
# Hyper-parameters
num_epochs = 5
batch_size = 4
learning_rate = 0.001
# dataset has PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# A CIFAR10 dataset are available in pytorch. We load CIFAR from torchvision.datasets
# CIFAR10: 60000 32x32 color images in 10 classes, with 6000 images per class
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
# We define the pytorch data loader so that we can do the batch optimazation and batch training
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
shuffle=False)
# Define the classes
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
## 5-3) Define a Convolutional Neural Network
```
import torch.nn as nn # for the the neural network
import torch.nn.functional as F # import activation function (relu; softmax)
# Implement the ConvNet
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # create the first conv layer-- 3: num of channel; 6: output layer; 5: kernel size
self.pool = nn.MaxPool2d(2, 2) # create the first pool layer -- 2: kernel size; 2: stride size
self.conv2 = nn.Conv2d(6, 16, 5) # create the second conv layer -- 6: the input channel size must be equal to the last output channel size; 16: the output; 5: kernel size
        self.fc1 = nn.Linear(16 * 5 * 5, 120) # create the first FC (classification) layer; its input is the flattened 16*5*5 feature map
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# -> size of x: [3, 32, 32]
x = self.pool(F.relu(self.conv1(x))) # -> size of x: [6, 14, 14] # call an activation function (relu)
x = self.pool(F.relu(self.conv2(x))) # -> size of x: [16, 5, 5]
x = x.view(-1, 16 * 5 * 5) # -> size of x: [400]
x = F.relu(self.fc1(x)) # -> size of x: [120]
x = F.relu(self.fc2(x)) # -> size of x: [84]
x = self.fc3(x) # -> size of x: [10]
return x
# Create the model
model = ConvNet()
```
<img src='https://miro.medium.com/max/933/1*rOyHQ2teFXX5rIIFHwYDsg.png' width='400' align="center">
## 5-4) Define a Loss function and optimizer
```
# Create the loss function (multiclass-classification problem)--> CrossEntropy
criterion = nn.CrossEntropyLoss() # the softmax is included in the loss
# Create the optimizer (use the stochastic gradient descent to optimize the model parameters given the lr)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
### Stochastic gradient descent (SGD)
Unlike (batch) gradient descent, which uses the sum of squared residuals over all data points at each iteration of the algorithm and is therefore computationally expensive, SGD randomly picks one data point from the whole data set at each iteration, which reduces the computations enormously.
## 5-5) Train the CNN
```
# training loop
n_total_steps = len(train_loader)
for epoch in range(num_epochs):# loop over the number of epochs (5)
for i, (images, labels) in enumerate(train_loader):
# origin shape: [4, 3, 32, 32] = 4, 3, 1024
# input_layer: 3 input channels, 6 output channels, 5 kernel size
images = images # get the inputs images
labels = labels # get the inputs labels
# Forward pass
        outputs = model(images) # forward pass: compute the predicted class scores for the batch
loss = criterion(outputs, labels) # compute the CrossEntropy loss between the predicted and the real labels
# Backward and optimize
optimizer.zero_grad() # zero the parameter gradients
        loss.backward() # backpropagate the loss to compute the gradient of every weight and bias in the CNN
        optimizer.step() # take one SGD step to update the parameters
if (i+1) % 2000 == 0: # print every 2000 mini-batches
print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Finished Training')
```
## 5-6) Test the network on the test data
```
# Evaluating the model
with torch.no_grad(): # since we're not training, we don't need to calculate the gradients for our outputs
n_correct = 0
n_samples = 0
n_class_correct = [0 for i in range(10)]
n_class_samples = [0 for i in range(10)]
for images, labels in test_loader:
        outputs = model(images) # run images through the network; outputs are the raw class scores (logits) over the 10 classes
# max returns (value ,index)
_, predicted = torch.max(outputs, 1) # returns the index having the highest probability score of each image over one batch
n_samples += labels.size(0)
n_correct += (predicted == labels).sum().item() # returns the number of corrected classified samples in each batch and increment them to the total right classified samples
for i in range(batch_size):
label = labels[i]
pred = predicted[i]
if (label == pred): # test if the predicted label of a sample is equal to the real label
n_class_correct[label] += 1 # calculate the number of corrected classified samples in each class
n_class_samples[label] += 1 # calculate the number of samples in each class (test data)
acc = 100.0 * n_correct / n_samples # calculate the accuracy classification of the network
outputs
```
* We will visualize the outputs, which represent the class scores of the 4 samples in one batch.
* Each sample has 10 class scores. The index of the class with the highest score is the predicted value, which will be compared with the ground truth later on.
```
import pandas as pd # Visualizing Statistical Data
import seaborn as sns # Visualizing Statistical Data
df = pd.DataFrame({'accuracy_sample 1': outputs[0, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 1', data = df, palette ='plasma')
df = pd.DataFrame({'accuracy_sample 2': outputs[1, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 2', data = df, palette ='plasma')
df = pd.DataFrame({'accuracy_sample 3': outputs[2, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 3', data = df, palette ='plasma')
df = pd.DataFrame({'accuracy_sample 4': outputs[3, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 4', data = df, palette ='plasma')
predicted
labels
n_samples
n_correct
acc = 100.0 * n_correct / n_samples # calculate the accuracy classification of the network
print('The accuracy classification of the network is:', acc)
list_class = []
for i in range(10): # calculate the accuracy classification for each class
acc = 100.0 * n_class_correct[i] / n_class_samples[i]
list_class.append(acc)
print(f'Accuracy of {classes[i]}: {acc} %')
list_class
df = pd.DataFrame({'accuracy': [42.6, 49.9, 25.7, 40.9, 34.8, 26.7, 57.6, 62.6, 68.2, 66.4], 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy', data = df, palette ='plasma')
```
Using a 50% accuracy threshold on the per-class results above, the classes that performed well are: frog, horse, ship and truck.
The classes that did not perform well are: plane, car (just below the threshold), bird, cat, deer and dog.
Thanks!
# Activity 02
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from tensorflow import random
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# Load The dataset
X = pd.read_csv('../data/HCV_feats.csv')
y = pd.read_csv('../data/HCV_target.csv')
# Print the sizes of the dataset
print("Number of Examples in the Dataset = ", X.shape[0])
print("Number of Features for each example = ", X.shape[1])
print("Possible Output Classes = ", y['AdvancedFibrosis'].unique())
```
Set up a seed for the random number generator so the results are reproducible.
Split the dataset into training and test sets with an 80-20 ratio.
```
seed = 1
np.random.seed(seed)
random.set_seed(seed)
sc = StandardScaler()
X = pd.DataFrame(sc.fit_transform(X), columns=X.columns)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed)
# Print the information regarding dataset sizes
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
print ("Number of examples in training set = ", X_train.shape[0])
print ("Number of examples in test set = ", X_test.shape[0])
np.random.seed(seed)
random.set_seed(seed)
# define the keras model
classifier = Sequential()
classifier.add(Dense(units = 3, activation = 'tanh', input_dim=X_train.shape[1]))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'sgd', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.summary()
# train the model while storing all loss values
history=classifier.fit(X_train, y_train, batch_size = 20, epochs = 100, validation_split=0.1, shuffle=False)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# plot training error and test error plots
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'validation loss'], loc='upper right')
# print the best accuracy reached on training set and the test set
print(f"Best Accuracy on training set = {max(history.history['accuracy'])*100:.3f}%")
print(f"Best Accuracy on validation set = {max(history.history['val_accuracy'])*100:.3f}%")
test_loss, test_acc = classifier.evaluate(X_test, y_test['AdvancedFibrosis'])
print(f'The loss on the test set is {test_loss:.4f} and the accuracy is {test_acc*100:.3f}%')
# set up a seed for random number generator so the result will be reproducible
np.random.seed(seed)
random.set_seed(seed)
# define the keras model
classifier = Sequential()
classifier.add(Dense(units = 4, activation = 'tanh', input_dim = X_train.shape[1]))
classifier.add(Dense(units = 2, activation = 'tanh'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'sgd', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.summary()
# train the model while storing all loss values
history=classifier.fit(X_train, y_train, batch_size = 20, epochs = 100, validation_split=0.1, shuffle=False)
# plot training error and test error plots
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'validation loss'], loc='upper right')
# print the best accuracy reached on training set and the test set
print(f"Best Accuracy on training set = {max(history.history['accuracy'])*100:.3f}%")
print(f"Best Accuracy on validation set = {max(history.history['val_accuracy'])*100:.3f}%")
test_loss, test_acc = classifier.evaluate(X_test, y_test['AdvancedFibrosis'])
print(f'The loss on the test set is {test_loss:.4f} and the accuracy is {test_acc*100:.3f}%')
```
# Autoregressions
This notebook introduces autoregression modeling using the `AutoReg` model. It also covers how `ar_select_order` assists in selecting models that minimize an information criterion such as the AIC.
An autoregressive model has dynamics given by
$$ y_t = \delta + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \epsilon_t. $$
`AutoReg` also permits models with:
* Deterministic terms (`trend`)
* `n`: No deterministic term
* `c`: Constant (default)
* `ct`: Constant and time trend
* `t`: Time trend only
* Seasonal dummies (`seasonal`)
* `True` includes $s-1$ dummies where $s$ is the period of the time series (e.g., 12 for monthly)
* Custom deterministic terms (`deterministic`)
* Accepts a `DeterministicProcess`
* Exogenous variables (`exog`)
* A `DataFrame` or `array` of exogenous variables to include in the model
* Omission of selected lags (`lags`)
* If `lags` is an iterable of integers, then only these are included in the model.
The complete specification is
$$ y_t = \delta_0 + \delta_1 t + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \sum_{i=1}^{s-1} \gamma_i d_i + \sum_{j=1}^{m} \kappa_j x_{t,j} + \epsilon_t. $$
where:
* $d_i$ is a seasonal dummy that is 1 if $mod(t, period) = i$. Period 0 is excluded if the model contains a constant (`c` is in `trend`).
* $t$ is a time trend ($1,2,\ldots$) that starts with 1 in the first observation.
* $x_{t,j}$ are exogenous regressors. **Note** these are time-aligned to the left-hand-side variable when defining a model.
* $\epsilon_t$ is assumed to be a white noise process.
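As a quick, illustrative sketch of how these options map onto the `AutoReg` constructor (the series below is simulated, so the estimates themselves are not meaningful):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

# simulated monthly series, purely for illustration
idx = pd.date_range("2000-01-01", periods=240, freq="MS")
y_sim = pd.Series(np.random.default_rng(0).standard_normal(240), index=idx)

# constant plus time trend, seasonal dummies, and only lags 1 and 12 included
res_sim = AutoReg(y_sim, lags=[1, 12], trend="ct", seasonal=True).fit()
print(res_sim.summary())
```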
This first cell imports standard packages and sets plots to appear inline.
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import pandas_datareader as pdr
import seaborn as sns
from statsmodels.tsa.ar_model import AutoReg, ar_select_order
from statsmodels.tsa.api import acf, pacf, graphics
```
This cell sets the plotting style, registers pandas date converters for matplotlib, and sets the default figure size.
```
sns.set_style('darkgrid')
pd.plotting.register_matplotlib_converters()
# Default figure size
sns.mpl.rc('figure',figsize=(16, 6))
```
The first set of examples uses the month-over-month growth rate in U.S. Housing starts that has not been seasonally adjusted. The seasonality is evident by the regular pattern of peaks and troughs. We set the frequency for the time series to "MS" (month-start) to avoid warnings when using `AutoReg`.
```
data = pdr.get_data_fred('HOUSTNSA', '1959-01-01', '2019-06-01')
housing = data.HOUSTNSA.pct_change().dropna()
# Scale by 100 to get percentages
housing = 100 * housing.asfreq('MS')
fig, ax = plt.subplots()
ax = housing.plot(ax=ax)
```
We can start with an AR(3). While this is not a good model for this data, it demonstrates the basic use of the API.
```
mod = AutoReg(housing, 3, old_names=False)
res = mod.fit()
print(res.summary())
```
`AutoReg` supports the same covariance estimators as `OLS`. Below, we use `cov_type="HC0"`, which is White's covariance estimator. While the parameter estimates are the same, all of the quantities that depend on the standard error change.
```
res = mod.fit(cov_type="HC0")
print(res.summary())
sel = ar_select_order(housing, 13, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
```
`plot_predict` visualizes forecasts. Here we produce a large number of forecasts which show the strong seasonality captured by the model.
```
fig = res.plot_predict(720, 840)
```
`plot_diagnostics` indicates that the model captures the key features in the data.
```
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(fig=fig, lags=30)
```
## Seasonal Dummies
`AutoReg` supports seasonal dummies which are an alternative way to model seasonality. Including the dummies shortens the dynamics to only an AR(2).
```
sel = ar_select_order(housing, 13, seasonal=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
```
The seasonal dummies are obvious in the forecasts, which have a non-trivial seasonal component in all periods 10 years into the future.
```
fig = res.plot_predict(720, 840)
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(lags=30, fig=fig)
```
## Seasonal Dynamics
While `AutoReg` does not directly support Seasonal components since it uses OLS to estimate parameters, it is possible to capture seasonal dynamics using an over-parametrized Seasonal AR that does not impose the restrictions in the Seasonal AR.
```
yoy_housing = data.HOUSTNSA.pct_change(12).resample("MS").last().dropna()
_, ax = plt.subplots()
ax = yoy_housing.plot(ax=ax)
```
We start by selecting a model using the simple method that only chooses the maximum lag. All lower lags are automatically included. The maximum lag to check is set to 13 since this allows the model to nest a Seasonal AR that has both a short-run AR(1) component and a Seasonal AR(1) component, so that
$$ (1-\phi_s L^{12})(1-\phi_1 L)y_t = \epsilon_t $$
which becomes
$$ y_t = \phi_1 y_{t-1} + \phi_s y_{t-12} - \phi_1\phi_s y_{t-13} + \epsilon_t $$
when expanded. `AutoReg` does not enforce the structure, but can estimate the nesting model
$$ y_t = \phi_1 y_{t-1} + \phi_{12} y_{t-12} - \phi_{13} y_{t-13} + \epsilon_t. $$
We see that all 13 lags are selected.
```
sel = ar_select_order(yoy_housing, 13, old_names=False)
sel.ar_lags
```
It seems unlikely that all 13 lags are required. We can set `glob=True` to search all $2^{13}$ models that include up to 13 lags.
Here we see that the first three are selected, as is the 7th, and finally the 12th and 13th are selected. This is superficially similar to the structure described above.
After fitting the model, we take a look at the diagnostic plots that indicate that this specification appears to be adequate to capture the dynamics in the data.
```
sel = ar_select_order(yoy_housing, 13, glob=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(fig=fig, lags=30)
```
We can also include seasonal dummies. These are all insignificant since the model is using year-over-year changes.
```
sel = ar_select_order(yoy_housing, 13, glob=True, seasonal=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
```
## Industrial Production
We will use the industrial production index data to examine forecasting.
```
data = pdr.get_data_fred('INDPRO', '1959-01-01', '2019-06-01')
ind_prod = data.INDPRO.pct_change(12).dropna().asfreq('MS')
_, ax = plt.subplots(figsize=(16,9))
ind_prod.plot(ax=ax)
```
We will start by selecting a model using up to 13 lags. An AR(13) minimizes the BIC criterion even though many coefficients are insignificant.
```
sel = ar_select_order(ind_prod, 13, 'bic', old_names=False)
res = sel.model.fit()
print(res.summary())
```
We can also use a global search which allows longer lags to enter if needed without requiring the shorter lags. Here we see many lags dropped. The model indicates there may be some seasonality in the data.
```
sel = ar_select_order(ind_prod, 13, 'bic', glob=True, old_names=False)
sel.ar_lags
res_glob = sel.model.fit()
print(res_glob.summary())
```
`plot_predict` can be used to produce forecast plots along with confidence intervals. Here we produce forecasts starting at the last observation and continuing for 18 months.
```
ind_prod.shape
fig = res_glob.plot_predict(start=714, end=732)
```
The forecasts from the full model and the restricted model are very similar. I also include an AR(5), which has very different dynamics.
```
res_ar5 = AutoReg(ind_prod, 5, old_names=False).fit()
predictions = pd.DataFrame({"AR(5)": res_ar5.predict(start=714, end=726),
"AR(13)": res.predict(start=714, end=726),
"Restr. AR(13)": res_glob.predict(start=714, end=726)})
_, ax = plt.subplots()
ax = predictions.plot(ax=ax)
```
The diagnostics indicate the model captures most of the dynamics in the data. The ACF shows a pattern at the seasonal frequency, so a more complete seasonal model (`SARIMAX`) may be needed.
```
fig = plt.figure(figsize=(16,9))
fig = res_glob.plot_diagnostics(fig=fig, lags=30)
```
# Forecasting
Forecasts are produced using the `predict` method from a results instance. The default produces static forecasts which are one-step forecasts. Producing multi-step forecasts requires using `dynamic=True`.
In this next cell, we produce 12-step-ahead forecasts for the final 24 periods in the sample. This requires a loop.
**Note**: These are technically in-sample since the data we are forecasting was used to estimate parameters. Producing OOS forecasts requires two models. The first must exclude the OOS period. The second uses the `predict` method from the full-sample model with the parameters from the shorter-sample model that excluded the OOS period (a sketch of this follows the next cell).
```
import numpy as np
start = ind_prod.index[-24]
forecast_index = pd.date_range(start, freq=ind_prod.index.freq, periods=36)
cols = ['-'.join(str(val) for val in (idx.year, idx.month)) for idx in forecast_index]
forecasts = pd.DataFrame(index=forecast_index,columns=cols)
for i in range(1, 24):
fcast = res_glob.predict(start=forecast_index[i], end=forecast_index[i+12], dynamic=True)
forecasts.loc[fcast.index, cols[i]] = fcast
_, ax = plt.subplots(figsize=(16, 10))
ind_prod.iloc[-24:].plot(ax=ax, color="black", linestyle="--")
ax = forecasts.plot(ax=ax)
```
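As a sketch of the two-model out-of-sample approach described in the note above (this reuses `ind_prod` and the AR(5) specification from earlier and is illustrative only):
```
# fit on the sample that excludes the final 24 observations (the "OOS" period)
split = ind_prod.index[-24]
res_short = AutoReg(ind_prod[ind_prod.index < split], 5, old_names=False).fit()
# forecast the held-out period from the full-sample model using the short-sample parameters
mod_full = AutoReg(ind_prod, 5, old_names=False)
oos_fcast = mod_full.predict(res_short.params, start=split, end=ind_prod.index[-1], dynamic=True)
print(oos_fcast.tail())
```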
## Comparing to SARIMAX
`SARIMAX` is an implementation of a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors model. It supports:
* Specification of seasonal and nonseasonal AR and MA components
* Inclusion of Exogenous variables
* Full maximum-likelihood estimation using the Kalman Filter
This model is more feature-rich than `AutoReg`. Unlike `SARIMAX`, `AutoReg` estimates parameters using OLS. This is faster and the problem is globally convex, so there are no issues with local minima. The closed-form estimator and its performance are the key advantages of `AutoReg` over `SARIMAX` when comparing AR(P) models. `AutoReg` also supports seasonal dummies, which can be used with `SARIMAX` if the user includes them as exogenous regressors (a sketch of this follows the comparison below).
```
from statsmodels.tsa.api import SARIMAX
sarimax_mod = SARIMAX(ind_prod, order=((1,5,12,13),0, 0), trend='c')
sarimax_res = sarimax_mod.fit()
print(sarimax_res.summary())
sarimax_params = sarimax_res.params.iloc[:-1].copy()
sarimax_params.index = res_glob.params.index
params = pd.concat([res_glob.params, sarimax_params], axis=1, sort=False)
params.columns = ["AutoReg", "SARIMAX"]
params
```
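The seasonal-dummy point mentioned above can be sketched by building the dummies manually and passing them to `SARIMAX` as exogenous regressors (the AR order here is arbitrary and only for illustration):
```
# sketch: seasonal dummies as exogenous regressors in SARIMAX
seasonal_dummies = pd.get_dummies(ind_prod.index.month, prefix="m", drop_first=True).astype(float)
seasonal_dummies.index = ind_prod.index
sarimax_seas = SARIMAX(ind_prod, exog=seasonal_dummies, order=(2, 0, 0), trend="c")
print(sarimax_seas.fit().summary())
```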
## Custom Deterministic Processes
The `deterministic` parameter allows a custom `DeterministicProcess` to be used. This allows for more complex deterministic terms to be constructed, for example one that includes seasonal components with two periods, or, as the next example shows, one that uses a Fourier series rather than seasonal dummies.
```
from statsmodels.tsa.deterministic import DeterministicProcess
dp = DeterministicProcess(housing.index, constant=True, period=12, fourier=2)
mod = AutoReg(housing, 2, trend="n", seasonal=False, deterministic=dp)
res = mod.fit()
print(res.summary())
fig = res.plot_predict(720, 840)
```
# Data description & Problem statement:
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
The type of dataset and problem is a classic supervised binary classification. Given a number of elements all with certain characteristics (features), we want to build a machine learning model to identify people affected by type 2 diabetes.
# Workflow:
- Load the dataset, and define the required functions (e.g. for detecting the outliers)
- Data Cleaning/Wrangling: Manipulate outliers, missing data or duplicate values, Encode categorical variables, etc.
- Split data into training & test parts (utilize the training part for training & hyperparameter tuning of model, and test part for the final evaluation of model)
# Model Training:
- Build an initial XGBoost model, and evaluate it via C-V approach
- Use grid-search along with the C-V approach to find the best hyperparameters of the XGBoost model (Note: the class imbalance is handled here by passing balanced sample weights, computed with scikit-learn's `compute_sample_weight`, to the grid-search fit.)
# Model Evaluation:
- Evaluate the best XGBoost model with optimized hyperparameters on Test Dataset, by calculating:
- AUC score
- Confusion matrix
- ROC curve
- Precision-Recall curve
- Average precision
Finally, calculate the Feature Importance.
```
import sklearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
%matplotlib inline
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Function to remove outliers (all rows) by Z-score:
def remove_outliers(X, y, name, thresh=3):
L=[]
for name in name:
drop_rows = X.index[(np.abs(X[name] - X[name].mean()) >= (thresh * X[name].std()))]
L.extend(list(drop_rows))
X.drop(np.array(list(set(L))), axis=0, inplace=True)
y.drop(np.array(list(set(L))), axis=0, inplace=True)
print('number of outliers removed : ' , len(L))
df=pd.read_csv('C:/Users/rhash/Documents/Datasets/pima-indian-diabetes/indians-diabetes.csv')
df.columns=['NP', 'GC', 'BP', 'ST', 'I', 'BMI', 'PF', 'Age', 'Class']
# To Shuffle the data:
np.random.seed(42)
df=df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df.info()
df['ST'].replace(0, df[df['ST']!=0]['ST'].mean(), inplace=True)
df['GC'].replace(0, df[df['GC']!=0]['GC'].mean(), inplace=True)
df['BP'].replace(0, df[df['BP']!=0]['BP'].mean(), inplace=True)
df['BMI'].replace(0, df[df['BMI']!=0]['BMI'].mean(), inplace=True)
df['I'].replace(0, df[df['I']!=0]['I'].mean(), inplace=True)
df.head()
X=df.drop('Class', axis=1)
y=df['Class']
# We initially divide the data into training & test parts: we do the Grid-Search only on the training part
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
#remove_outliers(X_train, y_train, ['NP', 'GC', 'BP', 'ST', 'I', 'BMI', 'PF', 'Age'], thresh=5)
# Building the Initial Model & Cross-Validation:
import xgboost
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
model=XGBClassifier()
kfold=StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
scores=cross_val_score(model, X_train, y_train, cv=kfold)
print(scores, "\n")
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
# Grid-Search for the best model parameters:
# We create a sample_weight list for this imbalanced dataset:
from sklearn.utils.class_weight import compute_sample_weight
sw=compute_sample_weight(class_weight='balanced', y=y_train)
from sklearn.model_selection import GridSearchCV
param={'max_depth':[2, 4, 6, 8], 'min_child_weight':[1, 2, 3], 'gamma': [ 0, 0.05, 0.1], 'subsample':[0.7, 1]}
kfold=StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
grid_search=GridSearchCV(XGBClassifier(), param, cv=kfold, n_jobs=-1, scoring="roc_auc")
grid_search.fit(X_train, y_train, sample_weight=sw)
# Grid-Search report:
G=pd.DataFrame(grid_search.cv_results_).sort_values("rank_test_score")
G.head(3)
print("Best parameters: ", grid_search.best_params_)
print("Best validation accuracy: %0.2f (+/- %0.2f)" % (np.round(grid_search.best_score_, decimals=2), np.round(G.loc[grid_search.best_index_,"std_test_score" ], decimals=2)))
print("Test score: ", np.round(grid_search.score(X_test, y_test),2))
from sklearn.metrics import roc_curve, auc, confusion_matrix, classification_report
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=45)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
names = ["0", "1"]
# Compute confusion matrix
cm = confusion_matrix(y_test, grid_search.predict(X_test))
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, names, title='Normalized confusion matrix')
plt.show()
# Classification report:
report=classification_report(y_test, grid_search.predict(X_test))
print(report)
# ROC curve & auc:
from sklearn.metrics import precision_recall_curve, roc_curve, roc_auc_score, average_precision_score
fpr, tpr, thresholds=roc_curve(np.array(y_test),grid_search.predict_proba(X_test)[:, 1] , pos_label=1)
roc_auc=roc_auc_score(np.array(y_test), grid_search.predict_proba(X_test)[:, 1])
plt.figure()
plt.step(fpr, tpr, color='darkorange', lw=2, label='ROC curve (auc = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', alpha=0.4, lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.plot([cm_normalized[0,1]], [cm_normalized[1,1]], 'or')
plt.show()
# Precision-Recall trade-off:
precision, recall, thresholds=precision_recall_curve(y_test,grid_search.predict_proba(X_test)[:, 1], pos_label=1)
ave_precision=average_precision_score(y_test,grid_search.predict_proba(X_test)[:, 1])
plt.step(recall, precision, color='navy')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.001])
plt.ylim([0, 1.02])
plt.title('Precision-Recall curve: AP={0:0.2f}'.format(ave_precision))
plt.plot([cm_normalized[1,1]], [cm[1,1]/(cm[1,1]+cm[0,1])], 'ob')
plt.show()
# Feature Importance:
im=XGBClassifier().fit(X,y).feature_importances_
# Sort & Plot:
d=dict(zip(np.array(X.columns), im))
k=sorted(d,key=lambda i: d[i], reverse= True)
[print((i,d[i])) for i in k]
# Plot:
c1=pd.DataFrame(np.array(im), columns=["Importance"])
c2=pd.DataFrame(np.array(X.columns[0:8]),columns=["Feature"])
fig, ax = plt.subplots(figsize=(8,6))
sns.barplot(x="Feature", y="Importance", data=pd.concat([c2,c1], axis=1), color="blue", ax=ax)
```
# Integrate 3rd party transforms into a MONAI program
This tutorial shows how to integrate 3rd party transforms into a MONAI program.
It mainly covers transforms from `BatchGenerator`, `TorchIO`, `Rising`, and `ITK`.
```
! pip install batchgenerators==0.20.1
! pip install torchio==0.16.21
! pip install rising==0.2.0
! pip install itk==5.1.0
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import glob
import numpy as np
import matplotlib.pyplot as plt
from monai.transforms import \
LoadNiftid, AddChanneld, ScaleIntensityRanged, CropForegroundd, \
Spacingd, Orientationd, SqueezeDimd, ToTensord, adaptor, Compose
import monai
from monai.utils import set_determinism
from batchgenerators.transforms.color_transforms import ContrastAugmentationTransform
from torchio.transforms import RescaleIntensity
from rising.random import DiscreteParameter
from rising.transforms import Mirror
from itk import median_image_filter
```
## Set MSD Spleen dataset path
The Spleen dataset can be downloaded from http://medicaldecathlon.com/.
```
data_root = '/workspace/data/medical/Task09_Spleen'
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
```
## Set deterministic training for reproducibility
```
set_determinism(seed=0)
```
## Setup MONAI transforms
```
monai_transforms = [
LoadNiftid(keys=['image', 'label']),
AddChanneld(keys=['image', 'label']),
Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2.), mode=('bilinear', 'nearest')),
Orientationd(keys=['image', 'label'], axcodes='RAS'),
ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
CropForegroundd(keys=['image', 'label'], source_key='image')
]
```
## Setup BatchGenerator transforms
Note:
1. BatchGenerator transforms take `**data` as their argument, so they can't be composed with MONAI transforms directly and need `adaptor`.
2. BatchGenerator expects data of shape [B, C, H, W, D], while MONAI uses [C, H, W, D].
```
batch_generator_transforms = ContrastAugmentationTransform(data_key='image')
```
## Setup TorchIO transforms
Note:
1. TorchIO specifies several keys internally; use `adaptor` if there are conflicts.
2. There are few examples or tutorials, so it is hard to get started quickly.
3. The TorchIO transforms depend on many TorchIO modules (Subject, Image, ImageDataset, etc.), so it is not easy to support MONAI dict input data.
4. TorchIO can handle PyTorch Tensor data (shape: [C, H, W, D]) directly, so we use it to handle Tensors in this tutorial.
5. If the input data is a Tensor, TorchIO can't handle the dict type, so `adaptor` is needed.
```
torchio_transforms = RescaleIntensity(out_min_max=(0., 1.), percentiles=(0.05, 99.5))
```
## Setup Rising transforms
Note:
1. Rising inherits from PyTorch `nn.Module` and expects PyTorch Tensor input, so it can only work after `ToTensor`.
2. Rising expects data of shape [B, C, H, W, D], while MONAI uses [C, H, W, D].
3. Rising transforms take `**data` as their argument, so they need `adaptor`.
```
rising_transforms = Mirror(dims=DiscreteParameter((0, 1, 2)), keys=['image', 'label'])
```
## Setup ITK transforms
Note:
1. The ITK transform function API has several arguments (not only `data`), so the arguments need to be set in a wrapper before Compose.
2. If the input data is NumPy, ITK can't handle the dict type, so a wrapper is needed to convert the format.
3. ITK expects input of shape [H, W, [D]], so we process every channel separately and stack the results.
```
def itk_transforms(x):
smoothed = list()
for channel in x['image']:
smoothed.append(median_image_filter(channel, radius=2))
x['image'] = np.stack(smoothed)
return x
```
## Compose all transforms
```
transform = Compose(monai_transforms + [
itk_transforms,
# add another dim as BatchGenerator and Rising expects shape [B, C, H, W, D]
AddChanneld(keys=['image', 'label']),
adaptor(batch_generator_transforms, {'image': 'image'}),
ToTensord(keys=['image', 'label']),
adaptor(rising_transforms, {'image': 'image', 'label': 'label'}),
# squeeze shape from [B, C, H, W, D] to [C, H, W, D] for TorchIO transforms
SqueezeDimd(keys=['image', 'label'], dim=0),
adaptor(torchio_transforms, 'image', {'image': 'data'})
])
```
## Check transforms in DataLoader
```
check_ds = monai.data.Dataset(data=data_dicts, transform=transform)
check_loader = monai.data.DataLoader(check_ds, batch_size=1)
check_data = monai.utils.misc.first(check_loader)
image, label = (check_data['image'][0][0], check_data['label'][0][0])
print(f"image shape: {image.shape}, label shape: {label.shape}")
# plot the slice [:, :, 80]
plt.figure('check', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 80], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 80])
plt.show()
```
# Transfer Learning
## Imports and Version Selection
```
# TensorFlow ≥2.0 is required for this notebook
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# check if a GPU is available, as this notebook will be very slow without one
import sys
IS_COLAB = "google.colab" in sys.modules  # define IS_COLAB so the hint below also works outside Colab
if not tf.test.is_gpu_available():
print("No GPU was detected. CNNs can be very slow without a GPU.")
if IS_COLAB:
print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Activation, Input, Dropout, Conv2D, MaxPooling2D, Flatten, BatchNormalization, GaussianNoise
from tensorflow.keras.models import Model
import matplotlib.pyplot as plt
!pip install --upgrade deeplearning2020
from deeplearning2020 import helpers
# jupyters magic command
%matplotlib inline
# resize the images to a uniform size
def preprocess(image, label):
resized_image = tf.image.resize(image, [224, 224])
# run Xceptions preprocessing function
preprocessed_image = tf.keras.applications.xception.preprocess_input(resized_image)
return preprocessed_image, label
```
## Loading and Preprocessing
```
# download the dataset with labels and with information about the data
data, info = tfds.load("tf_flowers", as_supervised=True, with_info=True)
# print the most important information
dataset_size = info.splits['train'].num_examples
print('dataset size: ', dataset_size)
class_names = info.features['label'].names
print('class names: ', class_names)
n_classes = info.features['label'].num_classes
print('number of classes: ', n_classes)
batch_size = 32
try:
train_data = tfds.load('tf_flowers', split="train[:80%]", as_supervised=True)
test_data = tfds.load('tf_flowers', split="train[80%:100%]", as_supervised=True)
train_data = train_data.shuffle(1000).map(preprocess).batch(batch_size).prefetch(1)
test_data = test_data.map(preprocess).batch(batch_size).prefetch(1)
except(Exception):
# split the data into train and test data with a 8:2 ratio
train_split, test_split = tfds.Split.TRAIN.subsplit([8, 2])
train_data = tfds.load('tf_flowers', split=train_split, as_supervised=True)
test_data = tfds.load('tf_flowers', split=test_split, as_supervised=True)
train_data = train_data.shuffle(1000).map(preprocess).batch(batch_size).prefetch(1)
test_data = test_data.map(preprocess).batch(batch_size).prefetch(1)
# show some images from the dataset
helpers.plot_images(train_data.unbatch().take(9).map(lambda x, y: ((x + 1) / 2, y)), class_names)
```
## Definition and Training
```
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.layers import GlobalAveragePooling2D
# build a transfer learning model with Xception and a new Fully-Connected-Classifier
base_model = Xception(
weights='imagenet',
include_top=False
)
model = GlobalAveragePooling2D()(base_model.output)
model = Dropout(0.5)(model)
# include new Fully-Connected-Classifier
output_layer = Dense(n_classes, activation='softmax')(model)
# create Model
model = Model(base_model.input, output_layer)
model.summary()
# set the pretrained layers to not trainable because they are
# already trained and we don't want to destroy their weights
for layer in base_model.layers:
layer.trainable = False
```

```
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.2, momentum=0.9, decay=0.01),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
history = model.fit(
train_data,
epochs=5,
validation_data=test_data
)
```

```
# to finetune the model, we have to set more layers to trainable
# and reduce the learning rate drastically to prevent
# destroying the pretrained weights
for layer in base_model.layers:
layer.trainable = True
# reduce the learning rate to not damage the pretrained weights
# model will need longer to train because all the layers are trainable
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.001),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
history_finetune=model.fit(
train_data,
epochs=10,
validation_data=test_data
)
```
## Visualization and Evaluation
```
# add the two histories and print the diagram
helpers.plot_two_histories(history, history_finetune)
```
# Transfer Learning with Data Augmentation
## Model Definition
```
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.layers import GlobalAveragePooling2D
# build a transfer learning model with Xception and a new Fully-Connected-Classifier
base_model_data_augmentation = Xception(
weights='imagenet',
include_top=False
)
model = GlobalAveragePooling2D()(base_model_data_augmentation.output)
model = Dropout(0.5)(model)
# include new Fully-Connected-Classifier
output_layer = Dense(n_classes, activation='softmax')(model)
# create Model
data_augmentation_model = Model(base_model_data_augmentation.input, output_layer)
```
## Adjust Data Augmentation
```
# resize the images to a uniform size
def preprocess_with_data_augmentation(image, label):
resized_image = tf.image.resize(image, [224, 224])
# data augmentation with Tensorflow
augmented_image = tf.image.random_flip_left_right(resized_image)
augmented_image = tf.image.random_hue(augmented_image, 0.08)
augmented_image = tf.image.random_saturation(augmented_image, 0.6, 1.6)
augmented_image = tf.image.random_brightness(augmented_image, 0.05)
augmented_image = tf.image.random_contrast(augmented_image, 0.7, 1.3)
# run Xceptions preprocessing function
preprocessed_image = tf.keras.applications.xception.preprocess_input(augmented_image)
return preprocessed_image, label
batch_size = 32
try:
train_data = tfds.load('tf_flowers', split="train[:80%]", as_supervised=True)
except(Exception):
# split the data into train and test data with a 8:2 ratio
train_split, test_split = tfds.Split.TRAIN.subsplit([8, 2])
train_data = tfds.load('tf_flowers', split=train_split, as_supervised=True)
augmented_train_data = train_data.map(preprocess_with_data_augmentation).batch(batch_size).prefetch(1)
```
## Training
```
# set the pretrained layers to not trainable because they are
# already trained and we don't want to destroy their weights
for layer in base_model_data_augmentation.layers:
layer.trainable = False
data_augmentation_model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.2, momentum=0.9, decay=0.01),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
history_data_augmentation = data_augmentation_model.fit(
augmented_train_data,
epochs=3,
validation_data=test_data
)
```
## Finetuning
```
# to finetune the model, we have to set more layers to trainable
# and reduce the learning rate drastically to prevent
# destroying the pretrained weights
for layer in base_model_data_augmentation.layers:
layer.trainable = True
# reduce the learning rate to not damage the pretrained weights
# model will need longer to train because all the layers are trainable
data_augmentation_model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.001),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
history_finetune_data_augmentation = data_augmentation_model.fit(
augmented_train_data,
epochs=30,
validation_data=test_data
)
```
## Visualization
```
# add the two histories and print the diagram
helpers.plot_two_histories(history_data_augmentation, history_finetune_data_augmentation)
```
# Example of simple use of active learning API
Compare 3 query strategies: random sampling, uncertainty sampling, and active search.
Observe how we trade off between finding targets and accuracy.
# Imports
```
import warnings
warnings.filterwarnings(action='ignore', category=RuntimeWarning)
from matplotlib import pyplot as plt
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_moons
from sklearn.svm import SVC
import active_learning
from active_learning.utils import *
from active_learning.query_strats import random_sampling, uncertainty_sampling, active_search
%matplotlib inline
np.random.seed(0)
```
# Load toy data
We have a small binary classification task that is not linearly separable.
```
X, y = make_moons(noise=0.1, n_samples=200)
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
```
# Training Models
```
# Our basic classifier will be a SVM with rbf kernel
base_clf = SVC(probability=True)
# size of the initial labeled set
init_L_size = 5
# Make 30 queries
n_queries = 30
# set random state for consistency in training data
random_state = 123
```
### Random Sampling
```
random_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=random_sampling,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
```
### Uncertainty Sampling
```
uncertainty_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=uncertainty_sampling,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
```
### Active Search
```
as_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=active_search,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
```
# Compare
```
xx = np.arange(n_queries)
plt.plot(xx, random_experiment_data["accuracy"], label="Random")
plt.plot(xx, uncertainty_experiment_data["accuracy"], label="Uncertainty")
plt.plot(xx, as_experiment_data["accuracy"], label="AS")
plt.title("Accuracy on Test Set vs Num Queries")
plt.ylabel("accuracy")
plt.xlabel("# queries")
plt.legend()
plt.plot(xx, random_experiment_data["history"], label="Random")
plt.plot(xx, uncertainty_experiment_data["history"], label="Uncertainty")
plt.plot(xx, as_experiment_data["history"], label="AS")
plt.title("Number of targets found")
plt.ylabel("# of targets")
plt.xlabel("# queries")
plt.legend()
```
```
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks")
%matplotlib inline
import numpy as np
np.random.seed(sum(map(ord, "axis_grids")))
```
```
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col="time")
```
```
g = sns.FacetGrid(tips, col="time")
g.map(plt.hist, "tip");
```
```
g = sns.FacetGrid(tips, col="sex", hue="smoker")
g.map(plt.scatter, "total_bill", "tip", alpha=.7)
g.add_legend();
```
```
g = sns.FacetGrid(tips, row="smoker", col="time", margin_titles=True)
g.map(sns.regplot, "size", "total_bill", color=".3", fit_reg=False, x_jitter=.1);
```
```
g = sns.FacetGrid(tips, col="day", height=4, aspect=.5)
g.map(sns.barplot, "sex", "total_bill");
```
```
ordered_days = tips.day.value_counts().index
g = sns.FacetGrid(tips, row="day", row_order=ordered_days,
height=1.7, aspect=4,)
g.map(sns.distplot, "total_bill", hist=False, rug=True);
```
```
pal = dict(Lunch="seagreen", Dinner="gray")
g = sns.FacetGrid(tips, hue="time", palette=pal, height=5)
g.map(plt.scatter, "total_bill", "tip", s=50, alpha=.7, linewidth=.5, edgecolor="white")
g.add_legend();
```
```
g = sns.FacetGrid(tips, hue="sex", palette="Set1", height=5, hue_kws={"marker": ["^", "v"]})
g.map(plt.scatter, "total_bill", "tip", s=100, linewidth=.5, edgecolor="white")
g.add_legend();
```
```
attend = sns.load_dataset("attention").query("subject <= 12")
g = sns.FacetGrid(attend, col="subject", col_wrap=4, height=2, ylim=(0, 10))
g.map(sns.pointplot, "solutions", "score", color=".3", ci=None);
```
```
with sns.axes_style("white"):
g = sns.FacetGrid(tips, row="sex", col="smoker", margin_titles=True, height=2.5)
g.map(plt.scatter, "total_bill", "tip", color="#334488", edgecolor="white", lw=.5);
g.set_axis_labels("Total bill (US Dollars)", "Tip");
g.set(xticks=[10, 30, 50], yticks=[2, 6, 10]);
g.fig.subplots_adjust(wspace=.02, hspace=.02);
```
```
g = sns.FacetGrid(tips, col="smoker", margin_titles=True, height=4)
g.map(plt.scatter, "total_bill", "tip", color="#338844", edgecolor="white", s=50, lw=1)
for ax in g.axes.flat:
ax.plot((0, 50), (0, .2 * 50), c=".2", ls="--")
g.set(xlim=(0, 60), ylim=(0, 14));
```
```
from scipy import stats
def quantile_plot(x, **kwargs):
qntls, xr = stats.probplot(x, fit=False)
plt.scatter(xr, qntls, **kwargs)
g = sns.FacetGrid(tips, col="sex", height=4)
g.map(quantile_plot, "total_bill");
```
```
def qqplot(x, y, **kwargs):
_, xr = stats.probplot(x, fit=False)
_, yr = stats.probplot(y, fit=False)
plt.scatter(xr, yr, **kwargs)
g = sns.FacetGrid(tips, col="smoker", height=4)
g.map(qqplot, "total_bill", "tip");
```
```
g = sns.FacetGrid(tips, hue="time", col="sex", height=4)
g.map(qqplot, "total_bill", "tip")
g.add_legend();
```
```
g = sns.FacetGrid(tips, hue="time", col="sex", height=4,
hue_kws={"marker": ["s", "D"]})
g.map(qqplot, "total_bill", "tip", s=40, edgecolor="w")
g.add_legend();
```
```
def hexbin(x, y, color, **kwargs):
cmap = sns.light_palette(color, as_cmap=True)
plt.hexbin(x, y, gridsize=15, cmap=cmap, **kwargs)
with sns.axes_style("dark"):
g = sns.FacetGrid(tips, hue="time", col="time", height=4)
g.map(hexbin, "total_bill", "tip", extent=[0, 50, 0, 10]);
```
```
iris = sns.load_dataset("iris")
g = sns.PairGrid(iris)
g.map(plt.scatter);
```
```
g = sns.PairGrid(iris)
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter);
```
```
g = sns.PairGrid(iris, hue="species")
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter)
g.add_legend();
```
```
g = sns.PairGrid(iris, vars=["sepal_length", "sepal_width"], hue="species")
g.map(plt.scatter);
```
```
g = sns.PairGrid(iris)
g.map_upper(plt.scatter)
g.map_lower(sns.kdeplot)
g.map_diag(sns.kdeplot, lw=3, legend=False);
```
```
g = sns.PairGrid(tips, y_vars=["tip"], x_vars=["total_bill", "size"], height=4)
g.map(sns.regplot, color=".3")
g.set(ylim=(-1, 11), yticks=[0, 5, 10]);
```
```
g = sns.PairGrid(tips, hue="size", palette="GnBu_d")
g.map(plt.scatter, s=50, edgecolor="white")
g.add_legend();
```
```
sns.pairplot(iris, hue="species", height=2.5);
```
```
g = sns.pairplot(iris, hue="species", palette="Set2", diag_kind="kde", height=2.5)
```
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 8: Kaggle Data Sets**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 8 Material
* Part 8.1: Introduction to Kaggle [[Video]](https://www.youtube.com/watch?v=v4lJBhdCuCU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_1_kaggle_intro.ipynb)
* Part 8.2: Building Ensembles with Scikit-Learn and Keras [[Video]](https://www.youtube.com/watch?v=LQ-9ZRBLasw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_2_keras_ensembles.ipynb)
* Part 8.3: How Should you Architect Your Keras Neural Network: Hyperparameters [[Video]](https://www.youtube.com/watch?v=1q9klwSoUQw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_3_keras_hyperparameters.ipynb)
* **Part 8.4: Bayesian Hyperparameter Optimization for Keras** [[Video]](https://www.youtube.com/watch?v=sXdxyUCCm8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb)
* Part 8.5: Current Semester's Kaggle [[Video]](https://www.youtube.com/watch?v=48OrNYYey5E) [[Notebook]](t81_558_class_08_5_kaggle_project.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
# Startup Google CoLab
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
```
# Part 8.4: Bayesian Hyperparameter Optimization for Keras
Snoek, J., Larochelle, H., & Adams, R. P. (2012). [Practical bayesian optimization of machine learning algorithms](https://arxiv.org/pdf/1206.2944.pdf). In *Advances in neural information processing systems* (pp. 2951-2959).
* [bayesian-optimization](https://github.com/fmfn/BayesianOptimization)
* [hyperopt](https://github.com/hyperopt/hyperopt)
* [spearmint](https://github.com/JasperSnoek/spearmint)
```
# Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future.
# See https://github.com/tensorflow/tensorflow/issues/31308
import logging, os
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
import pandas as pd
import os
import numpy as np
import time
import tensorflow.keras.initializers
import statistics
import tensorflow.keras
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, InputLayer
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import StratifiedShuffleSplit
from tensorflow.keras.layers import LeakyReLU,PReLU
from tensorflow.keras.optimizers import Adam
def generate_model(dropout, neuronPct, neuronShrink):
# We start with some percent of 5000 starting neurons on the first hidden layer.
neuronCount = int(neuronPct * 5000)
# Construct neural network
# kernel_initializer = tensorflow.keras.initializers.he_uniform(seed=None)
model = Sequential()
# So long as there would have been at least 25 neurons and fewer than 10
# layers, create a new layer.
layer = 0
while neuronCount>25 and layer<10:
# The first (0th) layer needs an input input_dim(neuronCount)
if layer==0:
model.add(Dense(neuronCount,
input_dim=x.shape[1],
activation=PReLU()))
else:
model.add(Dense(neuronCount, activation=PReLU()))
layer += 1
# Add dropout after each hidden layer
model.add(Dropout(dropout))
# Shrink neuron count for each layer
neuronCount = neuronCount * neuronShrink
model.add(Dense(y.shape[1],activation='softmax')) # Output
return model
# Generate a model and see what the resulting structure looks like.
model = generate_model(dropout=0.2, neuronPct=0.1, neuronShrink=0.25)
model.summary()
def evaluate_network(dropout,lr,neuronPct,neuronShrink):
SPLITS = 2
# Bootstrap
boot = StratifiedShuffleSplit(n_splits=SPLITS, test_size=0.1)
# Track progress
mean_benchmark = []
epochs_needed = []
num = 0
# Loop through samples
for train, test in boot.split(x,df['product']):
start_time = time.time()
num+=1
# Split train and test
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = generate_model(dropout, neuronPct, neuronShrink)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=lr))
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
patience=100, verbose=0, mode='auto', restore_best_weights=True)
# Train on the bootstrap sample
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=0,epochs=1000)
epochs = monitor.stopped_epoch
epochs_needed.append(epochs)
# Predict on the out of boot (validation)
pred = model.predict(x_test)
# Measure this bootstrap's log loss
y_compare = np.argmax(y_test,axis=1) # For log loss calculation
score = metrics.log_loss(y_compare, pred)
mean_benchmark.append(score)
m1 = statistics.mean(mean_benchmark)
m2 = statistics.mean(epochs_needed)
mdev = statistics.pstdev(mean_benchmark)
# Record this iteration
time_took = time.time() - start_time
#print(f"#{num}: score={score:.6f}, mean score={m1:.6f}, stdev={mdev:.6f}, epochs={epochs}, mean epochs={int(m2)}, time={hms_string(time_took)}")
tensorflow.keras.backend.clear_session()
return (-m1)
print(evaluate_network(
dropout=0.2,
lr=1e-3,
neuronPct=0.2,
neuronShrink=0.2))
from bayes_opt import BayesianOptimization
import time
# Suppress NaN warnings, see: https://stackoverflow.com/questions/34955158/what-might-be-the-cause-of-invalid-value-encountered-in-less-equal-in-numpy
import warnings
warnings.filterwarnings("ignore",category =RuntimeWarning)
# Bounded region of parameter space
pbounds = {'dropout': (0.0, 0.499),
'lr': (0.0, 0.1),
'neuronPct': (0.01, 1),
'neuronShrink': (0.01, 1)
}
optimizer = BayesianOptimization(
f=evaluate_network,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
random_state=1,
)
start_time = time.time()
optimizer.maximize(init_points=10, n_iter=100,)
time_took = time.time() - start_time
print(f"Total runtime: {hms_string(time_took)}")
print(optimizer.max)
```
{'target': -0.6500334282952827, 'params': {'dropout': 0.12771198428037775, 'lr': 0.0074010841641111965, 'neuronPct': 0.10774655638231533, 'neuronShrink': 0.2784788676498257}}
# Basic Bayesian Linear Regression Implementation
```
# Pandas and numpy for data manipulation
import pandas as pd
import numpy as np
# Matplotlib and seaborn for visualization
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Linear Regression to verify implementation
from sklearn.linear_model import LinearRegression
# Scipy for statistics
import scipy
# PyMC3 for Bayesian Inference
import pymc3 as pm
```
# Load in Exercise Data
```
exercise = pd.read_csv('data/exercise.csv')
calories = pd.read_csv('data/calories.csv')
df = pd.merge(exercise, calories, on = 'User_ID')
df = df[df['Calories'] < 300]
df = df.reset_index()
df['Intercept'] = 1
df.head()
```
# Plot Relationship
```
plt.figure(figsize=(8, 8))
plt.plot(df['Duration'], df['Calories'], 'bo');
plt.xlabel('Duration (min)', size = 18); plt.ylabel('Calories', size = 18);
plt.title('Calories burned vs Duration of Exercise', size = 20);
# Create the features and response
X = df.loc[:, ['Intercept', 'Duration']]
y = df.loc[:, 'Calories']  # .loc instead of the removed .ix accessor
```
# Implement Ordinary Least Squares Linear Regression by Hand
```
# Takes a matrix of features (with intercept as first column)
# and response vector and calculates linear regression coefficients
def linear_regression(X, y):
# Equation for linear regression coefficients
beta = np.matmul(np.matmul(np.linalg.inv(np.matmul(X.T, X)), X.T), y)
return beta
# Run the by hand implementation
by_hand_coefs = linear_regression(X, y)
print('Intercept calculated by hand:', by_hand_coefs[0])
print('Slope calculated by hand: ', by_hand_coefs[1])
xs = np.linspace(4, 31, 1000)
ys = by_hand_coefs[0] + by_hand_coefs[1] * xs
plt.figure(figsize=(8, 8))
plt.plot(df['Duration'], df['Calories'], 'bo', label = 'observations', alpha = 0.8);
plt.xlabel('Duration (min)', size = 18); plt.ylabel('Calories', size = 18);
plt.plot(xs, ys, 'r--', label = 'OLS Fit', linewidth = 3)
plt.legend(prop={'size': 16})
plt.title('Calories burned vs Duration of Exercise', size = 20);
```
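As an aside, the same coefficients can be computed without forming an explicit matrix inverse, which is numerically more stable. A small sketch using the `X` and `y` defined above:
```
# equivalent least-squares solution without an explicit inverse
coefs_lstsq, *_ = np.linalg.lstsq(X.values, y.values, rcond=None)
print('Intercept via lstsq:', coefs_lstsq[0])
print('Slope via lstsq:', coefs_lstsq[1])
```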
## Prediction for Datapoint
```
print('Exercising for 15.5 minutes will burn an estimated {:.2f} calories.'.format(
by_hand_coefs[0] + by_hand_coefs[1] * 15.5))
```
# Verify with Scikit-learn Implementation
```
# Create the model and fit on the data
lr = LinearRegression()
lr.fit(X.Duration.values.reshape(-1, 1), y)
print('Intercept from library:', lr.intercept_)
print('Slope from library:', lr.coef_[0])
```
# Bayesian Linear Regression
### PyMC3 for Bayesian Inference
Implement MCMC to find the posterior distribution of the model parameters. Rather than a single point estimate of the model weights, Bayesian linear regression will give us a posterior distribution for the model weights.
## Model with 500 Observations
```
with pm.Model() as linear_model_500:
# Intercept
intercept = pm.Normal('Intercept', mu = 0, sd = 10)
# Slope
slope = pm.Normal('slope', mu = 0, sd = 10)
# Standard deviation
sigma = pm.HalfNormal('sigma', sd = 10)
# Estimate of mean
mean = intercept + slope * X.loc[0:499, 'Duration']
# Observed values
Y_obs = pm.Normal('Y_obs', mu = mean, sd = sigma, observed = y.values[0:500])
# Sampler
step = pm.NUTS()
# Posterior distribution
linear_trace_500 = pm.sample(1000, step)
```
## Model with all Observations
```
with pm.Model() as linear_model:
# Intercept
intercept = pm.Normal('Intercept', mu = 0, sd = 10)
# Slope
slope = pm.Normal('slope', mu = 0, sd = 10)
# Standard deviation
sigma = pm.HalfNormal('sigma', sd = 10)
# Estimate of mean
mean = intercept + slope * X.loc[:, 'Duration']
# Observed values
Y_obs = pm.Normal('Y_obs', mu = mean, sd = sigma, observed = y.values)
# Sampler
step = pm.NUTS()
# Posterior distribution
linear_trace = pm.sample(1000, step)
```
# Bayesian Model Results
The Bayesian Model provides more opportunities for interpretation than the ordinary least squares regression because it provides a posterior distribution. We can use this distribution to find the most likely single value as well as the entire range of likely values for our model parameters.
PyMC3 has many built in tools for visualizing and inspecting model runs. These let us see the distributions and provide estimates with a level of uncertainty, which should be a necessary part of any model.
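For example, a point estimate and a 95% credible interval for the slope can be pulled directly from the trace. A minimal sketch, assuming `linear_trace` from the model above:
```
slope_samples = linear_trace['slope']
print('Posterior mean slope:', slope_samples.mean())
print('95% credible interval:', np.percentile(slope_samples, [2.5, 97.5]))
```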
## Trace of All Model Parameters
```
pm.traceplot(linear_trace, figsize = (12, 12));
```
## Posterior Distribution of Model Parameters
```
pm.plot_posterior(linear_trace, figsize = (12, 10), text_size = 20);
```
## Confidence Intervals for Model Parameters
```
pm.forestplot(linear_trace);
```
# Predictions of Response Sampled from the Posterior
We can now generate predictions of the linear regression line using the model results. The following plot shows 1000 different estimates of the regression line drawn from the posterior. The distribution of the lines gives an estimate of the uncertainty in the estimate. Bayesian Linear Regression has the benefit that it gives us a posterior __distribution__ rather than a __single point estimate__ in the frequentist ordinary least squares regression.
## All Observations
```
plt.figure(figsize = (8, 8))
pm.plot_posterior_predictive_glm(linear_trace, samples = 100, eval=np.linspace(2, 30, 100), linewidth = 1,
color = 'red', alpha = 0.8, label = 'Bayesian Posterior Fits',
lm = lambda x, sample: sample['Intercept'] + sample['slope'] * x);
plt.scatter(X['Duration'], y.values, s = 12, alpha = 0.8, c = 'blue', label = 'Observations')
plt.plot(X['Duration'], by_hand_coefs[0] + X['Duration'] * by_hand_coefs[1], 'k--', label = 'OLS Fit', linewidth = 1.4)
plt.title('Posterior Predictions with all Observations', size = 20); plt.xlabel('Duration (min)', size = 18);
plt.ylabel('Calories', size = 18);
plt.legend(prop={'size': 16});
pm.df_summary(linear_trace)
```
## Limited Observations
```
plt.figure(figsize = (8, 8))
pm.plot_posterior_predictive_glm(linear_trace_500, samples = 100, eval=np.linspace(2, 30, 100), linewidth = 1,
color = 'red', alpha = 0.8, label = 'Bayesian Posterior Fits',
lm = lambda x, sample: sample['Intercept'] + sample['slope'] * x);
plt.scatter(X['Duration'][:500], y.values[:500], s = 12, alpha = 0.8, c = 'blue', label = 'Observations')
plt.plot(X['Duration'], by_hand_coefs[0] + X['Duration'] * by_hand_coefs[1], 'k--', label = 'OLS Fit', linewidth = 1.4)
plt.title('Posterior Predictions with Limited Observations', size = 20); plt.xlabel('Duration (min)', size = 18);
plt.ylabel('Calories', size = 18);
plt.legend(prop={'size': 16});
pm.df_summary(linear_trace_500)
```
# Specific Prediction for One Datapoint
```
bayes_prediction = linear_trace['Intercept'] + linear_trace['slope'] * 15.5
plt.figure(figsize = (8, 8))
plt.style.use('fivethirtyeight')
sns.kdeplot(bayes_prediction, label = 'Bayes Posterior Prediction')
plt.vlines(x = by_hand_coefs[0] + by_hand_coefs[1] * 15.5,
ymin = 0, ymax = 2.5,
label = 'OLS Prediction',
colors = 'red', linestyles='--')
plt.legend();
plt.xlabel('Calories Burned', size = 18), plt.ylabel('Probability Density', size = 18);
plt.title('Posterior Prediction for 15.5 Minutes', size = 20);
```
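Since the prediction itself is a distribution of posterior samples, we can also report an uncertainty range around it. Below is a minimal sketch (reusing the `bayes_prediction` samples computed above; the 95% bounds are just one reasonable choice):
```
# Point estimate and 95% credible interval for calories burned in 15.5 minutes,
# computed from the posterior samples generated above
lower, upper = np.percentile(bayes_prediction, [2.5, 97.5])
print('Most likely estimate: {:.1f} calories'.format(bayes_prediction.mean()))
print('95% credible interval: {:.1f} to {:.1f} calories'.format(lower, upper))
```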
# Text recognition
We have a set of water meter images. We need to get each water meter’s readings. We ask performers to look at the images and write down the digits on each water meter.
To get acquainted with Toloka tools for free, you can use the promo code **TOLOKAKIT1** for $20 on your [profile page](https://toloka.yandex.com/requester/profile?utm_source=github&utm_medium=site&utm_campaign=tolokakit) after registration.
Prepare the environment and import everything we'll need.
```
!pip install toloka-kit==0.1.15
!pip install crowd-kit==0.0.7
!pip install ipyplot
import datetime
import os
import sys
import time
import logging
import ipyplot
import pandas
import numpy as np
import toloka.client as toloka
import toloka.client.project.template_builder as tb
from crowdkit.aggregation import ROVER
logging.basicConfig(
format='[%(levelname)s] %(name)s: %(message)s',
level=logging.INFO,
stream=sys.stdout,
)
```
Create a toloka-client instance. All API calls will go through it. Read more about the OAuth token in our [Learn the basics example](https://github.com/Toloka/toloka-kit/tree/main/examples/0.getting_started/0.learn_the_basics) [](https://colab.research.google.com/github/Toloka/toloka-kit/blob/main/examples/0.getting_started/0.learn_the_basics/learn_the_basics.ipynb)
```
toloka_client = toloka.TolokaClient(input("Enter your token:"), 'PRODUCTION') # Or switch to 'SANDBOX'
logging.info(toloka_client.get_requester())
```
## Creating new project
Enter a clear project name and description.
> The project name and description will be visible to the performers.
```
project = toloka.Project(
public_name='Write down the digits in an image',
public_description='Look at the image and write down the digits shown on the water meter.',
)
```
Create task interface.
- Read about configuring the [task interface](https://toloka.ai/docs/guide/reference/interface-spec.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
- Check the [Interfaces section](https://toloka.ai/knowledgebase/interface?utm_source=github&utm_medium=site&utm_campaign=tolokakit) of our Knowledge Base for more tips on interface design.
- Read more about the [Template builder](https://toloka.ai/docs/template-builder/index.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
```
header_viewer = tb.MarkdownViewV1("""1. Look at the image
2. Find boxes with the numbers
3. Write down the digits in black section. (Put '0' if there are no digits there)
4. Put '.'
5. Write down the digits in red section""")
image_viewer = tb.ImageViewV1(tb.InputData('image_url'), rotatable=True)
output_field = tb.TextFieldV1(
tb.OutputData('value'),
label='Write down the digits. Format: 365.235',
placeholder='Enter value',
hint="Make sure your format of number is '365.235' or '0.112'",
validation=tb.SchemaConditionV1(
schema={
'type': 'string',
'pattern': r'^\d+\.?\d{0,3}$',
'minLength': 1,
'maxLength': 9,
}
)
)
task_width_plugin = tb.TolokaPluginV1('scroll', task_width=600)
project_interface = toloka.project.TemplateBuilderViewSpec(
view=tb.ListViewV1([header_viewer, image_viewer, output_field]),
plugins=[task_width_plugin],
)
```
Set data specification. And set task interface to project.
> Specifications are a description of input data that will be used in a project and the output data that will be collected from the performers.
Read more about [input and output data specifications](https://yandex.ru/support/toloka-tb/operations/create-specs.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
```
input_specification = {'image_url': toloka.project.UrlSpec()}
output_specification = {'value': toloka.project.StringSpec()}
project.task_spec = toloka.project.task_spec.TaskSpec(
input_spec=input_specification,
output_spec=output_specification,
view_spec=project_interface,
)
```
Write short and clear instructions.
> Though the task itself is simple, be sure to add examples for non-obvious cases (like when there are no red digits on an image). This helps to eliminate noise in the labels.
Get more tips on designing [instructions](https://toloka.ai/knowledgebase/instruction?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base.
```
project.public_instructions = """This task is to solve a machine learning problem of digit recognition on images.<br>
The more precisely you read the information from the image, the more precise the algorithm will be.<br>
Your contribution here is to extract exact information even in complicated and uncertain cases.<br>
We are counting on your skills to help solve an important scientific problem.<br><br>
<b>Basic steps:</b><br>
<ul><li>Look at the image and find meter with the numbers in the boxes</li>
<li>Find black numbers/section and red numbers/section</li>
<li>Put black and red numbers separated with '.' to text field</li></ul>"""
```
Create a project.
```
project = toloka_client.create_project(project)
```
## Preparing data
This example uses [Toloka WaterMeters](https://toloka.ai/datasets?utm_source=github&utm_medium=site&utm_campaign=tolokakit) dataset collected by Roman Kucev.
```
!curl https://s3.mds.yandex.net/tlk/dataset/TlkWaterMeters/data.tsv --output data.tsv
raw_dataset = pandas.read_csv('data.tsv', sep='\t', dtype={'value': 'str'})
raw_dataset = raw_dataset[['image_url', 'value']]
with pandas.option_context("max_colwidth", 100):
display(raw_dataset)
```
Let's look at the images from this dataset:
<table align="center">
<tr>
<td>
<img src="https://tlk.s3.yandex.net/dataset/TlkWaterMeters/images/id_53_value_595_825.jpg" alt="value 595.825">
</td>
<td>
<img src="https://tlk.s3.yandex.net/dataset/TlkWaterMeters/images/id_553_value_65_475.jpg" alt="value 65.475">
</td>
<td>
<img src="https://tlk.s3.yandex.net/dataset/TlkWaterMeters/images/id_407_value_21_86.jpg" alt="value 21.860">
</td>
</tr>
<tr><td align="center" colspan="3">
<b>Figure 1.</b> Images from dataset
</td></tr>
</table>
Split this dataset into three parts:
- Training tasks - we'll put them into the training pool. This type of task must contain the ground truth and a hint about how to perform it.
- Golden tasks - we'll put them into the regular pool. This type of task must contain the ground truth.
- Regular tasks - for the regular pool. Only the image URL is used as input.
```
raw_dataset = raw_dataset.sample(frac=1).reset_index(drop=True)
training_dataset, golden_dataset, main_dataset, _ = np.split(raw_dataset, [10, 20, 120], axis=0)
print(f'training_dataset - {len(training_dataset)}')
print(f'golden_dataset - {len(golden_dataset)}')
print(f'main_dataset - {len(main_dataset)}')
```
## Create a training pool
> Training is an essential part of almost every crowdsourcing project. It allows you to select performers who have really mastered the task, and thus improve quality. Training is also a great tool for scaling your task because you can run it any time you need new performers.
Read more about [selecting performers](https://toloka.ai/knowledgebase/quality-control?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base.
```
training = toloka.Training(
project_id=project.id,
private_name='Text recognition training',
may_contain_adult_content=False,
assignment_max_duration_seconds=60*10,
mix_tasks_in_creation_order=False,
shuffle_tasks_in_task_suite=False,
training_tasks_in_task_suite_count=2,
task_suites_required_to_pass=5,
retry_training_after_days=5,
inherited_instructions=True,
)
training = toloka_client.create_training(training)
```
Upload training tasks to the pool.
> It’s important to include examples for all cases in the training. Make sure the training set is balanced and the comments explain why an answer is correct. Don’t just name the correct answers.
```
training_tasks = [
toloka.Task(
pool_id=training.id,
input_values={'image_url': row.image_url},
known_solutions = [toloka.task.BaseTask.KnownSolution(output_values={'value': row.value})],
message_on_unknown_solution=f'Black section is {row.value.split(".")[0]}. Red section is {row.value.split(".")[1]}.',
)
for row in training_dataset.itertuples()
]
result = toloka_client.create_tasks(training_tasks, allow_defaults=True)
print(len(result.items))
```
## Create the main pool
A pool is a set of paid tasks grouped into task pages. These tasks are sent out for completion at the same time.
> All tasks within a pool have the same settings (price, quality control, etc.)
```
pool = toloka.Pool(
project_id=project.id,
# Give the pool any convenient name. You are the only one who will see it.
private_name='Write down the digits in an image.',
may_contain_adult_content=False,
# Set the price per task page.
reward_per_assignment=0.02,
will_expire=datetime.datetime.utcnow() + datetime.timedelta(days=365),
# Overlap. This is the number of users who will complete the same task.
defaults=toloka.Pool.Defaults(default_overlap_for_new_task_suites=3),
# Time allowed for completing a task page
assignment_max_duration_seconds=600,
)
```
- Read more about [pricing principles](https://toloka.ai/knowledgebase/pricing?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base.
- To understand [how overlap works](https://toloka.ai/docs/guide/concepts/mvote.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit), go to the Requester’s Guide.
- To understand how much time it should take to complete a task suite, try doing it yourself.
Attach the training you created earlier and select the accuracy level that is required to reach the main pool.
```
pool.set_training_requirement(training_pool_id=training.id, training_passing_skill_value=75)
```
Select English-speaking performers
```
pool.filter = toloka.filter.Languages.in_('EN')
```
Set up [Quality control](https://toloka.ai/docs/guide/concepts/control.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit). Ban performers who give incorrect responses to control tasks.
> Since tasks such as these have an answer that can be used as ground truth, we can use standard quality control rules like golden sets.
Read more about [quality control principles](https://toloka.ai/knowledgebase/quality-control?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base or check out [control tasks settings](https://toloka.ai/docs/guide/concepts/goldenset.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
```
pool.quality_control.add_action(
collector=toloka.collectors.GoldenSet(),
conditions=[
toloka.conditions.GoldenSetCorrectAnswersRate < 80.0,
toloka.conditions.GoldenSetAnswersCount >= 3
],
action=toloka.actions.RestrictionV2(
scope='PROJECT',
duration=2,
duration_unit='DAYS',
private_comment='Control tasks failed'
)
)
pool.quality_control.add_action(
collector=toloka.collectors.AssignmentSubmitTime(history_size=5, fast_submit_threshold_seconds=7),
conditions=[toloka.conditions.FastSubmittedCount >= 1],
action=toloka.actions.RestrictionV2(
scope='PROJECT',
duration=2,
duration_unit='DAYS',
private_comment='Fast response'
))
```
Specify the number of tasks per page. For example: 3 main tasks and 1 control task.
> We recommend putting as many tasks on one page as a performer can complete in 1 to 5 minutes. That way, performers are less likely to get tired, and they won’t lose a significant amount of data if a technical issue occurs.
To learn more about [grouping tasks](https://toloka.ai/docs/search/?utm_source=github&utm_medium=site&utm_campaign=tolokakit&query=smart+mixing) into suites, read the Requester’s Guide.
```
pool.set_mixer_config(
real_tasks_count=3,
golden_tasks_count=1
)
```
Create pool
```
pool = toloka_client.create_pool(pool)
```
**Uploading tasks**
Create control tasks. In small pools, control tasks should account for 10–20% of all tasks.
> Control tasks are tasks that already contain the correct response. They are used for checking the quality of responses from performers. The performer's response is compared to the response you provided. If they match, it means the performer answered correctly.
> Make sure to include different variations of correct responses in equal amounts.
To learn more about [creating control tasks](https://toloka.ai/docs/guide/concepts/task_markup.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit), go to the Requester’s Guide.
```
golden_tasks = [
toloka.Task(
pool_id=pool.id,
input_values={'image_url': row.image_url},
known_solutions = [
toloka.task.BaseTask.KnownSolution(
output_values={'value': row.value}
)
],
infinite_overlap=True,
)
for row in golden_dataset.itertuples()
]
```
Create pool tasks
```
tasks = [
toloka.Task(
pool_id=pool.id,
input_values={'image_url': url},
)
for url in main_dataset['image_url']
]
```
Upload tasks
```
created_tasks = toloka_client.create_tasks(golden_tasks + tasks, allow_defaults=True)
print(len(created_tasks.items))
```
You can visit the created pool in the web interface and preview the tasks and control tasks.
<table align="center">
<tr>
<td>
<img src="./img/performer_interface.png" alt="Possible performer interface">
</td>
</tr>
<tr><td align="center">
<b>Figure 2.</b> Possible performer interface.
</td></tr>
</table>
Start the pool.
**Important.** Remember that real Toloka performers will complete the tasks. Double-check that your project configuration is correct before you start the pool.
```
training = toloka_client.open_training(training.id)
print(f'training - {training.status}')
pool = toloka_client.open_pool(pool.id)
print(f'main pool - {pool.status}')
```
## Receiving responses
Wait until the pool is completed.
```
pool_id = pool.id
def wait_pool_for_close(pool_id, minutes_to_wait=1):
sleep_time = 60 * minutes_to_wait
pool = toloka_client.get_pool(pool_id)
while not pool.is_closed():
op = toloka_client.get_analytics([toloka.analytics_request.CompletionPercentagePoolAnalytics(subject_id=pool.id)])
op = toloka_client.wait_operation(op)
percentage = op.details['value'][0]['result']['value']
logging.info(
f' {datetime.datetime.now().strftime("%H:%M:%S")}\t'
f'Pool {pool.id} - {percentage}%'
)
time.sleep(sleep_time)
pool = toloka_client.get_pool(pool.id)
logging.info('Pool was closed.')
wait_pool_for_close(pool_id)
```
Get responses
When all the tasks are completed, look at the responses from performers.
```
answers = []
for assignment in toloka_client.get_assignments(pool_id=pool.id, status='ACCEPTED'):
for task, solution in zip(assignment.tasks, assignment.solutions):
if not task.known_solutions:
answers.append([task.input_values['image_url'], solution.output_values['value'], assignment.user_id])
print(f'answers count: {len(answers)}')
# Prepare dataframe
answers_df = pandas.DataFrame(answers, columns=['task', 'text', 'performer'])
```
Aggregate the results using the ROVER model implemented in [Crowd-Kit](https://github.com/Toloka/crowd-kit#crowd-kit-computational-quality-control-for-crowdsourcing).
```
rover_agg_df = ROVER(tokenizer=lambda x: list(x), detokenizer=lambda x: ''.join(x)).fit_predict(answers_df)
```
Look at the results.
Some preparations for displaying the results
```
images = rover_agg_df.index.values
labels = rover_agg_df.values
start_with = 0
```
Note: The cell below can be run several times.
```
if start_with >= len(rover_agg_df):
logging.info('no more images')
else:
ipyplot.plot_images(
images=images[start_with:],
labels=labels[start_with:],
max_images=8,
img_width=300,
)
start_with += 8
```
You can see the labeled images. Some possible results are shown in Figure 3 below.
<table align="center">
<tr><td>
<img src="./img/possible_result.png"
alt="Possible results">
</td></tr>
<tr><td align="center">
<b>Figure 3.</b> Possible results.
</td></tr>
</table>
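Because the regular tasks were drawn from a dataset that already contains the true readings, we can sketch a rough quality check of the aggregated answers. This assumes `main_dataset` (with its `value` column) is still in memory and uses strict exact-string matching, so treat the number as a lower bound:
```
# Compare the aggregated answers with the known readings from the source dataset
ground_truth = main_dataset.set_index('image_url')['value']
comparison = ground_truth.to_frame('true_value').join(
    rover_agg_df.rename('aggregated'), how='inner'
)
accuracy = (comparison['true_value'] == comparison['aggregated']).mean()
print(f'exact-match accuracy on {len(comparison)} tasks: {accuracy:.1%}')
```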
# Density estimation demo
Here we demonstrate how to use the ``inference.pdf`` module for estimating univariate probability density functions from sample data.
```
from numpy import linspace, zeros, exp, log, sqrt, pi
from numpy.random import normal, exponential
from scipy.special import erfc
import matplotlib.pyplot as plt
```
## Kernel-density estimation
Gaussian kernel-density estimation is implemented via the `GaussianKDE` class:
```
# generate some sample data to use as a test-case
N = 150000
sample = zeros(N)
sample[:N//3] = normal(size=N//3)*0.5 + 1.8
sample[N//3:] = normal(size=2*(N//3))*0.5 + 3.5
# GaussianKDE takes an array of sample values as its only argument
from inference.pdf import GaussianKDE
PDF = GaussianKDE(sample)
```
Instances of density estimator classes like `GaussianKDE` can be called as functions to return the estimate of the PDF at given spatial points:
```
x = linspace(0, 6, 1000) # make an axis on which to evaluate the PDF estimate
p = PDF(x) # call the instance to get the estimate
```
We could plot the estimate manually, but for convenience the `plot_summary()` method will generate a plot automatically as well as summary statistics:
```
PDF.plot_summary()
```
The summary statistics can be accessed via properties or methods:
```
# the location of the mode is a property
mode = PDF.mode
# The highest-density interval for any fraction of total probability is returned by the interval() method
hdi_95 = PDF.interval(frac = 0.95)
# the mean, variance, skewness and excess kurtosis are returned by the moments() method:
mean, variance, skewness, kurtosis = PDF.moments()
```
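For example, we can simply print these quantities (a small sketch; the exact structure of the returned interval depends on the `interval()` method):
```
# Display the summary statistics extracted above
print("mode:", mode)
print("95% highest-density interval:", hdi_95)
print("mean:", mean)
print("variance:", variance)
print("skewness:", skewness)
print("excess kurtosis:", kurtosis)
```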
By default, `GaussianKDE` uses a simple, easy-to-compute estimate of the bandwidth (the standard deviation of each Gaussian kernel).
However, when estimating strongly non-normal distributions, this simple approach will over-estimate the required bandwidth.
In these cases, the cross-validation bandwidth selector can be used to obtain better results, at the cost of more computation.
```
# to demonstrate, lets create a new sample:
N = 30000
sample = zeros(N)
sample[:N//3] = normal(size=N//3)
sample[N//3:] = normal(size=2*(N//3)) + 10
# now construct estimators using the simple and cross-validation estimators
pdf_simple = GaussianKDE(sample)
pdf_crossval = GaussianKDE(sample, cross_validation = True)
# now build an axis on which to evaluate the estimates
x = linspace(-4,14,500)
# for comparison also compute the real distribution
exact = (exp(-0.5*x**2)/3 + 2*exp(-0.5*(x-10)**2)/3)/sqrt(2*pi)
# plot everything together
plt.plot(x, pdf_simple(x), label = 'simple')
plt.plot(x, pdf_crossval(x), label = 'cross-validation')
plt.plot(x, exact, label = 'exact')
plt.ylabel('probability density')
plt.xlabel('x')
plt.grid()
plt.legend()
plt.show()
```
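To put a rough number on the visual difference, we can compare each estimate against the exact density evaluated on the same axis (a quick sketch reusing the arrays from the cell above):
```
# Mean absolute deviation of each estimate from the exact density
mae_simple = abs(pdf_simple(x) - exact).mean()
mae_crossval = abs(pdf_crossval(x) - exact).mean()
print(f"simple bandwidth estimate:           MAE = {mae_simple:.4f}")
print(f"cross-validation bandwidth estimate: MAE = {mae_crossval:.4f}")
```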
## Functional density estimation for unimodal PDFs
If we know that the distribution being estimated is a single (but potentially highly skewed) peak, the `UnimodalPdf` class can robustly estimate the PDF even at smaller sample sizes. It works by fitting a heavily modified Student-t distribution to the sample data.
```
# Create some samples from the exponentially-modified Gaussian distribution
L = 0.3 # decay constant of the exponential distribution
sample = normal(size = 3000) + exponential(scale = 1./L, size = 3000)
# create an instance of the density estimator
from inference.pdf import UnimodalPdf
PDF = UnimodalPdf(sample)
# plot the estimate along with the exact PDF for comparison
x = linspace(-5, 15, 1000)
exact = 0.5*L*exp(0.5*L*(L-2*x))*erfc((L-x)/sqrt(2)) # exact PDF for the exp-gaussian distribution
plt.plot(x, PDF(x), label = 'UnimodalPdf estimate', lw = 3)
plt.plot(x, exact, label = 'exact distribution', ls = 'dashed', lw = 3)
plt.ylabel('probability density')
plt.xlabel('x')
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
```
# Assignment 3: RTRL
Implement an RNN trained with RTRL. The ds/dW partial derivative is stored as a 2D array of shape `(self.n_hidden, self.n_hidden * self.n_input)` instead of a 3D tensor.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
class RNN(object):
def __init__(self, n_input, n_hidden, n_output):
# init weights and biases
self.n_input = n_input
self.n_hidden = n_hidden
self.n_output = n_output
self.W = np.random.normal(scale=0.1, size=(n_hidden, n_input))
self.R = np.eye(n_hidden)
self.V = np.random.normal(scale=0.1, size=(n_output, n_hidden))
self.bh = np.zeros((n_hidden, 1))
self.bo = np.zeros((n_output, 1))
self.grad = {}
self.reset()
def reset(self):
# init hidden activation
self.s = np.zeros((self.n_hidden, 1))
self.a = np.zeros((self.n_hidden, 1))
# init buffers for recursive gradients
self.ds_dW = np.zeros((self.n_hidden, self.n_hidden * self.n_input))
self.ds_dR = np.zeros((self.n_hidden, self.n_hidden * self.n_hidden))
self.ds_db = np.zeros((self.n_hidden, self.n_hidden))
def forward(self, x):
assert x.shape[1] == self.n_input
assert len(x.shape) == 2
"""your code goes here, method must return model's prediction"""
# partial derivative for accumulation. this is the R * f' * f that can be reused
der = self.R * np.tile(1-self.a**2, self.n_hidden)
# accumulate gradients
self.ds_dW = der @ self.ds_dW + np.kron(np.eye(self.n_hidden), x)
self.ds_dR = der @ self.ds_dR + np.kron(np.eye(self.n_hidden), self.a.T)
self.ds_db = der @ self.ds_db + np.eye(self.n_hidden)
# do regular 1 step forward pass
self.s = self.W @ x.T + self.R @ self.a + self.bh
self.a = np.tanh(self.s) # can be reused in backward pass
return (self.V @ self.a + self.bo).T
def backward(self, y_hat, y):
assert y_hat.shape[1] == self.n_output
assert len(y_hat.shape) == 2
assert y_hat.shape == y.shape, f"shape mismatch {y_hat.shape} {y.shape}"
e = (y_hat - y).T # error == derivative{L}/derivative{s} == dL_dy
dL_ds = ((self.V.T @ e) * (1 - self.a**2)) # transposed to fit shape
# 1:1 copy from ex1, only depend on error
self.grad["bo"] = e
self.grad["V"] = e @ self.a.T
# collect new gradients
self.grad["W"] = (self.ds_dW.T @ dL_ds).reshape(self.W.shape)
self.grad["R"] = (self.ds_dR.T @ dL_ds).reshape(self.R.shape).T
self.grad["bh"]= self.ds_db.T @ dL_ds
# compute loss (halved squared error)
return np.sum(0.5 * (y - y_hat)**2)
def fast_forward(self, x_seq):
# this is a forward pass without gradient computation for gradient checking
s = np.zeros_like(self.s)
for x in x_seq:
s = self.W @ x.reshape(*x.shape, 1) + self.R.T @ np.tanh(s) + self.bh
return self.V @ np.tanh(s) + self.bo
def gradient_check(self, x, y, eps=1e-5, thresh=1e-5, verbose=True):
for name, ga in self.grad.items():
if verbose:
print("weight\t",name)
gn = np.zeros_like(ga)
w = self.__dict__[name]
for idx, w_orig in np.ndenumerate(w):
w[idx] = w_orig + eps/2
hi = np.sum(0.5 * (y - self.fast_forward(x))**2)
w[idx] = w_orig - eps/2
lo = np.sum(0.5 * (y - self.fast_forward(x))**2)
w[idx] = w_orig
gn[idx] = (hi - lo) / eps
dev = abs(gn[idx] - ga[idx])
if verbose: # extended error
print(f"numeric {gn[idx]}\tanalytic {ga[idx]}\tdeviation {dev}")
assert dev < thresh
def update(self, eta):
# update weights
for name, grad in self.grad.items():
self.__dict__[name] -= eta * grad
def generate_samples(seq_length, batch_size, input_size):
while True:
x = np.random.uniform(low=-1, high=1, size=(seq_length, batch_size, input_size))
y = x[0,:,:]
yield x, y
def check_gradients():
rnn = RNN(2, 5, 2)
data = generate_samples(seq_length=10, batch_size=1, input_size=2)
for i, (x, y) in zip(range(1), data):
rnn.reset()
for x_t in x:
y_hat = rnn.forward(x_t)
rnn.backward(y_hat, y)
rnn.gradient_check(x, y.T)
check_gradients()
```
# Train the network and plot the learning curve
```
def train():
iter_steps = 15000
lr = 1e-2
seq_length = 5
rnn = RNN(1, 10, 1)
data = generate_samples(seq_length=seq_length, batch_size=1, input_size=1)
loss = []
for i, (x, y) in zip(range(iter_steps), data):
rnn.reset()
for x_t in x:
y_hat = rnn.forward(x_t)
loss.append(rnn.backward(y_hat, y))
rnn.update(lr)
# plot learning curve
plt.title('sequence length %d' % seq_length)
plt.plot(range(len(loss)), loss)
plt.show()
train()
```
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# Getting Started with Qiskit
Here, we provide an overview of working with Qiskit. Qiskit provides the basic building blocks necessary to program quantum computers. The basic concept of Qiskit is an array of quantum circuits. A workflow using Qiskit consists of two stages: **Build** and **Execute**. **Build** allows you to make different quantum circuits that represent the problem you are solving, and **Execute** allows you to run them on different backends. After the jobs have been run, the data is collected. There are methods for putting this data together, depending on the program. This either gives you the answer you wanted, or allows you to make a better program for the next instance.
**Contents**
[Circuit basics](#circuit_basics)
[Simulating circuits with Qiskit Aer](#aer_simulation)
[Running circuits using the IBMQ provider](#ibmq_provider)
**Code imports**
```
import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
```
## Circuit Basics <a id='circuit_basics'></a>
### Building the circuit
The basic elements needed for your first program are the QuantumCircuit, and QuantumRegister.
```
# Create a Quantum Register with 3 qubits.
q = QuantumRegister(3, 'q')
# Create a Quantum Circuit acting on the q register
circ = QuantumCircuit(q)
```
<div class="alert alert-block alert-info">
<b>Note:</b> Naming the QuantumRegister is optional and not required.
</div>
After you create the circuit with its registers, you can add gates ("operations") to manipulate the registers. As you proceed through the documentation you will find more gates and circuits; the below is an example of a quantum circuit that makes a three-qubit GHZ state
$$|\psi\rangle = \left(|000\rangle+|111\rangle\right)/\sqrt{2}.$$
To create such a state, we start with a 3-qubit quantum register. By default, each qubit in the register is initialized to $|0\rangle$. To make the GHZ state, we apply the following gates:
* A Hadamard gate $H$ on qubit 0, which puts it into a superposition state.
* A controlled-Not operation ($C_{X}$) between qubit 0 and qubit 1.
* A controlled-Not operation between qubit 0 and qubit 2.
On an ideal quantum computer, the state produced by running this circuit would be the GHZ state above.
In Qiskit, operations can be added to the circuit one-by-one, as shown below.
```
# Add a H gate on qubit 0, putting this qubit in superposition.
circ.h(q[0])
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
# the qubits in a Bell state.
circ.cx(q[0], q[1])
# Add a CX (CNOT) gate on control qubit 0 and target qubit 2, putting
# the qubits in a GHZ state.
circ.cx(q[0], q[2])
```
## Visualize Circuit
You can visualize your circuit using Qiskit `QuantumCircuit.draw()`, which plots the circuit in the form found in many textbooks.
```
circ.draw()
```
In this circuit, the qubits are put in order with qubit zero at the top and qubit two at the bottom. The circuit is read left-to-right (meaning that gates which are applied earlier in the circuit show up further to the left).
## Simulating circuits using Qiskit Aer <a id='aer_simulation'></a>
Qiskit Aer is our package for simulating quantum circuits. It provides many different backends for doing a simulation. Here we use the basic python version.
### Statevector backend
The most common backend in Qiskit Aer is the `statevector_simulator`. This simulator returns the quantum
state which is a complex vector of dimensions $2^n$ where $n$ is the number of qubits
(so be careful using this as it will quickly get too large to run on your machine).
<div class="alert alert-block alert-info">
When representing the state of a multi-qubit system, the tensor order used in qiskit is different from that used in most physics textbooks. Suppose there are $n$ qubits, and qubit $j$ is labeled as $Q_{j}$. In most textbooks (such as Nielsen and Chuang's "Quantum Computation and Information"), the basis vectors for the $n$-qubit state space would be labeled as $Q_{0}\otimes Q_{1} \otimes \cdots \otimes Q_{n}$. **This is not the ordering used by qiskit!** Instead, qiskit uses an ordering in which the $n^{\mathrm{th}}$ qubit is on the <em><strong>left</strong></em> side of the tensor product, so that the basis vectors are labeled as $Q_n\otimes \cdots \otimes Q_1\otimes Q_0$.
For example, if qubit zero is in state 0, qubit 1 is in state 0, and qubit 2 is in state 1, qiskit would represent this state as $|100\rangle$, whereas most physics textbooks would represent it as $|001\rangle$.
This difference in labeling affects the way multi-qubit operations are represented as matrices. For example, qiskit represents a controlled-X ($C_{X}$) operation with qubit 0 being the control and qubit 1 being the target as
$$C_X = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\\end{pmatrix}.$$
</div>
To run the above circuit using the statevector simulator, first you need to import Aer and then set the backend to `statevector_simulator`.
```
# Import Aer
from qiskit import BasicAer
# Run the quantum circuit on a statevector simulator backend
backend = BasicAer.get_backend('statevector_simulator')
```
Now that we have chosen the backend, it's time to compile and run the quantum circuit. In Qiskit we provide the `execute` function for this. ``execute`` returns a ``job`` object that encapsulates information about the job submitted to the backend.
<div class="alert alert-block alert-info">
<b>Tip:</b> You can obtain the above parameters in Jupyter. Simply place the text cursor on a function and press Shift+Tab.
</div>
```
# Create a Quantum Program for execution
job = execute(circ, backend)
```
When you run a program, a job object is made that has the following two useful methods:
`job.status()` and `job.result()` which return the status of the job and a result object respectively.
<div class="alert alert-block alert-info">
<b>Note:</b> Jobs run asynchronously but when the result method is called it switches to synchronous and waits for it to finish before moving on to another task.
</div>
```
result = job.result()
```
The results object contains the data and Qiskit provides the method
`result.get_statevector(circ)` to return the state vector for the quantum circuit.
```
outputstate = result.get_statevector(circ, decimals=3)
print(outputstate)
```
Qiskit also provides a visualization toolbox to allow you to view these results.
Below, we use the visualization function to plot the real and imaginary components of the state vector.
```
from qiskit.tools.visualization import plot_state_city
plot_state_city(outputstate)
```
### Unitary backend
Qiskit Aer also includes a `unitary_simulator` that works _provided all the elements in the circuit are unitary operations_. This backend calculates the $2^n \times 2^n$ matrix representing the gates in the quantum circuit.
```
# Run the quantum circuit on a unitary simulator backend
backend = BasicAer.get_backend('unitary_simulator')
job = execute(circ, backend)
result = job.result()
# Show the results
print(result.get_unitary(circ, decimals=3))
```
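As an aside, this unitary simulator also lets us verify the qubit-ordering convention from the note above: a $C_{X}$ with control qubit 0 and target qubit 1 should reproduce the matrix shown there (a small sketch reusing the current `backend`):
```
# Build a 2-qubit circuit with a CX controlled on qubit 0 and targeting qubit 1,
# then print its unitary matrix to verify the ordering convention
qr = QuantumRegister(2, 'qr')
cx_circ = QuantumCircuit(qr)
cx_circ.cx(qr[0], qr[1])
cx_matrix = execute(cx_circ, backend).result().get_unitary(cx_circ)
print(cx_matrix.real)
```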
### OpenQASM backend
The simulators above are useful because they provide information about the state output by the ideal circuit and the matrix representation of the circuit. However, a real experiment terminates by _measuring_ each qubit (usually in the computational $|0\rangle, |1\rangle$ basis). Without measurement, we cannot gain information about the state. Measurements cause the quantum system to collapse into classical bits.
For example, suppose we make independent measurements on each qubit of the three-qubit GHZ state
$$|\psi\rangle = \left(|000\rangle+|111\rangle\right)/\sqrt{2},$$
and let $xyz$ denote the bitstring that results. Recall that, under the qubit labeling used by Qiskit, $x$ would correspond to the outcome on qubit 2, $y$ to the outcome on qubit 1, and $z$ to the outcome on qubit 0. This representation of the bitstring puts the most significant bit (MSB) on the left, and the least significant bit (LSB) on the right. This is the standard ordering of binary bitstrings. We order the qubits in the same way, which is why Qiskit uses a non-standard tensor product order.
The probability of obtaining outcome $xyz$ is given by
$$\mathrm{Pr}(xyz) = |\langle xyz | \psi \rangle |^{2}.$$
By explicit computation, we see there are only two bitstrings that will occur: $000$ and $111$. If the bitstring $000$ is obtained, the state of the qubits is $|000\rangle$, and if the bitstring is $111$, the qubits are left in the state $|111\rangle$. The probability of obtaining 000 or 111 is the same; namely, 1/2:
$$\begin{align}
\mathrm{Pr}(000) &= |\langle 000 | \psi \rangle |^{2} = \frac{1}{2}\\
\mathrm{Pr}(111) &= |\langle 111 | \psi \rangle |^{2} = \frac{1}{2}.
\end{align}$$
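Before adding measurements, we can sanity-check these probabilities directly from the statevector obtained earlier (a short sketch reusing `outputstate`):
```
# Outcome probabilities |<xyz|psi>|^2 computed from the statevector
probabilities = np.abs(outputstate)**2
for index, p in enumerate(probabilities):
    if p > 1e-6:
        print('{0:03b}: {1:.3f}'.format(index, p))
```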
To simulate a circuit that includes measurement, we need to add measurements to the original circuit above, and use a different Aer backend.
```
# Create a Classical Register with 3 bits.
c = ClassicalRegister(3, 'c')
# Create a Quantum Circuit
meas = QuantumCircuit(q, c)
meas.barrier(q)
# map the quantum measurement to the classical bits
meas.measure(q,c)
# The Qiskit circuit object supports composition using
# the addition operator.
qc = circ+meas
#drawing the circuit
qc.draw()
```
This circuit adds a classical register, and three measurements that are used to map the outcome of qubits to the classical bits.
To simulate this circuit, we use the ``qasm_simulator`` in Qiskit Aer. Each run of this circuit will yield either the bitstring 000 or 111. To build up statistics about the distribution of the bitstrings (to, e.g., estimate $\mathrm{Pr}(000)$), we need to repeat the circuit many times. The number of times the circuit is repeated can be specified in the ``execute`` function, via the ``shots`` keyword.
```
# Use Aer's qasm_simulator
backend_sim = BasicAer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = execute(qc, backend_sim, shots=1024)
# Grab the results from the job.
result_sim = job_sim.result()
```
Once you have a result object, you can access the counts via the function `get_counts(circuit)`. This gives you the _aggregated_ binary outcomes of the circuit you submitted.
```
counts = result_sim.get_counts(qc)
print(counts)
```
Approximately 50 percent of the time the output bitstring is 000. Qiskit also provides a function `plot_histogram` which allows you to view the outcomes.
```
from qiskit.tools.visualization import plot_histogram
plot_histogram(counts)
```
The estimated outcome probabilities $\mathrm{Pr}(000)$ and $\mathrm{Pr}(111)$ are computed by taking the aggregate counts and dividing by the number of shots (times the circuit was repeated). Try changing the ``shots`` keyword in the ``execute`` function and see how the estimated probabilities change.
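For reference, the estimated probabilities can also be computed from the counts directly (a small sketch):
```
# Convert the aggregated counts into estimated outcome probabilities
total_shots = sum(counts.values())
estimated_probs = {bits: n / total_shots for bits, n in counts.items()}
print(estimated_probs)
```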
## Running circuits using the IBMQ provider <a id='ibmq_provider'></a>
To facilitate access to real quantum computing hardware, we have provided a simple API interface.
To access IBMQ devices, you'll need an API token. For the public IBM Q devices, you can generate an API token [here](https://quantumexperience.ng.bluemix.net/qx/account/advanced) (create an account if you don't already have one). For Q Network devices, login to the q-console, click your hub, group, and project, and expand "Get Access" to generate your API token and access url.
Our IBMQ provider lets you run your circuit on real devices or on our HPC simulator. Currently, this provider exists within Qiskit, and can be imported as shown below. For details on the provider, see [The IBMQ Provider](the_ibmq_provider.ipynb).
```
from qiskit import IBMQ
```
After generating your API token, call: `IBMQ.save_account('MY_TOKEN')`. For Q Network users, you'll also need to include your access url: `IBMQ.save_account('MY_TOKEN', 'URL')`
This will store your IBMQ credentials in a local file. Unless your registration information has changed, you only need to do this once. You may now load your accounts by calling,
```
IBMQ.load_accounts()
```
Once your account has been loaded, you can view the list of backends available to you.
```
print("Available backends:")
IBMQ.backends()
```
### Running circuits on real devices
Today's quantum information processors are small and noisy, but are advancing at a fast pace. They provide a great opportunity to explore what [noisy, intermediate-scale quantum (NISQ)](https://arxiv.org/abs/1801.00862) computers can do.
The IBMQ provider uses a queue to allocate the devices to users. We now choose a device with the least busy queue which can support our program (has at least 3 qubits).
```
from qiskit.providers.ibmq import least_busy
large_enough_devices = IBMQ.backends(filters=lambda x: x.configuration().n_qubits > 4 and
not x.configuration().simulator)
backend = least_busy(large_enough_devices)
print("The best backend is " + backend.name())
```
To run the circuit on the backend, we need to specify the number of shots and the number of credits we are willing to spend to run the circuit. Then, we execute the circuit on the backend using the ``execute`` function.
```
from qiskit.tools.monitor import job_monitor
shots = 1024 # Number of shots to run the program (experiment); maximum is 8192 shots.
max_credits = 3 # Maximum number of credits to spend on executions.
job_exp = execute(qc, backend=backend, shots=shots, max_credits=max_credits)
job_monitor(job_exp)
```
``job_exp`` has a ``.result()`` method that lets us get the results from running our circuit.
<div class="alert alert-block alert-info">
<b>Note:</b> When the .result() method is called, the code block will wait until the job has finished before releasing the cell.
</div>
```
result_exp = job_exp.result()
```
Like before, the counts from the execution can be obtained using ```get_counts(qc)```
```
counts_exp = result_exp.get_counts(qc)
plot_histogram([counts_exp,counts])
```
### Simulating circuits using a HPC simulator
The IBMQ provider also comes with a remote optimized simulator called ``ibmq_qasm_simulator``. This remote simulator is capable of simulating up to 32 qubits. It can be used the
same way as the remote real backends.
```
backend = IBMQ.get_backend('ibmq_qasm_simulator', hub=None)
shots = 1024 # Number of shots to run the program (experiment); maximum is 8192 shots.
max_credits = 3 # Maximum number of credits to spend on executions.
job_hpc = execute(qc, backend=backend, shots=shots, max_credits=max_credits)
result_hpc = job_hpc.result()
counts_hpc = result_hpc.get_counts(qc)
plot_histogram(counts_hpc)
```
### Retrieving a previously run job
If your experiment takes longer to run than you have time to wait around, or if you simply want to retrieve old jobs, the IBMQ backends allow you to do that.
First you would need to note your job's ID:
```
jobID = job_exp.job_id()
print('JOB ID: {}'.format(jobID))
```
Given a job ID, that job object can be later reconstructed from the backend using retrieve_job:
```
job_get=backend.retrieve_job(jobID)
```
and then the results can be obtained from the new job object.
```
job_get.result().get_counts(qc)
```
<p><font size="6"><b>Visualization - Matplotlib</b></font></p>
> *DS Data manipulation, analysis and visualization in Python*
> *May/June, 2021*
>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:[email protected]>, <mailto:[email protected]>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
# Matplotlib
[Matplotlib](http://matplotlib.org/) is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (graphical user interface) toolkits. It is a great package with lots of options.
However, matplotlib is...
> The 800-pound gorilla — and like most 800-pound gorillas, this one should probably be avoided unless you genuinely need its power, e.g., to make a **custom plot** or produce a **publication-ready** graphic.
> (As we’ll see, when it comes to statistical visualization, the preferred tack might be: “do as much as you easily can in your convenience layer of choice [nvdr e.g. directly from Pandas, or with seaborn], and then use matplotlib for the rest.”)
(quote used from [this](https://dansaber.wordpress.com/2016/10/02/a-dramatic-tour-through-pythons-data-visualization-landscape-including-ggplot-and-altair/) blogpost)
And that's what we mostly did: just use the `.plot` function of Pandas. So, why do we learn matplotlib? Well, for the *...then use matplotlib for the rest* part; at some point, somehow!
Matplotlib comes with a convenience sub-package called ``pyplot`` which, for consistency with the wider matplotlib community, should always be imported as ``plt``:
```
import numpy as np
import matplotlib.pyplot as plt
```
## - dry stuff - The matplotlib `Figure`, `axes` and `axis`
At the heart of **every** plot is the figure object. The "Figure" object is the top level concept which can be drawn to one of the many output formats, or simply just to screen. Any object which can be drawn in this way is known as an "Artist" in matplotlib.
Lets create our first artist using pyplot, and then show it:
```
fig = plt.figure()
plt.show()
```
On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
By far the most useful artist in matplotlib is the **Axes** artist. The Axes artist represents the "data space" of a typical plot, a rectangular axes (the most common, but not always the case, e.g. polar plots) will have 2 (confusingly named) **Axis** artists with tick labels and tick marks.

There is no limit on the number of Axes artists which can exist on a Figure artist. Let's go ahead and create a figure with a single Axes artist, and show it using pyplot:
```
ax = plt.axes()
type(ax)
type(ax.xaxis), type(ax.yaxis)
```
Matplotlib's ``pyplot`` module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with ``plt.figure`` because it was implicit that we needed a figure when we created the Axes artist.
Under the hood matplotlib still had to create a Figure artist; it's just that we didn't need to capture it in a variable.
## - essential stuff - `pyplot` versus Object based
Some example data:
```
x = np.linspace(0, 5, 10)
y = x ** 2
```
Observe the following difference:
**1. pyplot style: plt...** (you will see this a lot for code online!)
```
plt.plot(x, y, '-')
```
**2. creating objects**
```
fig, ax = plt.subplots()
ax.plot(x, y, '-')
```
Although a little bit more code is involved, the advantage is that we now have **full control** of where the plot axes are placed, and we can easily add more than one axis to the figure:
```
fig, ax1 = plt.subplots()
ax1.plot(x, y, '-')
ax1.set_ylabel('y')
ax2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
ax2.set_xlabel('x')
ax2.plot(x, y*2, 'r-')
```
<div class="alert alert-info" style="font-size:18px">
<b>REMEMBER</b>:
<ul>
<li>Use the <b>object oriented</b> power of Matplotlib!</li>
<li>Get yourself used to writing <code>fig, ax = plt.subplots()</code></li>
</ul>
</div>
```
fig, ax = plt.subplots()
ax.plot(x, y, '-')
# ...
```
## A small cheat-sheet reference for some common elements
```
x = np.linspace(-1, 0, 100)
fig, ax = plt.subplots(figsize=(10, 7))
# Adjust the created axes so that its topmost extent is 0.9 of the figure.
fig.subplots_adjust(top=0.9)
ax.plot(x, x**2, color='0.4', label='power 2')
ax.plot(x, x**3, color='0.8', linestyle='--', label='power 3')
ax.vlines(x=-0.75, ymin=0., ymax=0.8, color='0.4', linestyle='-.')
ax.axhline(y=0.1, color='0.4', linestyle='-.')
ax.fill_between(x=[-1, 1.1], y1=[0.65], y2=[0.75], color='0.85')
fig.suptitle('Figure title', fontsize=18,
fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.set_xlim(-1.0, 1.1)
ax.set_ylim(-0.1, 1.)
ax.text(0.5, 0.2, 'Text centered at (0.5, 0.2)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin Figure coordinates.',
horizontalalignment='center', fontsize=14,
transform=ax.transAxes, color='grey')
ax.legend(loc='upper right', frameon=True, ncol=2, fontsize=14)
```
Adjusting specific parts of a plot is a matter of accessing the correct element of the plot:

For more information on legend positioning, check [this post](http://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot) on stackoverflow!
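As a small illustration (a sketch reusing the `x` array from the cheat-sheet cell above), individual elements such as the spines, ticks and legend are all reachable from the `Axes` object:
```
fig, ax = plt.subplots()
ax.plot(x, x**2, label='power 2')
ax.spines['top'].set_visible(False)    # hide the top spine
ax.spines['right'].set_visible(False)  # hide the right spine
ax.tick_params(axis='both', direction='in', length=6)
ax.legend(loc='upper center')
```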
## I do not like the style...
**...understandable**
Matplotlib had a bad reputation in terms of its default styling, as figures created with earlier versions of Matplotlib looked very Matlab-like and were mostly not really catchy.
Since Matplotlib 2.0, this has changed: https://matplotlib.org/users/dflt_style_changes.html!
However...
> *Des goûts et des couleurs, on ne discute pas...*
(check [this link](https://fr.wiktionary.org/wiki/des_go%C3%BBts_et_des_couleurs,_on_ne_discute_pas) if you're not french-speaking)
To accommodate different tastes, Matplotlib provides a number of styles that can be used to quickly change a number of settings:
```
plt.style.available
x = np.linspace(0, 10)
with plt.style.context('seaborn'): # 'seaborn', ggplot', 'bmh', 'grayscale', 'seaborn-whitegrid', 'seaborn-muted'
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
```
We should not start discussing colors and styles, just pick **your favorite style**!
```
plt.style.use('seaborn-whitegrid')
```
or go all the way and define your own custom style, see the [official documentation](https://matplotlib.org/3.1.1/tutorials/introductory/customizing.html) or [this tutorial](https://colcarroll.github.io/yourplotlib/#/).
<div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>If you just want <b>quickly a good-looking plot</b>, use one of the available styles (<code>plt.style.use('...')</code>)</li>
<li>Otherwise, the object-oriented way of working makes it possible to change everything!</li>
</ul>
</div>
## Interaction with Pandas
What we have been doing while plotting with Pandas:
```
import pandas as pd
flowdata = pd.read_csv('data/vmm_flowdata.csv',
index_col='Time',
parse_dates=True)
out = flowdata.plot() # print type()
```
Under the hood, it creates a Matplotlib Figure with an Axes object.
### Pandas versus matplotlib
#### Comparison 1: single plot
```
flowdata.plot(figsize=(16, 6)) # SHIFT + TAB this!
```
Making this with matplotlib...
```
fig, ax = plt.subplots(figsize=(16, 6))
ax.plot(flowdata)
ax.legend(["L06_347", "LS06_347", "LS06_348"])
```
is still ok!
#### Comparison 2: with subplots
```
axs = flowdata.plot(subplots=True, sharex=True,
figsize=(16, 8), colormap='viridis', # Dark2
fontsize=15, rot=0)
```
Mimicking this in matplotlib (just as a reference, it is basically what Pandas is doing under the hood):
```
from matplotlib import cm
import matplotlib.dates as mdates
colors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(flowdata.columns))] # list comprehension to set up the colors
fig, axs = plt.subplots(3, 1, figsize=(16, 8))
for ax, col, station in zip(axs, colors, flowdata.columns):
ax.plot(flowdata.index, flowdata[station], label=station, color=col)
ax.legend()
if not ax.get_subplotspec().is_last_row():
ax.xaxis.set_ticklabels([])
ax.xaxis.set_major_locator(mdates.YearLocator())
else:
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax.set_xlabel('Time')
ax.tick_params(labelsize=15)
```
Is already a bit harder ;-)
### Best of both worlds...
```
fig, ax = plt.subplots() #prepare a Matplotlib figure
flowdata.plot(ax=ax) # use Pandas for the plotting
fig, ax = plt.subplots(figsize=(15, 5)) #prepare a matplotlib figure
flowdata.plot(ax=ax) # use pandas for the plotting
# Provide further adaptations with matplotlib:
ax.set_xlabel("")
ax.grid(which="major", linewidth='0.5', color='0.8')
fig.suptitle('Flow station time series', fontsize=15)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16, 6)) #provide with matplotlib 2 axis
flowdata[["L06_347", "LS06_347"]].plot(ax=ax1) # plot the two timeseries of the same location on the first plot
flowdata["LS06_348"].plot(ax=ax2, color='0.2') # plot the other station on the second plot
# further adapt with matplotlib
ax1.set_ylabel("L06_347")
ax2.set_ylabel("LS06_348")
ax2.legend()
```
<div class="alert alert-info">
<b>Remember</b>:
<ul>
<li>You can do anything with matplotlib, but at a cost... <a href="http://stackoverflow.com/questions/tagged/matplotlib">stackoverflow</a></li>
<li>The preformatting of Pandas is mostly flexible enough for quick analysis and draft reporting. It is not meant for publication-ready figures or heavy customization</li>
</ul>
<br>
If you take the time to make your perfect/spot-on/greatest-ever matplotlib-figure: Make it a <b>reusable function</b>!
</div>
An example of such a reusable function to plot data:
```
%%file plotter.py
#this writes a file in your directory, check it(!)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib import cm
from matplotlib.ticker import MaxNLocator
def vmm_station_plotter(flowdata, label="flow (m$^3$s$^{-1}$)"):
colors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(flowdata.columns))] # list comprehension to set up the color sequence
fig, axs = plt.subplots(3, 1, figsize=(16, 8))
for ax, col, station in zip(axs, colors, flowdata.columns):
ax.plot(flowdata.index, flowdata[station], label=station, color=col) # this plots the data itself
ax.legend(fontsize=15)
ax.set_ylabel(label, size=15)
ax.yaxis.set_major_locator(MaxNLocator(4)) # smaller set of y-ticks for clarity
if not ax.get_subplotspec().is_last_row(): # hide the xticklabels from the none-lower row x-axis
ax.xaxis.set_ticklabels([])
ax.xaxis.set_major_locator(mdates.YearLocator())
else: # yearly xticklabels from the lower x-axis in the subplots
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax.tick_params(axis='both', labelsize=15, pad=8) # enlarge the ticklabels and increase distance to axis (otherwise overlap)
return fig, axs
from plotter import vmm_station_plotter
# fig, axs = vmm_station_plotter(flowdata)
fig, axs = vmm_station_plotter(flowdata,
label="NO$_3$ (mg/l)")
fig.suptitle('Ammonium concentrations in the Maarkebeek', fontsize='17')
fig.savefig('ammonium_concentration.pdf')
```
<div class="alert alert-warning">
**NOTE**
- Let your hard work pay off, write your own custom functions!
</div>
<div class="alert alert-info" style="font-size:18px">
**Remember**
`fig.savefig()` to save your Figure object!
</div>
# Need more matplotlib inspiration?
For more in-depth material:
* http://www.labri.fr/perso/nrougier/teaching/matplotlib/
* notebooks in matplotlib section: http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb#4.-Visualization-with-Matplotlib
* main reference: [matplotlib homepage](http://matplotlib.org/)
<div class="alert alert-info" style="font-size:18px">
**Remember**
- <a href="https://matplotlib.org/stable/gallery/index.html">matplotlib gallery</a> is an important resource to start from
- Matplotlib has some great [cheat sheets](https://github.com/matplotlib/cheatsheets) available
</div>
#### The purpose of this notebook is to compare D-REPR with other methods such as KR2RML and R2RML in terms of performance
```
import re, numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
%matplotlib inline
plt.rcParams["figure.figsize"] = (10.0, 8.0) # set default size of plots
plt.rcParams["image.interpolation"] = "nearest"
plt.rcParams["image.cmap"] = "gray"
%load_ext autoreload
%autoreload 2
%reload_ext autoreload
def read_exec_time(log_file: str, tag_str: str='>>> [DREPR]', print_exec_time: bool=True):
"""Read the executing time of the program"""
with open(log_file, "r") as f:
for line in f:
if line.startswith(">>> [DREPR]"):
m = re.search("((?:\d+\.)?\d+) ?ms", line)
exec_time = m.group(1)
if print_exec_time:
print(line.strip(), "-- extract exec_time:", exec_time)
return float(exec_time)
raise Exception("Doesn't found any output message")
```
#### KR2RML
To set up KR2RML, we first need to download Web-Karma-2.2 from the web and modify the file `karma-offline/src/main/java/edu/isi/karma/rdf/OfflineRdfGenerator` by adding this code at line 184: `System.out.println(">>> [DREPR] Finish converting RDF after " + String.valueOf(System.currentTimeMillis() - l) + "ms");` so that the runtime is printed to stdout.
Then run `mvn install -Dmaven.test.skip=true` at the root directory to install dependencies before actually converting data to RDF
```
%cd /workspace/tools-evaluation/Web-Karma-2.2/karma-offline
DATA_FILE = "/workspace/drepr/drepr/rdrepr/data/insurance.csv"
MODEL_FILE = "/workspace/drepr/drepr/rdrepr/data/insurance.level-0.model.ttl"
OUTPUT_FILE = "/tmp/kr2rml_output.ttl"
karma_exec_times = []
for i in tqdm(range(3)):
!mvn exec:java -Dexec.mainClass="edu.isi.karma.rdf.OfflineRdfGenerator" -Dexec.args=" \
--sourcetype CSV \
--filepath \"{DATA_FILE}\" \
--modelfilepath \"{MODEL_FILE}\" \
--sourcename test \
--outputfile {OUTPUT_FILE}" -Dexec.classpathScope=compile > /tmp/karma_speed_comparison.log
karma_exec_times.append(read_exec_time("/tmp/karma_speed_comparison.log"))
!rm /tmp/karma_speed_comparison.log
print(f"run 3 times, average: {np.mean(karma_exec_times)}ms")
```
<hr />
Report information about the output and input
```
with open(DATA_FILE, "r") as f:
n_records = sum(1 for _ in f) - 1
print("#records:", n_records, f"({round(n_records * 1000 / np.mean(karma_exec_times), 2)} records/s)")
with open(OUTPUT_FILE, "r") as f:
n_triples = sum(1 for line in f if line.strip().endswith("."))
print("#triples:", n_triples, f"({round(n_triples * 1000 / np.mean(karma_exec_times), 2)} triples/s)")
```
#### MorphRDB
Assuming that you have followed their [installation guide](https://github.com/oeg-upm/morph-rdb/wiki/Installation) and [usage instructions](https://github.com/oeg-upm/morph-rdb/wiki/Usage#csv-files), we are going to create R2RML mappings and invoke their program to map the data into RDF.
```
%cd /workspace/tools-evaluation/morph-rdb/morph-examples
!java -cp .:morph-rdb-dist-3.9.17.jar:dependency/\* es.upm.fi.dia.oeg.morph.r2rml.rdb.engine.MorphCSVRunner /workspace/drepr/drepr/rdrepr/data insurance.level-0.morph.properties
```
#### DREPR
```
%cd /workspace/drepr/drepr/rdrepr
DREPR_EXEC_LOG = "/tmp/drepr_exec_log.log"
!cargo run --release > {DREPR_EXEC_LOG}
drepr_exec_times = read_exec_time(DREPR_EXEC_LOG)
!rm {DREPR_EXEC_LOG}
with open("/tmp/drepr_output.ttl", "r") as f:
n_triples = sum(1 for line in f if line.strip().endswith("."))
print("#triples:", n_triples, f"({round(n_triples * 1000 / np.mean(drepr_exec_times), 2)} triples/s)")
```
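As a quick head-to-head summary of the timings collected above (a sketch that simply reuses `karma_exec_times` and `drepr_exec_times`):
```
# Compare the average conversion times and report the relative speed-up
karma_avg = np.mean(karma_exec_times)
drepr_avg = np.mean(drepr_exec_times)
print(f"KR2RML (Karma) average: {karma_avg:.1f} ms")
print(f"D-REPR average:         {drepr_avg:.1f} ms")
print(f"speed-up:               {karma_avg / drepr_avg:.1f}x")
```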
```
%matplotlib inline
```
# Tuning a scikit-learn estimator with `skopt`
Gilles Louppe, July 2016
Katie Malone, August 2016
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
If you are looking for a :obj:`sklearn.model_selection.GridSearchCV` replacement checkout
`sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py` instead.
## Problem statement
Tuning the hyper-parameters of a machine learning model is often carried out
using an exhaustive exploration of (a subset of) the space of all hyper-parameter
configurations (e.g., using :obj:`sklearn.model_selection.GridSearchCV`), which
often results in a very time-consuming operation.
In this notebook, we illustrate how to couple :class:`gp_minimize` with sklearn's
estimators to tune hyper-parameters using sequential model-based optimisation,
hopefully resulting in equivalent or better solutions, but within less
evaluations.
Note: scikit-optimize provides a dedicated interface for estimator tuning via the
:class:`BayesSearchCV` class, which has an interface similar to that of
:obj:`sklearn.model_selection.GridSearchCV`. This class uses functions of skopt to perform hyperparameter
search efficiently. For example usage of this class, see the
`sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py`
example notebook.
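As a quick, hedged illustration (not used in the rest of this example), a `BayesSearchCV` search over a hypothetical space could look like this:
```
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.ensemble import GradientBoostingRegressor

opt = BayesSearchCV(
    GradientBoostingRegressor(n_estimators=50, random_state=0),
    {"max_depth": Integer(1, 5),
     "learning_rate": Real(1e-5, 1.0, prior="log-uniform")},
    n_iter=16, cv=3, random_state=0)
# opt.fit(X, y)  # same fit/predict interface as GridSearchCV
```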
```
print(__doc__)
import numpy as np
```
## Objective
To tune the hyper-parameters of our model we need to define a model,
decide which parameters to optimize, and define the objective function
we want to minimize.
```
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
boston = load_boston()
X, y = boston.data, boston.target
n_features = X.shape[1]
# gradient boosted trees tend to do well on problems like this
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
```
Next, we need to define the bounds of the dimensions of the search space
we want to explore and pick the objective. In this case the cross-validation
mean absolute error of a gradient boosting regressor over the Boston
dataset, as a function of its hyper-parameters.
```
from skopt.space import Real, Integer
from skopt.utils import use_named_args
# The list of hyper-parameters we want to optimize. For each one we define the
# bounds, the corresponding scikit-learn parameter name, as well as how to
# sample values from that dimension (`'log-uniform'` for the learning rate)
space = [Integer(1, 5, name='max_depth'),
Real(10**-5, 10**0, "log-uniform", name='learning_rate'),
Integer(1, n_features, name='max_features'),
Integer(2, 100, name='min_samples_split'),
Integer(1, 100, name='min_samples_leaf')]
# this decorator allows your objective function to receive the parameters as
# keyword arguments. This is particularly convenient when you want to set
# scikit-learn estimator parameters
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
scoring="neg_mean_absolute_error"))
```
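As a quick check, the decorated objective can be called with a plain list of values given in the order of `space`; `use_named_args` maps them to keyword arguments (the values below are arbitrary):
```
# arbitrary point in the search space: max_depth, learning_rate,
# max_features, min_samples_split, min_samples_leaf
print(objective([3, 0.1, 5, 10, 2]))
```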
## Optimize all the things!
With these two pieces, we are now ready for sequential model-based
optimisation. Here we use gaussian process-based optimisation.
```
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=50, random_state=0)
"Best score=%.4f" % res_gp.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
res_gp.x[2], res_gp.x[3],
res_gp.x[4]))
```
## Convergence plot
```
from skopt.plots import plot_convergence
plot_convergence(res_gp)
```
# Quadtrees iterating on pairs of neighbouring items
A quadtree is a tree data structure in which each node has exactly four children. It is a particularly efficient way to store elements when you need to quickly find them according to their x-y coordinates.
A common problem with elements in quadtrees is to detect pairs of elements which are closer than a definite threshold.
The proposed implementation efficiently addresses this problem.
```
from smartquadtree import Quadtree
```
## Creation & insertion of elements
As you instantiate your quadtree, you must specify the center of your space then the height and width.
```
q = Quadtree(0, 0, 10, 10)
```
The output of a quadtree on the console is pretty explicit. (You can refer to the next section for the meaning of "No mask set".)
```
q
```
You can easily insert elements from which you can naturally infer x-y coordinates (e.g. tuples or lists)
```
q.insert((1, 2))
q.insert((-3, 4))
q
```
No error is raised if the element you are trying to insert is outside the scope of the quadtree. But it won't be stored anyway!
```
q.insert((-20, 0))
q
```
If you want to insert other Python objects, be sure to provide `get_x()` and `get_y()` methods to your class!
```
class Point(object):
def __init__(self, x, y, color):
self.x = x
self.y = y
self.color = color
def __repr__(self):
return "(%.2f, %.2f) %s" % (self.x, self.y, self.color)
def get_x(self):
return self.x
def get_y(self):
return self.y
```
You cannot insert elements of a different type from the first element inserted.
```
q.insert(Point(2, -7, "red"))
```
But feel free to create a new one and play with it:
```
point_quadtree = Quadtree(5, 5, 5, 5)
point_quadtree.insert(Point(2, 7, "red"))
point_quadtree
```
## Simple iteration
```
from random import random
q = Quadtree(0, 0, 10, 10, 16)
for a in range(50):
q.insert([random()*20-10, random()*20-10])
```
The `print` function does not display all elements and uses the `__repr__()` method of each element.
```
print(q)
```
We can write our own iterator and print each element we encounter the way we like.
```
from __future__ import print_function
for p in q.elements():
print ("[%.2f, %.2f]" % (p[0], p[1]), end=" ")
```
It is easy to filter the iteration process and apply the function only on elements inside a given polygon. Use the `set_mask()` method and pass a list of x-y coordinates. The polygon will be automatically closed.
```
q.set_mask([(-3, -7), (-3, 7), (3, 7), (3, -7)])
print(q)
```
The same approach can be used to count the number of elements inside the quadtree.
```
print (sum (1 for x in q.elements()))
print (sum (1 for x in q.elements(ignore_mask=True)))
```
As a mask is set on the quadtree, we only counted the elements inside the mask. You can use the `size()` method to count elements and ignore the mask by default. Disabling the mask with `set_mask(None)` is also a possibility.
```
print ("%d elements (size method)" % q.size())
print ("%d elements (don't ignore the mask)" % q.size(False))
q.set_mask(None)
print ("%d elements (disable the mask)" % q.size())
```
## Playing with plots
```
%matplotlib inline
from matplotlib import pyplot as plt
q = Quadtree(5, 5, 5, 5, 10)
for a in range(200):
q.insert([random()*10, random()*10])
fig = plt.figure()
plt.axis([0, 10, 0, 10])
q.set_mask(None)
for p in q.elements():
plt.plot([p[0]], [p[1]], 'o', color='lightgrey')
q.set_mask([(3, 3), (3, 7), (7, 7), (7, 3)])
for p in q.elements():
plt.plot([p[0]], [p[1]], 'ro')
_ = plt.plot([3, 3, 7, 7, 3], [3, 7, 7, 3, 3], 'r')
```
## Iteration on pairs of neighbouring elements
Iterating on pairs of neighbouring elements is possible through the `neighbour_elements()` function. It works as a generator and yields pair of elements, the first one being inside the mask (if specified), the second one being in the same cell or in any neighbouring cell, also in the mask.
Note that if `(a, b)` is yielded by `neighbour_elements()`, `(b, a)` will be omitted from future yields.
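For instance, a short sketch that counts the pairs of elements in the current quadtree `q` that lie closer than 1 unit (each unordered pair is counted once, and any mask set above is respected):
```
close_pairs = sum(
    1 for p1, p2 in q.neighbour_elements()
    if (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2 < 1
)
print(close_pairs)
```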
```
q = Quadtree(5, 5, 5, 5, 10)
q.set_limitation(2) # do not create a new subdivision if one side of the cell is below 2
for a in range(200):
q.insert([random()*10, random()*10])
fig = plt.figure()
plt.axis([0, 10, 0, 10])
for p in q.elements():
plt.plot([p[0]], [p[1]], 'o', color='lightgrey')
q.set_mask([(1, 1), (4, 1), (5, 4), (2, 5), (1, 1)])
for p in q.elements():
plt.plot([p[0]], [p[1]], 'o', color='green')
for p1, p2 in q.neighbour_elements():
if ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2 < 1):
plt.plot([p1[0]], [p1[1]], 'o', color='red')
plt.plot([p2[0]], [p2[1]], 'o', color='red')
plt.plot([p1[0], p2[0]], [p1[1], p2[1]], 'red')
_ = plt.plot([1, 4, 5, 2, 1], [1, 1, 4, 5, 1], 'r')
```
```
from edahelper import *
import sklearn.naive_bayes as NB
import sklearn.linear_model
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score
# Resources:
#https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
wsb = pd.read_csv('../Data/wsb_cleaned.csv')
#set up appropriate subset, removing comment outliers
#also chose to look at only self posts
dfog=wsb.loc[(wsb.is_self==True) & (wsb.ups>=10) & (wsb.num_comments<=10000) & ~(wsb["title"].str.contains("Thread|thread|Sunday Live Chat|consolidation zone|Containment Zone|Daily Discussion|Daily discussion|Saturday Chat|What Are Your Moves Tomorrow|What Are Your Moves Today|MEGATHREAD",na=False))]
```
## Preprocessing
Removing characters that are not letters or spaces:
```
def RegexCols(df,cols):
newdf=df
regex = re.compile('[^a-zA-Z ]')
for col in cols:
newdf=newdf.assign(**{col: df.loc[:,col].apply(lambda x : regex.sub('', str(x) ))})
return newdf
df=RegexCols(dfog,['title', 'author', 'selftext'])
#df=pd.DataFrame()
#regex = re.compile('[^a-zA-Z ]')
#for col in ['title', 'author', 'selftext']:
# df.loc[:,col] = dfog.loc[:,col].apply(lambda x : regex.sub('', str(x) ))
```
Filtering the data frame and count-vectorizing the text.
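As a minimal illustration of the bag-of-words step used in the pipelines below, here is `CountVectorizer` applied to two hypothetical titles (this assumes scikit-learn ≥ 1.0 for `get_feature_names_out`):
```
example_titles = ["GME to the moon", "loss porn GME update"]  # hypothetical examples
vec = CountVectorizer()
counts = vec.fit_transform(example_titles)
print(vec.get_feature_names_out())  # learned vocabulary
print(counts.toarray())             # token counts per title
```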
# Can we predict the number of upvotes using the self text?
```
#create the train test split
#try to predict ups using the self text
X_train, X_test, y_train, y_test = train_test_split(df['selftext'], df['ups'], test_size=0.2, random_state=46)
#make a pipeline to do bag of words and linear regression
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LinearRegression(copy_X=True)),
])
text_clf.fit(X_train,y_train)
#text_clf.predict(X_train)
print(r2_score(y_train,text_clf.predict(X_train)))
print(r2_score(y_test,text_clf.predict(X_test)))
#wow, that is terrible. we do worse than if we just guessed the mean all the time.
```
# Can we predict the number of upvotes using the words in the title?
## NLP on words in the title
```
#this time we don't need only self posts
df2og=wsb.loc[(wsb.ups>=10) & (wsb.num_comments<=10000) & ~(wsb["title"].str.contains("Thread|thread|Sunday Live Chat|consolidation zone|Containment Zone|Daily Discussion|Daily discussion|Saturday Chat|What Are Your Moves Tomorrow|What Are Your Moves Today|MEGATHREAD",na=False))]
df2=RegexCols(df2og,['title', 'author', 'selftext'])
X_train, X_test, y_train, y_test = train_test_split(df2['title'], df2['ups'], test_size=0.2, random_state=46)
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LinearRegression(copy_X=True)),
])
text_clf.fit(X_train,y_train)
print(r2_score(y_train,text_clf.predict(X_train)))
print(r2_score(y_test,text_clf.predict(X_test)))
results = pd.DataFrame()
results["predicted"] = text_clf.predict(X_test)
results["true"] = list(y_test)
sns.scatterplot(data = results, x = "predicted", y = "true")
```
Doesn't look particularly useful... neither does using lasso...
```
X_train, X_test, y_train, y_test = train_test_split(df2['title'], df2['ups'], test_size=0.2, random_state=46)
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', sklearn.linear_model.Lasso()),
])
text_clf.fit(X_train,y_train)
print(r2_score(y_train,text_clf.predict(X_train)))
print(r2_score(y_test,text_clf.predict(X_test)))
results = pd.DataFrame()
results["predicted"] = text_clf.predict(X_test)
results["true"] = list(y_test)
sns.scatterplot(data = results, x = "predicted", y = "true")
```
# Can we predict if a post will be ignored?
```
def PopClassify(ups):
if ups <100:
return 0
elif ups<100000:
return 1
else:
return 2
#df2['popularity']=PopClassify(df2['ups'])
df2['popularity'] = df2['ups'].map(lambda score: PopClassify(score))
#df['ignored'] = df['ups'] <= 100 # What is a good cutoff for being ignored?
#df = wsb[ wsb['ups'] >= 20]
df2.head()
X_train, X_test, y_train, y_test = train_test_split(df2['title'], df2['popularity'], test_size=0.2, random_state=46)
from sklearn.naive_bayes import MultinomialNB
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train,y_train)
p=text_clf.predict(X_train)
print(np.where(p==1))
print(np.where(p==2))
np.mean(p==y_train)
p2=text_clf.predict(X_test)
np.mean(p2==y_test)
#what if we just predict 0 all the time?
print(np.mean(y_train==0))
print(np.mean(y_test==0))
def PopClassifyn(ups,n):
if ups <n:
return 0
else:
return 1
#the above shows that the 0 category is too big. maybe cut it down to 50? Also throw out the top category
df2['popularity'] = df2['ups'].map(lambda score: PopClassifyn(score,50))
X_train, X_test, y_train, y_test = train_test_split(df2['title'], df2['popularity'], test_size=0.2, random_state=46)
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train,y_train)
print("accuracy on training data:")
p=text_clf.predict(X_train)
print(np.mean(p==y_train))
print(np.mean(y_train==0))
print("accuracy on testing data:")
print(np.mean(text_clf.predict(X_test)==y_test))
print(np.mean(y_test==0))
#slight improvement on the testing data, but lost on the training data...
#what about something more extreme? Let's keep all the posts with a score of 1. Let's try to predict ups>1
df3og=wsb.loc[(wsb.num_comments<=10000) & ~(wsb["title"].str.contains("Thread|thread|Sunday Live Chat|consolidation zone|Containment Zone|Daily Discussion|Daily discussion|Saturday Chat|What Are Your Moves Tomorrow|What Are Your Moves Today|MEGATHREAD",na=False))]
df3=RegexCols(df3og,['title', 'author', 'selftext'])
df3['popularity'] = df3['ups'].map(lambda score: PopClassifyn(score,2))
X_train, X_test, y_train, y_test = train_test_split(df3['title'], df3['popularity'], test_size=0.2, random_state=46,stratify=df3['popularity'])
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train,y_train)
print("accuracy on training data:")
p=text_clf.predict(X_train)
print(np.mean(p==y_train))
print(np.mean(y_train==0))
print("accuracy on testing data:")
print(np.mean(text_clf.predict(X_test)==y_test))
print(np.mean(y_test==0))
#nothing!! what if we try using the selftext?
#back to df
df4og=wsb.loc[(wsb.is_self==True) & (wsb.num_comments<=10000) & ~(wsb["title"].str.contains("Thread|thread|Sunday Live Chat|consolidation zone|Containment Zone|Daily Discussion|Daily discussion|Saturday Chat|What Are Your Moves Tomorrow|What Are Your Moves Today|MEGATHREAD",na=False))]
df4=RegexCols(df4og,['title', 'author', 'selftext'])
df4['popularity'] = df4['ups'].map(lambda score: PopClassifyn(score,2))
X_train, X_test, y_train, y_test = train_test_split(df4['selftext'], df4['popularity'], test_size=0.2, random_state=46,stratify=df4['popularity'])
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train,y_train)
print("accuracy on training data:")
p=text_clf.predict(X_train)
print(np.mean(p==y_train))
print(np.mean(y_train==0))
print("accuracy on testing data:")
print(np.mean(text_clf.predict(X_test)==y_test))
print(np.mean(y_test==0))
#okay, this is not too bad!
#other ways to measure how well this is doing?
#let's try the ROC AUC score
from sklearn.metrics import roc_curve
#text_clf.predict_proba(X_train)[:,1]
probs=text_clf.predict_proba(X_train)[:,1]
roc_curve(y_train,probs)
fpr,tpr,cutoffs = roc_curve(y_train,probs)
plt.figure(figsize=(12,8))
plt.plot(fpr,tpr)
plt.xlabel("False Positive Rate",fontsize=16)
plt.ylabel("True Positive Rate",fontsize=16)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train,probs)
#now let's try logistic regression rather than naive Bayes?
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
#('standardscaler', StandardScaler()),
('clf', LogisticRegression(max_iter=1000)),
])
text_clf.fit(X_train,y_train)
print("accuracy on training data:")
p=text_clf.predict(X_train)
#print(np.mean(p==y_train))
print(accuracy_score(y_train,p))
print(np.mean(y_train==0))
print("accuracy on testing data:")
print(np.mean(text_clf.predict(X_test)==y_test))
print(np.mean(y_test==0))
#added later, for ROC curve and AUC score
probs=text_clf.predict_proba(X_train)[:,1]
fpr,tpr,cutoffs = roc_curve(y_train,probs)
plt.figure(figsize=(12,8))
plt.plot(fpr,tpr)
plt.xlabel("False Positive Rate",fontsize=16)
plt.ylabel("True Positive Rate",fontsize=16)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.show()
print(roc_auc_score(y_train,probs))
from sklearn.model_selection import cross_validate as cv
from sklearn.metrics import SCORERS as sc
from sklearn.metrics import make_scorer as ms
from sklearn.metrics import balanced_accuracy_score as bas
scorer_dict={
'accuracy_scorer' : ms(accuracy_score),
'auc_scorer' : ms(roc_auc_score),
'bas_scorer' : ms(bas)
}
#scores = cross_validate(lasso, X, y, cv=3,
#... scoring=('r2', 'neg_mean_squared_error'),
#... return_train_score=True)
#X_train, X_test, y_train, y_test = train_test_split(df4['selftext'], df4['popularity'], test_size=0.2, random_state=46,stratify=df4['popularity'])
scores=cv(text_clf,df4['selftext'],df4['popularity'],cv=5,scoring=scorer_dict, return_train_score=True)
print(scores)
print(np.mean(scores['test_accuracy_scorer']))
print(np.mean(scores['test_bas_scorer']))
print(np.mean(scores['test_auc_scorer']))
#this is very slightly better than the other one. Might be even better if we can scale the data
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('standardscaler', StandardScaler(with_mean=False)),
('clf', LogisticRegression(max_iter=10000)),
])
text_clf.fit(X_train,y_train)
print("accuracy on training data:")
p=text_clf.predict(X_train)
print(np.mean(p==y_train))
print(np.mean(y_train==0))
print("accuracy on testing data:")
print(np.mean(text_clf.predict(X_test)==y_test))
print(np.mean(y_test==0))
#scaling somehow made it worse on the testing data??
```
# Can we cluster similar posts?
```
df3.sort_values(by="ups")
```
## Sleep analysis, using Passive Infrared (PIR) data, in 10sec bins from a single central PIR, at 200-220mm above the cage floor. Previously EEG-telemetered animals allow direct comparison of sleep scored by direct and non-invasive methods.
### 1st setup analysis environment:
```
import numpy as np # calculations
import pandas as pd # dataframes and IO
import matplotlib.pyplot as plt # plotting
# show graphs/figures in notebooks
%matplotlib inline
import seaborn as sns # statistical plots and analysis
sns.set(style="ticks") # styling
sns.set_context("poster")
```
### Then import .CSV text file from activity monitoring (with ISO-8601 encoding for the timepoints)
```
PIR = pd.read_csv('../PIRdata/1sensorPIRvsEEGdata.csv',parse_dates=True,index_col=0)
PIR.head()
PIR.pop('PIR4') # remove channels with no Telemetered mice / no sensor
PIR.pop('PIR6')
PIR.columns=('Act_A', 'Act_B','Act_C', 'Act_D', 'Light') # and rename the remaining columns with activity data
#PIR.plot(subplots=True, figsize=(16,12))
```
### next identify time of lights ON (to match start of scored EEG data)
```
PIR['Light']['2014-03-18 08:59:30': '2014-03-18 09:00:40'].plot(figsize =(16,4))
```
### Define period to match EEG data
```
PIR_24 = PIR.truncate(before='2014-03-18 09:00:00', after='2014-03-19 09:00:00')
PIR_24shift = PIR_24.tshift(-9, freq='H') # move data on timescale so 0 represents 'lights on'
PIR_24shift.plot(subplots=True,figsize=(20,10))
```
### Define sleepscan function and run with selected data
```
# run through trace looking for bouts of sleep (defined as 4 or more sequential '0' values) variable 'a' is dataframe of PIR data
def sleepscan(a,bins):
ss = a.rolling(bins).sum()
y = ss==0
return y.astype(int) # if numerical output is required
# for each column of activity data define PIR-derived sleep as a new column
ss =PIR_24shift.assign(PIR_A =sleepscan(PIR_24shift['Act_A'],4),
PIR_B =sleepscan(PIR_24shift['Act_B'],4),
PIR_C =sleepscan(PIR_24shift['Act_C'],4),
PIR_D =sleepscan(PIR_24shift['Act_D'],4)).resample('10S').mean()
ss.head() # show top of new dataframe
```
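As a quick sanity check, here is `sleepscan` applied to a toy activity trace (hypothetical values); a 1 flags a bin that ends a run of at least four consecutive zero-activity bins:
```
toy = pd.Series([0, 0, 0, 0, 2, 0, 0, 0, 0, 0])
print(sleepscan(toy, 4).tolist())
# expected: [0, 0, 0, 1, 0, 0, 0, 0, 1, 1]
```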
### Importing EEG data scored by Sibah Hasan (following correction for channels A and B on the EEG recordings)
#### Scored in 10-second bins starting at 9am (lights on). For clarity we will only import the columns for total sleep, although REM and NREM sleep were also scored.
```
eeg10S = pd.read_csv('../PIRdata/EEG_4mice10sec.csv',index_col=False,
usecols=['MouseA Total sleep ','MouseB Total sleep ','MouseC Total sleep ','MouseD Total sleep '])
eeg10S.columns=('EEG_A', 'EEG_B', 'EEG_C','EEG_D') # rename columns
eeg10S.head()
ss.reset_index(inplace=True) # use sequential numbered index to allow concatenation (joining) of data
ss_all = pd.concat([ss,eeg10S], axis=1) # join data
ss_all.set_index('Time',inplace=True) # Time as index
ss_all.head()
#ss_all.pop('index') # and drop old index
ss_all.head()
```
### Then resample as an average over 30 min to get the proportion of time asleep (scored from immobility)
```
EEG30 = ss_all.resample('30T').mean()
EEG30.tail()
EEG30.loc[:,['PIR_A','EEG_A']].plot(figsize=(18,4)) # show data for one mouse
# red #A10000 and blue #011C4E colour pallette for figure2
EEGred = ["#A10000", "#011C4E"]
sns.palplot(sns.color_palette(EEGred)) # show colours
sns.set_palette(EEGred)
sns.set_context('poster')
fig, (ax1,ax2, ax3, ax4) = plt.subplots(nrows=4, ncols=1)
fig.text(1, 0.87,'A',fontsize=24, horizontalalignment='center',verticalalignment='center')
fig.text(1, 0.635,'B',fontsize=24, horizontalalignment='center',verticalalignment='center')
fig.text(1, 0.4,'C',fontsize=24, horizontalalignment='center',verticalalignment='center')
fig.text(1, 0.162,'D',fontsize=24, horizontalalignment='center',verticalalignment='center')
fig.text(0,0.7, 'Proportion of time asleep', fontsize=18, rotation='vertical')
fig.text(0.5,0,'Time', fontsize=18)
fig.text(0.08,0.14,'PIR', fontsize=21, color="#011C4E", fontweight='semibold')
fig.text(0.08,0.11,'EEG', fontsize=21, color="#A10000", fontweight='semibold')
plt.subplot(411)
plt.plot(EEG30.index, EEG30['EEG_A'], label= "EEG total sleep",lw=2)
plt.fill_between(EEG30.index, 0, 1, where=EEG30.index>='2014-03-18 12:00:00',lw=0, alpha=0.6, facecolor='#aaaaaa')
plt.plot(EEG30.index, EEG30['PIR_A'],label= "PIR sleep", lw=2)
plt.xticks(horizontalalignment='left',fontsize=12)
plt.yticks([0,0.5,1],fontsize=12)
plt.subplot(412)
plt.plot(EEG30.index, EEG30['EEG_B'], lw=2)
plt.plot(EEG30.index, EEG30['PIR_B'], lw=2)
plt.fill_between(EEG30.index, 0, 1, where=EEG30.index>='2014-03-18 12:00:00',lw=0, alpha=0.6, facecolor='#aaaaaa')
plt.xticks(horizontalalignment='left',fontsize=12)
plt.yticks([0,0.5,1],fontsize=12)
plt.subplot(413)
plt.plot(EEG30.index, EEG30['EEG_C'], lw=2)
plt.plot(EEG30.index, EEG30['PIR_C'], lw=2)
plt.fill_between(EEG30.index, 0, 1, where=EEG30.index>='2014-03-18 12:00:00',lw=0, alpha=0.6, facecolor='#aaaaaa')
plt.xticks(horizontalalignment='left',fontsize=12)
plt.yticks([0,0.5,1],fontsize=12)
plt.subplot(414)
plt.plot(EEG30.index, EEG30['EEG_D'], lw=2)
plt.plot(EEG30.index, EEG30['PIR_D'], lw=2)
plt.fill_between(EEG30.index, 0, 1, where=EEG30.index>='2014-03-18 12:00:00',lw=0, alpha=0.6, facecolor='#aaaaaa')
plt.xticks(horizontalalignment='left',fontsize=12)
plt.yticks([0,0.5,1],fontsize=12)
plt.tight_layout(h_pad=0.2,pad=2)
# options for saving figures
#plt.savefig('correlations_BlueRed.eps',format='eps', dpi=1200, bbox_inches='tight', pad_inches=0.5)
#plt.savefig('correlations_BlueRed.jpg',format='jpg', dpi=600,frameon=2, bbox_inches='tight', pad_inches=0.5)
plt.show()
sns.set_style("white")
sns.set_context("talk", font_scale=0.6)
corr30 = EEG30
corr30.pop('Light')
sns.corrplot(corr30, sig_stars=False) # show correlation plot for all values
#plt.savefig('../../Figures/CorrFig3left.eps',format='eps', dpi=600,pad_inches=0.2, frameon=2)
```
# Bland-Altman as an alternative to correlation plots?
### Combined data from all 4 mice (paired estimates of sleep by PIR and EEG aligned in Excel)
```
df = pd.read_csv('../PIRdata/blandAltLandD.csv')
def bland_altman_plot(data1, data2, *args, **kwargs):
data1 = np.asarray(data1)
data2 = np.asarray(data2)
mean = np.mean([data1, data2], axis=0)
diff = data1 - data2 # Difference between data1 and data2
md = np.mean(diff) # Mean of the difference
sd = np.std(diff, axis=0) # Standard deviation of the difference
plt.scatter(mean, diff, *args, **kwargs)
plt.axis([0, 30, -30, 30])
plt.axhline(md, linestyle='-', *args, **kwargs)
plt.axhline(md + 1.96*sd, linestyle='--', *args, **kwargs)
plt.axhline(md - 1.96*sd, linestyle='--', *args, **kwargs)
def bland_altman_output(data1, data2, *args, **kwargs):
data1 = np.asarray(data1)
data2 = np.asarray(data2)
mean = np.mean([data1, data2], axis=0)
diff = data1 - data2 # Difference between data1 and data2
md = np.mean(diff) # Mean of the difference
sd = np.std(diff, axis=0) # Standard deviation of the difference
return md , md-(1.96*sd), md+(1.96*sd)
sns.set_context('talk')
c1, c2, c3 = sns.blend_palette(["#002147","gold","grey"], 3)
plt.subplot(111, axisbg=c3)
bland_altman_plot(df.PIR_Light, df.EEG_Light,color=c2, linewidth=3)
bland_altman_plot(df.PIR_dark, df.EEG_dark,color=c1, linewidth=3)
plt.xlabel('Average score from both methods (min)', fontsize=14)
plt.ylabel('PIR score - EEG score (min)', fontsize=14)
plt.title('Bland-Altman comparison of PIR-derived sleep and EEG-scored sleep', fontsize=16)
#plt.savefig('../../Figures/blandAltman4mice.eps',format='eps', dpi=1200,pad_inches=1,
# frameon=0)
plt.show()
bland_altman_output(df.PIR_Light, df.EEG_Light)
bland_altman_output(df.PIR_dark, df.EEG_dark)
# Combine (concatenate) these data to get overall comparison of measurements
df.PIR = pd.concat([df.PIR_dark, df.PIR_Light],axis=0)
df.EEG = pd.concat([df.EEG_dark, df.EEG_Light],axis=0)
dfall =pd.concat([df.PIR, df.EEG], axis=1, keys=['PIR', 'EEG'])
dfall.head()
bland_altman_output(dfall.PIR, dfall.EEG) # mean and 95% CIs for overall comparison
```
### What is Matplotlib?
Matplotlib is a plotting library for Python; Pyplot is a Matplotlib module which provides a MATLAB-like interface. Matplotlib is designed to be as usable as MATLAB, with the ability to use Python, and the advantage of being free and open-source.
#### What does Matplotlib Pyplot do?
Pyplot is a collection of command-style functions that make Matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.
```
# import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
### Line chart
It is a chart in which a series of data points is connected by straight lines, which makes it useful for comparing related features (i.e., x and y). We can explicitly define the grid, the x and y axis scales and labels, the title, and the display options.
```
a= range(1,16)
b = np.array(a)**2
# Now, by just applying the plot command, the chart below will appear
plt.plot(a,b)
# we can change the line color as follows
plt.plot(a,b,color='red')
# we can change the line style and width with the ls and lw parameters
plt.plot(a,b,color='red', ls='--',lw=2)
# or we can define the marker
plt.plot(a,b,color='green', marker='4',mew=10)
# we can enable grid view
plt.grid()
plt.plot(a,b,color='orange', ls='--',lw=2)
```
Plotting the line chart from a pandas DataFrame
```
delhi_sale = [45,34,76,65,73,40]
bangalore_sale = [51,14,36,95,33,45]
pune_sale = [39,85,34,12,55,8]
sales = pd.DataFrame({'Delhi':delhi_sale,'Bangalore':bangalore_sale,'Pune':pune_sale})
sales
# Let's plot the line chart; xticks and yticks are used to specify the significant range of each axis
sales.plot(xticks=range(1,6),yticks=range(0,100,20))
# we can define color for different lines
color = ['Red','Yellow','Black']
sales.plot(xticks=range(1,6),yticks=range(0,100,20),color = color)
```
### Bar Chart
A bar chart is used to analyse grouped data. A bar chart or bar graph presents categorical data with rectangular bars whose heights or lengths are proportional to the values that they represent. The bars can be plotted vertically or horizontally.
```
plt.bar(a,b)
```
Plotting the bar chart from a pandas DataFrame
```
#we can generate bar chart from pandas DataFrame
sales.plot(kind='bar')
```
### Pie Chart
A pie chart represents the whole dataset as a circle. Different categories make slices of the circle based on their proportion.
```
a = [3,4,5,8,15]
plt.pie(a,labels=['A','B','C','D','E'])
# we can define a color for each category
color_list = ['Red','Blue','Green','black','orange']
plt.pie(a,labels=['A','B','C','D','E'],colors=color_list)
```
### Histograms
A histogram allows us to determine the shape of continuous data. It is one of the plots commonly used in statistics. Using it we can see the distribution of the data, detect outliers, and observe other useful properties.
To construct a histogram from continuous data, we need to create bins and put the data into the appropriate bin. The bins parameter tells you the number of bins that your data will be divided into.
```
# For example, here we ask for 20 bins:
x = np.random.randn(100)
plt.hist(x, bins=20)
# And here we ask for bin edges at the locations [-4, -3, -2... 3, 4].
plt.hist(x, bins=range(-4, 5))
```
### Scatter Plot
It is used to show the relationship between two sets of data points, for example a person's weight and height.
```
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = (30 * np.random.rand(N))**2 # 0 to 30 point radii
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()
```
### Box Plot
A box plot is used to understand the spread of a variable. In a box plot, the top boundary of the rectangle represents the third quartile, the bottom boundary represents the first quartile, and the line inside the box indicates the median.
The vertical line at the top indicates the maximum value and the vertical line at the bottom indicates the minimum value.
```
box_data = np.random.normal(56,10,50).astype(int)
plt.boxplot(box_data)
```
Precipitation Metrics (consecutive dry days, rolling 5-day precip accumulation, return period)
```
! pip install xclim
%matplotlib inline
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
from datetime import datetime, timedelta, date
import dask
import dask.array as dda
import dask.distributed as dd
# rhodium-specific kubernetes cluster configuration
import rhg_compute_tools.kubernetes as rhgk
client, cluster = rhgk.get_big_cluster()
cluster.scale(30)
client
# cluster.close()  # run this at the very end of the analysis; closing the cluster here would stop the workers
def pull_ERA5_variable(filevar, variable):
filenames = []
for num_yrs in range(len(yrs)):
filename = '/gcs/impactlab-data/climate/source_data/ERA-5/{}/daily/netcdf/v1.3/{}_daily_{}-{}.nc'.format(filevar, filevar, yrs[num_yrs], yrs[num_yrs])
filenames.append(filename)
era5_var = xr.open_mfdataset(filenames,
concat_dim='time', combine='by_coords')
var_all = era5_var[variable]
return var_all
yrs = np.arange(1995,2015)
da = pull_ERA5_variable('pr', 'tp')
import xclim as xc
from xclim.core.calendar import convert_calendar
# remove leap days and convert calendar to no-leap
da = convert_calendar(da, 'noleap')
da_mm = da*1000
da_mm.attrs["units"] = "mm/day"
da_mm = da_mm.persist()
```
Calculate the max number of consecutive dry days per year. Use the threshold value for the wet day frequency correction
```
dry_days = xc.indicators.atmos.maximum_consecutive_dry_days(da_mm, thresh=0.0005, freq='YS')
dry_days = dry_days.compute()
#dry_days.sel(latitude=50.0, longitude=0.0).plot()
avg_dry_days = dry_days.mean(dim='time').compute()
avg_dry_days.plot(robust=True)
from matplotlib import cm
from cartopy import config
import cartopy.crs as ccrs
import cartopy.feature as cfeature
def plot_average_dry_days(da, years, fname):
fig = plt.figure(figsize=(10, 5))
ax = plt.axes(projection=ccrs.Robinson())
cmap = cm.pink_r
da.plot(
ax=ax,
cmap=cmap,
transform=ccrs.PlateCarree(),
cbar_kwargs={'shrink': 0.8, 'pad': 0.02, "label": "# of days"},
vmin=0,
vmax=180,
)
ax.coastlines()
ax.add_feature(cfeature.BORDERS, linestyle=":")
ax.set_title("Mean number of consecutive dry days annually ({})".format(years))
plt.savefig(fname, dpi=600, bbox_inches='tight')
plot_average_dry_days(avg_dry_days, '1995-2014', 'avg_dry_days_era5')
```
Calculate the highest precipitation amount cumulated over a 5-day moving window
```
max_5day_dailyprecip = xc.indicators.icclim.RX5day(da_mm, freq='YS')
# there is a different function for a n-day moving window
max_5day_dailyprecip = max_5day_dailyprecip.compute()
avg_5day_dailyprecip = max_5day_dailyprecip.mean(dim='time').compute()
avg_5day_dailyprecip.plot()
def plot_average_5day_max_precip(da, years, fname):
fig = plt.figure(figsize=(10, 5))
ax = plt.axes(projection=ccrs.Robinson())
cmap = cm.GnBu
da.plot(
ax=ax,
cmap=cmap,
transform=ccrs.PlateCarree(),
cbar_kwargs={'shrink': 0.8, 'pad': 0.02, "label": "5-day accumulated precip (mm)"},
vmin=0,
vmax=250,
)
ax.coastlines()
ax.add_feature(cfeature.BORDERS, linestyle=":")
ax.set_title("Maximum annual 5-day rolling precipitation accumulation ({})".format(years))
plt.savefig(fname, dpi=600, bbox_inches='tight')
plot_average_5day_max_precip(avg_5day_dailyprecip, '1995-2014', 'avg_max_5day_precip_era5')
```
Comparing the mean computed with NaNs included against the mean computed while skipping NaNs
```
avg_5day_dailyprecip = max_5day_dailyprecip.mean(dim='time', skipna=True).compute()
avg_5day_dailyprecip
plot_average_5day_max_precip(avg_5day_dailyprecip, '1995-2014', 'avg_max_5day_precip_era5_skipna')  # output filename is a placeholder
max_5day_dailyprecip.sel(latitude=-89.0, longitude=0.0).plot()
```
Basics for calculating the return period of daily precipitation. More testing needed as it blows up currently.
```
def calculate_return(da, return_interval):
'''
calculate return period of daily precip data per grid point
'''
# Sort data smallest to largest
sorted_data = da.sortby(da, ascending=True).compute()
# Count total obervations
n = sorted_data.shape[0]
# Compute rank position
rank = np.arange(1, 1 + n)
# Calculate probability
probability = (n - rank + 1) / (n + 1)
# Calculate return - data are daily to then divide by 365?
return_year = (1 / probability)
# Round return period
return_yr_rnd = np.around(return_year, decimals=1)
# identify daily precip for specified return interval
indices = np.where(return_yr_rnd == return_interval)
# Compute over daily accumulation for the X return period
mean_return_period_value = sorted_data[indices].mean().compute()
return(mean_return_period_value)
# Example for a single grid cell (requires lat and lon to be chosen first):
# da_grid_cell = da.sel(latitude=lat, longitude=lon)
# da_grid_cell
# applyufunc --> this applies a function to a single grid cell
return_values = []
for ilat in range(0, len(da.latitude)):
    for ilon in range(0, len(da.longitude)):
# create array to store lon values per lat
values_per_lat = []
# select da per grid cell
        da_grid_cell = da.sel(latitude=da.latitude[ilat], longitude=da.longitude[ilon])
# compute return period value & append
mean_return_value = calculate_return(da_grid_cell, 5.0)
values_per_lat.append(mean_return_value)
# for each latitude save all longitude values
return_values.append(values_per_lat)
return_values
for lat in da.latitude:
for lon in da.longitude:
da_grid_cell = da.sel(latitude=lat, longitude=lon)
mean_return_value = calculate_return(da_grid_cell, 5.0)
```
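The explicit Python loops above are slow on a global grid. As hinted by the `applyufunc` comment, the same per-grid-cell calculation can be vectorised with `xarray.apply_ufunc`; the following is a hedged, untested sketch that mirrors the logic of `calculate_return`:
```
def return_value_1d(values, return_interval=5.0):
    # Mirror of calculate_return() for a 1-D numpy array of daily values
    sorted_vals = np.sort(values)
    n = sorted_vals.shape[0]
    rank = np.arange(1, n + 1)
    probability = (n - rank + 1) / (n + 1)
    return_year = np.around(1 / probability, decimals=1)
    idx = np.where(return_year == return_interval)
    return sorted_vals[idx].mean() if idx[0].size else np.nan

return_map = xr.apply_ufunc(
    return_value_1d,
    da,                          # daily precipitation DataArray
    input_core_dims=[["time"]],  # collapse the time dimension per grid cell
    vectorize=True,              # loop over latitude/longitude internally
    dask="parallelized",
    output_dtypes=[float],
)
```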
Breakdown of per step testing of return period
```
da_test = da.sel(latitude=75.0, longitude=18.0).persist()
da_test
mean = calculate_return(da_test, 5.0)
mean
sorted_data = da_test.sortby(da_test, ascending=True).compute()
sorted_data
n = sorted_data.shape[0]
n
rank = np.arange(1, 1 + n) # sorted_data.insert(0, 'rank', range(1, 1 + n))
rank
probability = (n - rank + 1) / (n + 1)
probability
return_year = (1 / probability)
return_year
return_yr_rnd = np.around(return_year, decimals=1)
return_yr_rnd[5679]
indices = np.where(return_yr_rnd == 5.0)
indices
sorted_data[indices].mean().compute()
sorted_test = np.sort(da_test, axis=0)
sorted_test = xr.DataArray(sorted_test)
sorted_test
```
# IElixir - Elixir kernel for Jupyter Project
<img src="logo.png" title="Hosted by imgur.com" style="margin: 0 0;"/>
---
## Google Summer of Code 2015
> Developed by [Piotr Przetacznik](https://twitter.com/pprzetacznik)
> Mentored by [José Valim](https://twitter.com/josevalim)
---
## References
* [Elixir language](http://elixir-lang.org/)
* [Jupyter Project](https://jupyter.org/)
* [IElixir sources](https://github.com/pprzetacznik/IElixir)
## Getting Started
### Basic Types
<pre>
1 # integer
0x1F # integer
1.0 # float
true # boolean
:atom # atom / symbol
"elixir" # string
[1, 2, 3] # list
{1, 2, 3} # tuple
</pre>
### Basic arithmetic
```
1 + 2
5 * 5
10 / 2
div(10, 2)
div 10, 2
rem 10, 3
0b1010
0o777
0x1F
1.0
1.0e-10
round 3.58
trunc 3.58
```
### Booleans
```
true
true == false
is_boolean(true)
is_boolean(1)
is_integer(5)
is_float(5)
is_number("5.0")
```
### Atoms
```
:hello
:hello == :world
true == :true
is_atom(false)
is_boolean(:false)
```
### Strings
```
"hellö"
"hellö #{:world}"
IO.puts "hello\nworld"
is_binary("hellö")
byte_size("hellö")
String.length("hellö")
String.upcase("hellö")
```
### Anonymous functions
```
add = fn a, b -> a + b end
is_function(add)
is_function(add, 2)
is_function(add, 1)
add.(1, 2)
add_two = fn a -> add.(a, 2) end
add_two.(2)
x = 42
(fn -> x = 0 end).()
x
```
### (Linked) Lists
```
a = [1, 2, true, 3]
length [1, 2, 3]
[1, 2, 3] ++ [4, 5, 6]
[1, true, 2, false, 3, true] -- [true, false]
hd(a)
tl(a)
hd []
[11, 12, 13]
[104, 101, 108, 108, 111]
'hello' == "hello"
```
### Tuples
```
{:ok, "hello"}
tuple_size {:ok, "hello"}
tuple = {:ok, "hello"}
elem(tuple, 1)
tuple_size(tuple)
put_elem(tuple, 1, "world")
tuple
```
### Lists or tuples?
```
list = [1|[2|[3|[]]]]
[0] ++ list
list ++ [4]
File.read("LICENSE")
File.read("path/to/unknown/file")
```
### Other examples
```
0x1F
a = 25
b = 150
IO.puts(a+b)
defmodule Math do
def sum(a, b) do
a + b
end
end
Math.sum(1, 2)
import ExUnit.CaptureIO
capture_io(fn -> IO.write "john" end) == "john"
?a
<<98>> == <<?b>>
<<?g, ?o, ?\n>> == "go\n"
{hlen, blen} = {4, 4}
<<header :: binary-size(hlen), body :: binary-size(blen)>> = "headbody"
{header, body}
h()
defmodule KV.Registry do
use GenServer
## Client API
@doc """
Starts the registry.
"""
def start_link(opts \\ []) do
GenServer.start_link(__MODULE__, :ok, opts)
end
@doc """
Looks up the bucket pid for `name` stored in `server`.
Returns `{:ok, pid}` if the bucket exists, `:error` otherwise.
"""
def lookup(server, name) do
GenServer.call(server, {:lookup, name})
end
@doc """
Ensures there is a bucket associated to the given `name` in `server`.
"""
def create(server, name) do
GenServer.cast(server, {:create, name})
end
## Server Callbacks
def init(:ok) do
{:ok, HashDict.new}
end
def handle_call({:lookup, name}, _from, names) do
{:reply, HashDict.fetch(names, name), names}
end
def handle_cast({:create, name}, names) do
if HashDict.has_key?(names, name) do
{:noreply, names}
else
{:ok, bucket} = KV.Bucket.start_link()
{:noreply, HashDict.put(names, name, bucket)}
end
end
end
ExUnit.start()
defmodule KV.RegistryTest do
use ExUnit.Case, async: true
setup do
{:ok, registry} = KV.Registry.start_link
{:ok, registry: registry}
end
test "spawns buckets", %{registry: registry} do
assert KV.Registry.lookup(registry, "shopping") == :error
KV.Registry.create(registry, "shopping")
assert {:ok, bucket} = KV.Registry.lookup(registry, "shopping")
KV.Bucket.put(bucket, "milk", 1)
assert KV.Bucket.get(bucket, "milk") == 1
end
end
```
## IElixir magic commands
Get output of previous cell.
```
ans
```
You can also access output of any cell using it's number.
```
out[142]
```
# Logistic Regression
Notebook version: 2.0 (Nov 21, 2017)
2.1 (Oct 19, 2018)
Author: Jesús Cid Sueiro ([email protected])
Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
v.1.1 - Typo correction. Prepared for slide presentation
v.2.0 - Prepared for Python 3.0 (backwards compatible with 2.7)
Assumptions for regression model modified
v.2.1 - Minor changes regarding notation and assumptions
```
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
import csv
import random
import matplotlib
import matplotlib.pyplot as plt
import pylab
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
```
# Logistic Regression
## 1. Introduction
### 1.1. Binary classification and decision theory. The MAP criterion
The goal of a classification problem is to assign a *class* or *category* to every *instance* or *observation* of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or *decision*. If $y=\hat{y}$, the decision is a *hit*, otherwise $y\neq \hat{y}$ and the decision is an *error*.
Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criterion for classification is to select the predictor $\hat{Y}=f({\bf x})$ in such a way that the probability of error, $P\{\hat{Y} \neq Y\}$, is minimum. Noting that
$$
P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} p_{\bf X}({\bf x}) d{\bf x}
$$
the optimal decision is obtained if, for every sample ${\bf x}$, we make the decision minimizing the conditional error probability:
\begin{align}
\hat{y}^* &= \arg\min_{\hat{y}} P\{\hat{y} \neq Y |{\bf x}\} \\
&= \arg\max_{\hat{y}} P\{\hat{y} = Y |{\bf x}\} \\
\end{align}
Thus, the optimal decision rule can be expressed as
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x})
$$
or, equivalently
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
The classifier implementing this decision rule is usually named MAP (*Maximum A Posteriori*). As we have seen, the MAP classifier minimizes the error probability for binary classification, but the result can also be generalized to multiclass classification problems.
### 1.2. Parametric classification.
Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal D = \{{\bf x}^{(k)}, y^{(k)}\}_{k=0}^{K-1}$ of instances and their respective class labels.
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=0,\ldots,{K-1}\}$ of independent and identically distributed (i.i.d.) samples from an ***unknown*** distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error.
Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker.
Parametric classifiers based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula:
$$
P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x})
$$
where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated to a different decision maker.
In practice, the dataset ${\mathcal D}$ is used to select a particular parameter vector $\hat{\bf w}$ according to a certain criterion. Accordingly, the decision rule becomes
$$
f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
In this lesson, we explore one of the most popular model-based parametric classification methods: **logistic regression**.
<img src="./figs/parametric_decision.png", width=400>
## 2. Logistic regression.
### 2.1. The logistic function
The logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation $X\in \mathbb{R}^N$ satisfies the expression.
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$
where ${\bf w}$ is a parameter vector and $g(·)$ is the *logistic* function, which is defined by
$$g(t) = \frac{1}{1+\exp(-t)}$$
It is straightforward to see that the logistic function has the following properties:
- **P1**: Probabilistic output: $\quad 0 \le g(t) \le 1$
- **P2**: Symmetry: $\quad g(-t) = 1-g(t)$
- **P3**: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$
In the following we define a logistic function in python, and use it to plot a graphical representation.
**Exercise 1**: Verify properties P2 and P3.
**Exercise 2**: Implement a function to compute the logistic function, and use it to plot this function in the interval $[-6,6]$.
```
# Define the logistic function
def logistic(t):
#<SOL>
#</SOL>
# Plot the logistic function
t = np.arange(-6, 6, 0.1)
z = logistic(t)
plt.plot(t, z)
plt.xlabel('$t$', fontsize=14)
plt.ylabel('$g(t)$', fontsize=14)
plt.title('The logistic function')
plt.grid()
```
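For reference, one possible solution for the `<SOL>` block above, together with a quick numerical check of properties P2 and P3, is:
```
def logistic(t):
    return 1.0 / (1 + np.exp(-t))

# Quick numerical check of P2 (symmetry) and P3 (monotonicity) on a grid
t = np.arange(-6, 6, 0.1)
print(np.allclose(logistic(-t), 1 - logistic(t)))   # P2
print(np.all(np.diff(logistic(t)) >= 0))            # P3 (non-decreasing)
```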
### 2.2. Classifiers based on the logistic model.
The MAP classifier under a logistic model will have the form
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$
Therefore
$$
2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad
1 + \exp(-{\bf w}^\intercal{\bf x}) $$
which is equivalent to
$${\bf w}^\intercal{\bf x}
\quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad
0 $$
Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$.
```
# Weight vector:
w = [4, 8] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
Z = logistic(w[0]*xx0 + w[1]*xx1)
# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
ax.contour(xx0, xx1, Z, levels=[0.5], colors='b', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
```
The next code fragment shows the output of the same classifier, plotting the logistic function over the $x_0$-$x_1$ plane and encoding its value with color.
```
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
### 2.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation
$$
{\bf w}^\intercal{\bf z} = 0
$$
**Exercise 3**: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by
$$
P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2)
$$
```
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
# Z = <FILL IN>
# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
## 3. Inference
Remember that the idea of parametric classification is to use the training data set $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times \{0,1\}, k=0,\ldots,{K-1}\}$ to set the parameter vector ${\bf w}$ according to certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png", width=400>
We still need to choose a criterion to optimize in the selection of the parameter vector. In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$:
* Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$
* Maximum *A Posteriori* (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal D}}({\bf w}|{\mathcal D})$
For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g[{\bf w}^\intercal{\bf z}({\bf x})]
= g[-{\bf w}^\intercal{\bf z}({\bf x})]$$
we can write
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[\overline{y}{\bf w}^\intercal{\bf z}({\bf x})]$$
where $\overline{y} = 2y-1$ is a *symmetrized label* ($\overline{y}\in\{-1, 1\}$).
### 3.1. Model assumptions
In the following, we will make the following assumptions:
- **A1**. (Logistic Regression): We assume a logistic model for the *a posteriori* probability of ${Y=1}$ given ${\bf X}$, i.e.,
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})].$$
- **A2**. All samples in ${\mathcal D}$ have been generated by the same distribution, $p_{{\bf X}, Y}({\bf x}, y)$.
- **A3**. Input variables $\bf x$ do not depend on $\bf w$. This implies that
$$p({\bf x}|{\bf w}) = p({\bf x})$$
- **A4**. Targets $y^{(0)}, \cdots, y^{(K-1)}$ are statistically independent given $\bf w$ and the inputs ${\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}$, that is:
$$p(y^{(0)}, \cdots, y^{(K-1)} | {\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) = \prod_{k=0}^{K-1} p(y^{(k)} | {\bf x}^{(k)}, {\bf w})$$
### 3.2. ML estimation.
The ML estimate is defined as
$$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$$
Using assumptions A2 and A3 above, we have that
\begin{align}
P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w}) & = p(y^{(0)}, \cdots, y^{(K-1)},{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)})\end{align}
Finally, using assumption A4, we can formulate the ML estimation of $\bf w$ as the resolution of the following optimization problem
\begin{align}
\hat {\bf w}_\text{ML} & = \arg \max_{\bf w} p(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \\
& = \arg \max_{\bf w} \prod_{k=0}^{K-1} P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \max_{\bf w} \sum_{k=0}^{K-1} \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \min_{\bf w} \sum_{k=0}^{K-1} - \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w})
\end{align}
where the arguments of the maximization or minimization problems of the last three lines are usually referred to as the **likelihood**, **log-likelihood** $\left[L(\bf w)\right]$, and **negative log-likelihood** $\left[\text{NLL}(\bf w)\right]$, respectively.
Now, using A1 (the logistic model)
\begin{align}
\text{NLL}({\bf w})
&= - \sum_{k=0}^{K-1}\log\left[g\left(\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \\
&= \sum_{k=0}^{K-1}\log\left[1+\exp\left(-\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right]
\end{align}
where ${\bf z}^{(k)}={\bf z}({\bf x}^{(k)})$.
It can be shown that $\text{NLL}({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient.
\begin{align}
\nabla_{\bf w} \text{NLL}(\hat{\bf w}_{\text{ML}})
&= - \sum_{k=0}^{K-1}
\frac{\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right) \overline{y}^{(k)} {\bf z}^{(k)}}
{1+\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}
\right)} = \\
&= - \sum_{k=0}^{K-1} \left[y^{(k)}-g(\hat{\bf w}_{\text{ML}}^T {\bf z}^{(k)})\right] {\bf z}^{(k)} = 0
\end{align}
Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be taken out from the above equation, and some iterative optimization algorithm must be used to search for the minimum.
### 3.3. Gradient descent.
A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>.
\begin{align}
{\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} \text{NLL}({\bf w}_n)
\end{align}
where $\rho_n >0$ is the *learning step*.
Applying the gradient descent rule to logistic regression, we get the following algorithm:
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n \sum_{k=0}^{K-1} \left[y^{(k)}-g({\bf w}_n^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)}
\end{align}
Defining vectors
\begin{align}
{\bf y} &= [y^{(0)},\ldots,y^{(K-1)}]^\intercal \\
\hat{\bf p}_n &= [g({\bf w}_n^\intercal {\bf z}^{(0)}), \ldots, g({\bf w}_n^\intercal {\bf z}^{(K-1)})]^\intercal
\end{align}
and matrix
\begin{align}
{\bf Z} = \left[{\bf z}^{(0)},\ldots,{\bf z}^{(K-1)}\right]^\intercal
\end{align}
we can write
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
In the following, we will explore the behavior of the gradient descent method using the Iris Dataset.
#### 3.3.1. Example: Iris Dataset.
As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (*setosa*, *versicolor* or *virginica*). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
We will try to fit the logistic regression model to discriminate between two classes using only two attributes.
First, we load the dataset and split it into training and test subsets.
```
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
xTrain = []
cTrain = []
xTest = []
cTest = []
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:4])
cTrain.append(item[4])
else:
xTest.append(item[0:4])
cTest.append(item[4])
return xTrain, cTrain, xTest, cTest
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', nTrain_all)
print('Test:', nTest_all)
```
Now, we select two classes and two attributes.
```
# Select attributes
i = 0 # Try 0,1,2,3
j = 1 # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
```
#### 3.3.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show fewer instabilities and convergence problems when data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance.
```
def normalize(X, mx=None, sx=None):
# Compute means and standard deviations
if mx is None:
mx = np.mean(X, axis=0)
if sx is None:
sx = np.std(X, axis=0)
# Normalize
X0 = (X-mx)/sx
return X0, mx, sx
```
Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set.
```
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)
```
The following code generates a plot of the normalized training data.
```
# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
plt.show()
```
In order to apply the gradient descent rule, we need to define two methods:
- A `fit` method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A `predict` method, that receives the model weights and a set of inputs, and returns the posterior class probabilities for those inputs, as well as the corresponding class predictions.
```
def logregFit(Z_tr, Y_tr, rho, n_it):
# Data dimension
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
Y_tr2 = 2*Y_tr - 1 # Transform labels into binary symmetric.
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
# Compute posterior probabilities for weight w
p1_tr = logistic(np.dot(Z_tr, w))
# Compute negative log-likelihood
# (note that this is not required for the weight update, only for nll tracking)
nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
# Update weights
w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)
return w, nll_tr
def logregPredict(Z, w):
# Compute posterior probability of class 1 for weights w.
p = logistic(np.dot(Z, w)).flatten()
# Class
D = [int(round(pn)) for pn in p]
return p, D
```
We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$.
```
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 200 # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])
```
#### 3.2.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors:
- Number of iterations
- Initialization
- Learning step
**Exercise**: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.
Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code cell inside the loop, with some proper modifications. To plot a histogram of the values in array `p` with `n` bins, you can use `plt.hist(p, n)`.
##### 3.2.3.1. Learning step
The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, convergence becomes very slow and more iterations are required.
**Exercise 3**: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ that marks the boundary between convergence and divergence?
**Exercise 4**: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$.
Note that you should explore the values of $\rho$ in a logarithmic scale. For instance, you can take $\rho = 1, 1/10, 1/100, 1/1000, \ldots$
In practice, the selection of $\rho$ may be a matter of trial and error. There is also some theoretical evidence that the learning step should decrease over time towards zero, and that the sequence $\rho_n$ should satisfy two conditions:
- C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (the steps decrease fast enough)
- C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not so fast that the algorithm stops making progress)
For instance, we can take $\rho_n= 1/n$. Another common choice is $\rho_n = \alpha/(1+\beta n)$ where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method.
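As an illustration of such a decreasing schedule, the sketch below is a variant of the `logregFit` function defined above that recomputes the learning step at every iteration as $\rho_n = \alpha/(1+\beta n)$. This is only a sketch: the function name and the values of `alpha` and `beta` in the usage comment are arbitrary choices, and it reuses the `logistic` function already used elsewhere in this notebook.
```
import numpy as np

def logregFitDecreasing(Z_tr, Y_tr, alpha, beta, n_it):
    # Variant of logregFit with a decreasing learning step rho_n = alpha / (1 + beta * n).
    # For beta > 0 this schedule satisfies conditions C1 and C2 above.
    n_dim = Z_tr.shape[1]
    nll_tr = np.zeros(n_it)
    Y_tr2 = 2*Y_tr - 1                      # binary symmetric labels, as in logregFit
    w = np.random.randn(n_dim, 1)
    for n in range(n_it):
        rho_n = alpha / (1 + beta * n)      # decreasing learning step
        p1_tr = logistic(np.dot(Z_tr, w))   # posterior probabilities for the current w
        nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
        w += rho_n * np.dot(Z_tr.T, Y_tr - p1_tr)
    return w, nll_tr

# Example usage (assumes Z_tr and the column vector Y_tr2 built earlier in the notebook):
# w, nll_tr = logregFitDecreasing(Z_tr, Y_tr2, alpha=0.1, beta=0.01, n_it=200)
```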
#### 3.2.4. Visualizing the posterior map.
We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights.
```
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy /400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
                     np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute Z's
Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid]
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
#### 3.2.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the `PolynomialFeatures` method in `sklearn.preprocessing`.
```
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])
```
Visualizing the posterior map we can see that the polynomial transformation produces nonlinear decision boundaries.
```
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
plt.legend(loc='best')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
## 4. Regularization and MAP estimation.
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal D})
$$
The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal D}) =
\frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})}
{p\left({\mathcal D}\right)}
$$
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP solution is given by
\begin{align}
\hat{\bf w}_{\text{MAP}} & = \arg\max_{\bf w} P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \\
& = \arg\max_{\bf w} \left\{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right\} \\
& = \arg\min_{\bf w} \left\{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right\}
\end{align}
In the light of this expression, we can conclude that the MAP solution is affected by two terms:
- The likelihood, which takes large values for parameter vectors $\bf w$ that fit well the training data
- The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our *a priori* preference for some solutions. Usually, we resort to prior distributions that take large values when $\|{\bf w}\|$ is small (associated with smooth classification boundaries).
We can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.
### 4.1 MAP estimation with Gaussian prior
If we assume that ${\bf W}$ is a zero-mean Gaussian random variable with variance matrix $v{\bf I}$,
$$
p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right)
$$
the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2
\right\}
\end{align}
where $C = 2v$. Noting that
$$\nabla_{\bf w}\left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\}
= - {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},
$$
we obtain the following gradient descent rule for MAP estimation
\begin{align}
{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n
+ \rho_n {\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
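A minimal sketch of this update rule in code, mirroring the structure of `logregFit` above (the function name `logregFitMAPGauss` and the default value of `C` are arbitrary choices, not part of the original lab):
```
import numpy as np

def logregFitMAPGauss(Z_tr, Y_tr, rho, n_it, C=1000):
    # Gradient descent for MAP estimation with a zero-mean Gaussian prior (L2 penalty).
    # Reuses the logistic() function defined earlier in the notebook.
    n_dim = Z_tr.shape[1]
    nll_tr = np.zeros(n_it)
    Y_tr2 = 2*Y_tr - 1
    w = np.random.randn(n_dim, 1)
    for n in range(n_it):
        p1_tr = logistic(np.dot(Z_tr, w))
        nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
        # MAP update: shrink the current weights, then take the likelihood-driven step
        w = (1 - 2*rho/C)*w + rho*np.dot(Z_tr.T, Y_tr - p1_tr)
    return w, nll_tr
```
Note that the only change with respect to `logregFit` is the shrinkage factor $(1-2\rho_n/C)$ applied to the previous weights.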
### 4.2 MAP estimation with Laplacian prior
If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by
$$
p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)
$$
(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate is
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{L({\bf w}) + \frac{1}{C}\|{\bf w}\|_1
\right\}
\end{align}
The additional term introduced by the prior in the optimization algorithm is usually named the *regularization term*. It is usually very effective to avoid overfitting when the dimension of the weight vectors is high. Parameter $C$ is named the *inverse regularization strength*.
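Both regularizers are also exposed by scikit-learn's `LogisticRegression` (used later in this notebook) through its `penalty` and `C` arguments, where `C` is precisely the inverse regularization strength mentioned above. A minimal sketch (the value of `C` here is arbitrary):
```
from sklearn.linear_model import LogisticRegression

# Gaussian prior (quadratic penalty): penalty='l2'
clf_l2 = LogisticRegression(penalty='l2', C=1.0)

# Laplacian prior (L1 penalty): penalty='l1' requires a compatible solver
clf_l1 = LogisticRegression(penalty='l1', C=1.0, solver='liblinear')

# Either classifier is then trained in the usual way, e.g. clf_l2.fit(Z_tr, Y_tr)
```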
**Exercise 5**: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior.
## 5. Other optimization algorithms
### 5.1. Stochastic Gradient descent.
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right)
\end{align}
Once all samples in the training set have been used, the algorithm can continue by cycling through the training set several times (i.e., running multiple epochs).
The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs more iterations to converge.
**Exercise 6**: Modify logregFit to implement an algorithm that applies the SGD rule.
### 5.2. Newton's method
Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$
$$
C({\bf w}) \approx C({\bf w}_0)
+ \nabla_{\bf w}^\intercal C({\bf w}_0)({\bf w}-{\bf w}_0)
+ \frac{1}{2}({\bf w}-{\bf w}_0)^\intercal{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)
$$
where ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> *Hessian* matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of $C$ around ${\bf w}_0$ can be approximated as
$$
{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0)
$$
Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.
<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second order Taylor series expansion around the current estimate. As a result, the learning rule becomes
$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}(\hat{\bf w}_{n})^{-1} \nabla_{{\bf w}}C(\hat{\bf w}_{n})
$$
For instance, for the MAP estimate with Gaussian prior, the *Hessian* matrix becomes
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + \sum_{k=1}^K f({\bf w}^T {\bf z}^{(k)}) \left(1-f({\bf w}^T {\bf z}^{(k)})\right){\bf z}^{(k)} ({\bf z}^{(k)})^\intercal
$$
Defining diagonal matrix
$$
{\mathbf S}({\bf w}) = \text{diag}\left(f({\bf w}^T {\bf z}^{(k)}) \left(1-f({\bf w}^T {\bf z}^{(k)})\right)\right)
$$
the Hessian matrix can be written in more compact form as
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}({\bf w}) {\bf Z}
$$
Therefore, Newton's algorithm for logistic regression becomes
\begin{align}
\hat{\bf w}_{n+1} = \hat{\bf w}_{n} +
\rho_n
\left(\frac{2}{C}{\bf I} + {\bf Z}^\intercal {\bf S}(\hat{\bf w}_{n})
{\bf Z}
\right)^{-1}
{\bf Z}^\intercal \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
```
def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):
# Compute Z's
r = 2.0/C
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
p_tr = logistic(np.dot(Z_tr, w))
sk = np.multiply(p_tr, 1-p_tr)
S = np.diag(np.ravel(sk.T))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))
# Update weights
invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))
w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))
return w, nll_tr
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
C = 1000
g = 4
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(X_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
print('The NLL after training is:', str(nll_tr[len(nll_tr)-1]))
```
## 6. Logistic regression in Scikit Learn.
The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm.
```
# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.legend(loc='best')
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
## [Experiments] Uncertainty Sampling with a 1D Gaussian Process as model
First, we define a prior probability for the model.
The GaussianProcessRegressor then fits this model to the observed data by maximizing the log-marginal likelihood with respect to the kernel hyperparameters.
The resulting model provides a predictive mean and an uncertainty (standard deviation) at every input.
We use these to determine the next data point that should be labeled and to assess the current data set.
```
%matplotlib inline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (RBF, Matern, RationalQuadratic,
ExpSineSquared, DotProduct,
ConstantKernel)
import math
import numpy as np
from matplotlib import pyplot as plt
size = 100
kernel = 1.0 * RBF(length_scale=1.0,length_scale_bounds=(1e-1,10.0))
gp = GaussianProcessRegressor(kernel=kernel)
# plot prior probability of model
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
X_ = np.linspace(0, 5, size)
y_mean, y_std = gp.predict(X_[:, np.newaxis], return_std=True)
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean - y_std, y_mean + y_std,
alpha=0.2, color='k')
y_samples = gp.sample_y(X_[:, np.newaxis], 10)
plt.plot(X_, y_samples, lw=1)
plt.xlim(0, 5)
plt.ylim(-3, 3)
plt.title("Prior (kernel: %s)" % kernel, fontsize=12)
# Generate data and fit GP
rng = np.random.RandomState(4)
X = np.linspace(0, 5, 100)[:, np.newaxis]
y = np.sin((X[:, 0] - 2.5) ** 2)
budget = 10
requested_X = []
requested_y = []
# init model with random data point
start = np.random.choice(np.arange(size))
requested_X.append(X[start])
requested_y.append(y[start])
gp.fit(requested_X, requested_y)
y_mean, y_std = gp.predict(X_[:, np.newaxis], return_std=True)
for index in range(2,10):
max_std = np.unravel_index(np.argmax(y_std, axis=None), y_std.shape)
requested_X.append(X[max_std])
requested_y.append(y[max_std])
gp.fit(requested_X, requested_y)
y_mean, y_std = gp.predict(X_[:, np.newaxis], return_std=True)
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean - y_std, y_mean + y_std,
alpha=0.2, color='k')
y_samples = gp.sample_y(X_[:, np.newaxis], 7)
plt.plot(X_, y_samples, lw=1)
plt.plot(X_, y, lw=2,color='b',zorder =8, dashes=[1,1],)
plt.scatter(requested_X, requested_y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.xlim(0, 5)
plt.ylim(-3, 3)
plt.title("%s examles: Posterior (kernel: %s)\n Log-Likelihood: %.3f"
% (index, gp.kernel_, gp.log_marginal_likelihood(gp.kernel_.theta)),
fontsize=12)
plt.show()
```
Note how the new data point we acquired after 9 iterations completely changed the certainty about our model.
# Probability Distribution:
In [probability theory](https://en.wikipedia.org/wiki/Probability_theory) and [statistics](https://en.wikipedia.org/wiki/statistics), a probability distribution is a [mathematical function](https://en.wikipedia.org/wiki/Function_(mathematics)) that, stated in simple terms, can be thought of as providing the probabilities of occurrence of different possible outcomes in an experiment.
In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena can include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed.
### Discrete and Continuous Distributions
Probability distributions are generally divided into two classes. A __discrete probability distribution__ (applicable to the scenarios where the set of possible outcomes is discrete, such as a coin toss or a roll of dice) can be encoded by a discrete list of the probabilities of the outcomes, known as a [probability mass function](https://en.wikipedia.org/wiki/Probability_mass_function). On the other hand, a __continuous probability distribution__ (applicable to the scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day) is typically described by probability density functions (with the probability of any individual outcome actually being 0). Such distributions are generally described with the help of [probability density functions](https://en.wikipedia.org/wiki/Probability_density_function).
### In this notebook, we discuss the most important distributions
* **Bernoulli distribution**
* **Binomial distribution**
* **Poisson distribution**
* **Normal distribution**
#### Some Essential Terminologies
* __Mode__: for a discrete random variable, the value with highest probability (the location at which the probability mass function has its peak); for a continuous random variable, a location at which the probability density function has a local peak.
* __Support__: the smallest closed set whose complement has probability zero.
* __Head__: the range of values where the pmf or pdf is relatively high.
* __Tail__: the complement of the head within the support; the large set of values where the pmf or pdf is relatively low.
* __Expected value or mean__: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof.
* __Median__: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half.
* __Variance__: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution.
* __Standard deviation__: the square root of the variance, and hence another measure of dispersion.
* __Symmetry__: a property of some distributions in which the portion of the distribution to the left of a specific value is a mirror image of the portion to its right.
* __Skewness__: a measure of the extent to which a pmf or pdf "leans" to one side of its mean. The third standardized moment of the distribution.
* __Kurtosis__: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution.
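The quantities listed above can all be computed numerically; the short sketch below uses `numpy` and `scipy.stats` on an arbitrary sample of standard normal draws, just to connect the terminology to code:
```
import numpy as np
from scipy import stats

sample = np.random.normal(loc=0.0, scale=1.0, size=10000)  # arbitrary example data

print("Mean:", np.mean(sample))
print("Median:", np.median(sample))
print("Variance:", np.var(sample))
print("Standard deviation:", np.std(sample))
print("Skewness:", stats.skew(sample))
print("Kurtosis (excess):", stats.kurtosis(sample))  # approximately 0 for a normal distribution
```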

## Bernoulli distribution
The Bernoulli distribution, named after Swiss mathematician [Jacob Bernoulli](https://en.wikipedia.org/wiki/Jacob_Bernoulli), is the probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 − p$ — i.e., the probability distribution of any single experiment that asks a ___yes–no question___; the question results in a boolean-valued outcome, a single bit of information whose value is success/yes/true/one with probability $p$ and failure/no/false/zero with probability $q$. This distribution has only two possible outcomes and a single trial.
It can be used to represent a coin toss where 1 and 0 would represent "head" and "tail" (or vice versa), respectively. In particular, unfair coins would have $p ≠ 0.5$.
The probability mass function $f$ of this distribution, over possible outcomes $k$, is
$${\displaystyle f(k;p)={\begin{cases}p&{\text{if }}k=1,\\[6pt]1-p&{\text{if }}k=0.\end{cases}}}$$
```
import numpy as np
from matplotlib import pyplot as plt
from numpy import random
import seaborn as sns
from scipy.stats import bernoulli
```
#### Generate random variates
```
# p=0.5 i.e. fair coin
s=bernoulli.rvs(p=0.5,size=10)
s
plt.hist(s)
# p=0.2 i.e. more tails than heads
bernoulli.rvs(p=0.2,size=10)
# p=0.8 i.e. more heads than tails
bernoulli.rvs(p=0.8,size=10)
```
#### Mean, variance, skew, and kurtosis
```
print("A fair coin is spinning...\n"+"-"*30)
pr=0.5 # Fair coin toss probability
mean, var, skew, kurt = bernoulli.stats(p=pr, moments='mvsk')
print("Mean:",mean)
print("Variance:",var)
print("Skew:",skew)
print("Kurtosis:",kurt)
print("\nNow a biased coin is spinning...\n"+"-"*35)
pr=0.7 # Biased coin toss probability
mean, var, skew, kurt = bernoulli.stats(p=pr, moments='mvsk')
print("Mean:",mean)
print("Variance:",var)
print("Skew:",skew)
print("Kurtosis:",kurt)
```
#### Standard deviation, mean, median
```
print("\nA biased coin with likelihood 0.3 is spinning...\n"+"-"*50)
pr=0.3
print("Std. dev:",bernoulli.std(p=pr))
print("Mean:",bernoulli.mean(p=pr))
print("Median:",bernoulli.median(p=pr))
```
## Binomial distribution
The Binomial Distribution can instead be thought of as the sum of outcomes of an event following a Bernoulli distribution. The Binomial Distribution is therefore used in binary outcome events, and the probability of success and failure is the same in all the successive trials. This distribution takes two parameters as inputs: the number of times an event takes place and the probability assigned to one of the two classes.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. A simple example of a Binomial Distribution in action can be the toss of a biased/unbiased coin repeated a certain amount of times.
In general, if the random variable $X$ follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly $k$ successes in $n$ trials is given by the probability mass function:
$${\Pr(k;n,p)=\Pr(X=k)={n \choose k}p^{k}(1-p)^{n-k}}$$
for k = 0, 1, 2, ..., n, where
$${\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}}}$$
```
from scipy.stats import binom
```
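Before generating samples, we can sanity-check the pmf formula above by evaluating $\binom{n}{k}p^k(1-p)^{n-k}$ directly and comparing it with `binom.pmf`; the particular values of $n$, $p$ and $k$ below are arbitrary:
```
from math import comb
from scipy.stats import binom

n, p, k = 8, 0.25, 2                            # arbitrary example values
manual = comb(n, k) * p**k * (1 - p)**(n - k)   # the pmf formula above
print("Manual pmf:", manual)
print("binom.pmf: ", binom.pmf(k, n, p))        # should match the manual value
```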
#### Generate random variates
8 coins are flipped (or 1 coin is flipped 8 times), each with a probability of success (1) of 0.25. This trial/experiment is repeated 10 times.
```
k=binom.rvs(8,0.25,size=10)
print("Number of success for each trial:",k)
print("Average of the success:", np.mean(k))
sns.distplot(binom.rvs(n=10, p=0.5, size=1000), hist=True, kde=False)
plt.show()
print("A fair coin is spinning 5 times\n"+"-"*35)
pr=0.5 # Fair coin toss probability
n=5
mean, var, skew, kurt = binom.stats(n=n,p=pr, moments='mvsk')
print("Mean:",mean)
print("Variance:",var)
print("Skew:",skew)
print("Kurtosis:",kurt)
print("\nNow a biased coin is spinning 5 times...\n"+"-"*45)
pr=0.7 # Biased coin toss probability
n=5
mean, var, skew, kurt = binom.stats(n=n,p=pr, moments='mvsk')
print("Mean:",mean)
print("Variance:",var)
print("Skew:",skew)
print("Kurtosis:",kurt)
```
#### Standard deviation, mean, median
```
n=5
pr=0.7
print("\n{} biased coins with likelihood {} are spinning...\n".format(n,pr)+"-"*50)
print("Std. dev:",binom.std(n=n,p=pr))
print("Mean:",binom.mean(n=n,p=pr))
print("Median:",binom.median(n=n,p=pr))
```
#### Visualize the probability mass function (pmf)
```
n=40
pr=0.5
rv = binom(n,pr)
x=np.arange(0,41,1)
pmf1 = rv.pmf(x)
n=40
pr=0.15
rv = binom(n,pr)
x=np.arange(0,41,1)
pmf2 = rv.pmf(x)
n=50
pr=0.6
rv = binom(n,pr)
x=np.arange(0,41,1)
pmf3 = rv.pmf(x)
plt.figure(figsize=(12,6))
plt.title("Probability mass function: $\\binom{n}{k}\, p^k (1-p)^{n-k}$\n",fontsize=20)
plt.scatter(x,pmf1)
plt.scatter(x,pmf2)
plt.scatter(x,pmf3,c='k')
plt.legend(["$n=40, p=0.5$","$n=40, p=0.3$","$n=50, p=0.6$"],fontsize=15)
plt.xlabel("Number of successful trials ($k$)",fontsize=15)
plt.ylabel("Probability of success",fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.grid(True)
plt.show()
```
## Poisson Distribution
The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space, given the average rate at which these events occur.
Poisson Distributions are for example frequently used by insurance companies to conduct risk analysis (eg. predict the number of car crash accidents within a predefined time span) to decide car insurance pricing.
Other examples that may follow a Poisson distribution include:
* The number of phone calls received by a call center per hour
* The number of patients arriving in an emergency room between 10 and 11 pm
```
from scipy.stats import poisson
```
#### Display probability mass function (pmf)
An event can occur 0, 1, 2, … times in an interval. The average number of events in an interval is designated $\lambda$. This is the event rate, also called the rate parameter. The probability of observing k events in an interval is given by the equation
${\displaystyle P(k{\text{ events in interval}})=e^{-\lambda }{\frac {\lambda ^{k}}{k!}}}$
where,
* $\lambda$ is the average number of events per interval
* $e$ is Euler's number (2.71828...), the base of the natural logarithms
* $k$ takes values 0, 1, 2, …
* $k! = k \times (k-1) \times (k-2) \times \ldots \times 2 \times 1$ is the factorial of $k$.
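Since this subsection is about the pmf itself, the short sketch below plots $P(k)$ with `poisson.pmf` for a few values of $\lambda$ (the rates chosen here are just examples):
```
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import poisson

k = np.arange(0, 20)
plt.figure(figsize=(10, 5))
for la in [1, 4, 10]:                               # arbitrary example rates
    plt.plot(k, poisson.pmf(k, mu=la), marker='o', label="$\\lambda={}$".format(la))
plt.xlabel("k (number of events)")
plt.ylabel("P(k)")
plt.title("Poisson probability mass function")
plt.legend()
plt.show()
```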
#### Generate random variates
```
la=5
r = poisson.rvs(mu=la, size=20)
print("Random variates with lambda={}: {}".format(la,r))
la=0.5
r = poisson.rvs(mu=la, size=20)
print("Random variates with lambda={}: {}".format(la,r))
data_poisson = poisson.rvs(mu=3, size=10000)
sns.distplot(data_poisson, kde=False)
plt.show()
print("For small lambda\n"+"-"*25)
la=0.5
mean, var, skew, kurt = poisson.stats(mu=la, moments='mvsk')
print("Mean:",mean)
print("Variance:",var)
print("Skew:",skew)
print("Kurtosis:",kurt)
print("\nNow for large lambda\n"+"-"*30)
la=5
mean, var, skew, kurt = poisson.stats(mu=la, moments='mvsk')
print("Mean:",mean)
print("Variance:",var)
print("Skew:",skew)
print("Kurtosis:",kurt)
```
#### Standard deviation, mean, median
```
la=5
print("For lambda = {}\n-------------------------".format(la))
print("Std. dev:",poisson.std(mu=la))
print("Mean:",poisson.mean(mu=la))
print("Median:",poisson.median(mu=la))
```
#### For the complete list of functions and methods please [see this link](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.poisson.html#scipy.stats.poisson).
## Normal (Gaussian) distribution
In probability theory, the normal (or Gaussian or Gauss or Laplace–Gauss) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
The normal distribution is useful because of the **[central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem)**. In its most general form, under some conditions (which include finite variance), it states that **averages of samples of observations of random variables independently drawn from independent distributions converge in distribution to the normal**, that is, they become normally distributed when the number of observations is sufficiently large.
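As a quick numerical illustration of the central limit theorem (a sketch with arbitrary sample sizes, not tied to any dataset used later in this notebook), the histogram of means of many samples drawn from a clearly non-normal distribution (here, the exponential) already looks approximately Gaussian:
```
import numpy as np
from matplotlib import pyplot as plt

rng = np.random.default_rng(0)
# 5000 sample means, each computed from 50 exponential draws
sample_means = rng.exponential(scale=1.0, size=(5000, 50)).mean(axis=1)

plt.hist(sample_means, bins=50, density=True)
plt.title("Means of exponential samples (n=50) are approximately normal")
plt.xlabel("Sample mean")
plt.ylabel("Density")
plt.show()
```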
Physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have distributions that are nearly normal. Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.
### PDF
The probability density function (PDF) is given by,
$$ f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}} $$
where,
- $\mu$ is the mean or expectation of the distribution (and also its median and mode),
- $\sigma$ is the standard deviation, and $\sigma^2$ is the variance.
```
from scipy.stats import norm
x = np.linspace(-3, 3, num = 100)
constant = 1.0 / np.sqrt(2*np.pi)
pdf_normal_distribution = constant * np.exp((-x**2) / 2.0)
fig, ax = plt.subplots(figsize=(10, 5));
ax.plot(x, pdf_normal_distribution);
ax.set_ylim(0);
ax.set_title('Normal Distribution', size = 20);
ax.set_ylabel('Probability Density', size = 20)
mu, sigma = 0.5, 0.1
s = np.random.normal(mu, sigma, 1000)
# create the bins and the histogram
count, bins, ignored = plt.hist(s, 20, density=True)
# plot the distribution curve
plt.plot(bins, 1/(sigma*np.sqrt(2*np.pi))*np.exp( -(bins - mu)**2 / (2*sigma**2)), linewidth = 3, color = "y")
plt.show()
a1 = np.random.normal(loc=0,scale=np.sqrt(0.2),size=100000)
a2 = np.random.normal(loc=0,scale=1.0,size=100000)
a3 = np.random.normal(loc=0,scale=np.sqrt(5),size=100000)
a4 = np.random.normal(loc=-2,scale=np.sqrt(0.5),size=100000)
plt.figure(figsize=(8,5))
plt.hist(a1,density=True,bins=100,color='blue',alpha=0.5)
plt.hist(a2,density=True,bins=100,color='red',alpha=0.5)
plt.hist(a3,density=True,bins=100,color='orange',alpha=0.5)
plt.hist(a4,density=True,bins=100,color='green',alpha=0.5)
plt.xlim(-7,7)
plt.show()
```
## References
https://www.w3schools.com/python/numpy_random_normal.asp
https://towardsdatascience.com/probability-distributions-in-data-science-cce6e64873a7
https://statisticsbyjim.com/basics/probabilitydistributions/
https://bolt.mph.ufl.edu/6050-6052/unit-3b/binomial-random-variables/
# Modes of the Ball-Channel Pendulum Linear Model
```
import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as plt
from resonance.linear_systems import BallChannelPendulumSystem
%matplotlib widget
```
An (almost) premade system is available in `resonance`. The only thing missing is the function that calculates the canonical coefficients.
```
sys = BallChannelPendulumSystem()
sys.constants
sys.states
def can_coeffs(mp, mb, l, g, r):
M = np.array([[mp * l**2 + mb * r**2, -mb * r**2],
[-mb * r**2, mb * r**2]])
C = np.zeros((2, 2))
K = np.array([[g * l * mp, g * mb * r],
[g * mb * r, g * mb * r]])
return M, C, K
sys.canonical_coeffs_func = can_coeffs
```
Once the system is completely defined the mass, damping, and stiffness matrices can be calculated and inspected:
```
M, C, K = sys.canonical_coefficients()
M
C
K
```
## Convert to mass normalized form (calculate $\tilde{\mathbf{K}}$)
First calculate the Cholesky lower triangular decomposition matrix of $\mathbf{M}$, which is symmetric and positive definite.
```
L = la.cholesky(M)
L
```
The transpose can be computed with `np.transpose()`, `L.transpose()` or `L.T` for short:
```
np.transpose(L)
L.transpose()
L.T
```
Check that $\mathbf{L}\mathbf{L}^T$ returns $M$. Note that in Python the `@` operator is used for matrix multiplication. The `*` operator will do elementwise multiplication.
```
L @ L.T
```
`inv()` computes the inverse, giving $\left(\mathbf{L}^T\right)^{-1}$:
```
la.inv(L.T)
```
$\mathbf{L}^{-1}\mathbf{M}\left(\mathbf{L}^T\right)^{-1} = \mathbf{I}$. Note that the off-diagonal terms are very small numbers. The reason these are not precisely zero is due to floating point arithmetic and the associated truncation errors.
```
la.inv(L) @ M @ la.inv(L.T)
```
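One quick way to confirm that the off-diagonal terms are just floating point noise (a small addition, using the `M`, `L`, and `la` objects defined above) is `np.allclose`:
```
# Should return True: the product equals the identity up to floating point tolerance
np.allclose(la.inv(L) @ M @ la.inv(L.T), np.eye(2))
```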
$\tilde{\mathbf{K}} = \mathbf{L}^{-1}\mathbf{K}\left(\mathbf{L}^T\right)^{-1}$. Note that this matrix is symmetric. It is guaranteed to be symmetric if $\mathbf{K}$ is symmetric.
```
Ktilde = la.inv(L) @ K @ la.inv(L.T)
Ktilde
```
The entries of $\tilde{\mathbf{K}}$ can be accessed as so:
```
k11 = Ktilde[0, 0]
k12 = Ktilde[0, 1]
k21 = Ktilde[1, 0]
k22 = Ktilde[1, 1]
```
# Calculate the eigenvalues of $\tilde{\mathbf{K}}$
The eigenvalues of this 2 x 2 matrix are found by forming the characteristic equation from:
$$\textrm{det}\left(\tilde{\mathbf{K}} - \lambda \mathbf{I}\right) = 0$$
and solving the resulting quadratic polynomial for its roots, which are the eigenvalues.
```
lam1 = (k11 + k22) / 2 + np.sqrt((k11 + k22)**2 - 4 * (k11 * k22 - k12*k21)) / 2
lam1
lam2 = (k11 + k22) / 2 - np.sqrt((k11 + k22)**2 - 4 * (k11 * k22 - k12*k21)) / 2
lam2
```
# Calculate the eigenfrequencies of the system
$\omega_i = \sqrt{\lambda_i}$
```
omega1 = np.sqrt(lam1)
omega1
omega2 = np.sqrt(lam2)
omega2
```
And in Hertz:
```
fn1 = omega1/2/np.pi
fn1
fn2 = omega2/2/np.pi
fn2
```
# Calculate the eigenvectors of $\tilde{\mathbf{K}}$
The eigenvectors can be found by substituting the value for $\lambda$ into:
$$\tilde{\mathbf{K}}\hat{q}_0 = \lambda \hat{q}_0$$
and solving for $\hat{q}_0$.
```
v1 = np.array([-k12 / (k11 - lam1), 1])
v2 = np.array([-k12 / (k11 - lam2), 1])
```
Check that they are orthogonal, i.e. the dot product should be zero.
```
np.dot(v1, v2)
```
The `norm()` function calculates the Euclidean norm, i.e. the vector's magnitude and the vectors can be normalized like so:
```
v1_hat = v1 / np.linalg.norm(v1)
v2_hat = v2 / np.linalg.norm(v2)
v1_hat
v2_hat
np.linalg.norm(v1_hat)
```
For any size $\tilde{\mathbf{K}}$ the `eig()` function can be used to calculate the eigenvalues and the normalized eigenvectors with one function call:
```
evals, evecs = np.linalg.eig(Ktilde)
evals
evecs
```
The columns of `evecs` correspond to the entries of `evals`.
```
P = evecs
P
```
If P contains columns that are orthonormal, then $\mathbf{P}^T \mathbf{P} = \mathbf{I}$. Check this with:
```
P.T @ P
```
$\mathbf{P}$ can be used to find the matrix $\Lambda$ that decouples the differential equations.
```
Lam = P.T @ Ktilde @ P
Lam
```
# Formulate solution to ODEs (simulation)
The trajectory of the coordinates can be found with:
$$
\bar{c}(t) = \sum_{i=1}^n c_i \sin(\omega_i t + \phi_i) \bar{u}_i
$$
where
$$
\phi_i = \arctan \frac{\omega_i \hat{q}_{0i}^T \bar{q}(0)}{\hat{q}_{0i}^T \dot{\bar{q}}(0)}
$$
and
$$
c_i = \frac{\hat{q}^T_{0i} \bar{q}(0)}{\sin\phi_i}
$$
$c_i$ are the modal participation factors and reflect what proportion of each mode is excited given specific initial conditions. If the initial conditions are the eigenmode $\bar{u}_i$, then all but the $i$th $c_i$ will be zero.
A matrix $\mathbf{S} = \left(\mathbf{L}^T\right)^{-1}\mathbf{P} = \begin{bmatrix}\bar{u}_1 \quad \bar{u}_2\end{bmatrix}$ can be computed such that the columns are $\bar{u}_i$.
```
S = la.inv(L.T) @ P
S
u1 = S[:, 0]
u2 = S[:, 1]
u1
u2
```
Define the initial coordinates as a scalar factor of the second eigenvector, which sets these values to small angles.
```
c0 = S[:, 1] / 400
np.rad2deg(c0)
```
Set the initial speeds to zero:
```
s0 = np.zeros(2)
s0
```
The initial mass normalized coordinates and speeds are then:
```
q0 = L.T @ c0
q0
qd0 = L.T @ s0
qd0
```
Calculate the modal frequencies in radians per second.
```
ws = np.sqrt(evals)
ws
```
The phase shifts for each mode can be found. Note that it is important to use `arctan2()` so that the quadrant and thus sign of the arc tangent is properly handled.
$$
\phi_i = \arctan \frac{\omega_i \hat{q}_{0i}^T \bar{q}(0)}{\hat{q}_{0i}^T \dot{\bar{q}}(0)}
$$
```
phi1 = np.arctan2(ws * P[:, 0] @ q0, P[:, 0] @ qd0)
phi1
phi2 = np.arctan2(ws * P[:, 1] @ q0, P[:, 1] @ qd0)
phi2
```
All $\phi$'s can be calculated in one line using NumPy's broadcasting feature:
```
phis = np.arctan2(ws * P.T @ q0, P.T @ qd0)
phis
```
The phase shifts for this particular initial condition are $\pm90$ degrees.
```
np.rad2deg(phis)
```
Now calculate the modal participation factors.
$$
c_i = \frac{\hat{q}^T_{0i} \bar{q}(0)}{\sin\phi_i}
$$
```
cs = P.T @ q0 / np.sin(phis)
cs
```
Note that the first participation factor is zero. This is because we've set the initial coordinate to be a scalar multiple of the second eigenvector.
## Simulate
```
t = np.linspace(0, 5, num=500)
cs[1] * np.sin(ws[1] * t)
```
The following line will give an error because the dimensions of `u1` are not compatible with the dimensions of the preceding portion. It is possible for a single line to work like this if you take advantage of NumPy's broadcasting rules. See https://scipy-lectures.org/intro/numpy/operations.html#broadcasting for more info. The `tile()` function is used to repeat `u1` as many times as needed.
```
# cs[1] * np.sin(ws[1] * t) * u1
c1 = cs[1] * np.sin(ws[1] * t) * np.tile(u1, (len(t), 1)).T
c1.shape
```
`tile()` can be used to create a 2 x 1000 vector that repeats the vector $\hat{u}_i$ allowing a single line to calculate the mode contribution.
Now use a loop to calculate the contribution of each mode and build the summation of contributions from each mode:
```
ct = np.zeros((2, len(t))) # 2 x m array to hold coordinates as a function of time
for ci, wi, phii, ui in zip(cs, ws, phis, S.T):
print(ci, wi, phii, ui)
ct += ci * np.sin(wi * t + phii) * np.tile(ui, (len(t), 1)).T
def sim(c0, s0, t):
"""Returns the time history of the coordinate vector, c(t) given the initial state and time.
Parameters
==========
c0 : ndarray, shape(n,)
s0 : ndarray, shape(n,)
t : ndarray, shape(m,)
Returns
=======
c(t) : ndarray, shape(n, m)
"""
q0 = L.T @ c0
qd0 = L.T @ s0
ws = np.sqrt(evals)
phis = np.arctan2(ws * P.T @ q0, P.T @ qd0)
cs = P.T @ q0 / np.sin(phis)
    c = np.zeros((2, len(t)))
for ci, wi, phii, ui in zip(cs, ws, phis, S.T):
c += ci * np.sin(wi * t + phii) * np.tile(ui, (len(t), 1)).T
return c
```
Simulate and plot the first mode:
```
t = np.linspace(0, 5, num=1000)
c0 = S[:, 0] / np.max(S[:, 0]) * np.deg2rad(10)
s0 = np.zeros(2)
fig, ax = plt.subplots()
ax.plot(t, np.rad2deg(sim(c0, s0, t).T))
ax.set_xlabel('Time [s]')
ax.set_ylabel('Angle [deg]')
ax.legend([r'$\theta$', r'$\phi$'])
```
Simulate and plot the second mode:
```
t = np.linspace(0, 5, num=1000)
c0 = S[:, 1] / np.max(S[:, 1]) * np.deg2rad(10)
s0 = np.zeros(2)
fig, ax = plt.subplots()
ax.plot(t, np.rad2deg(sim(c0, s0, t).T))
ax.set_xlabel('Time [s]')
ax.set_ylabel('Angle [deg]')
ax.legend([r'$\theta$', r'$\phi$'])
```
Compare this to the free response from the system:
```
sys.coordinates['theta'] = c0[0]
sys.coordinates['phi'] = c0[1]
sys.speeds['alpha'] = 0
sys.speeds['beta'] = 0
traj = sys.free_response(5.0)
traj[['theta', 'phi']].plot()
sys.animate_configuration(fps=30, repeat=False)
```
Simulate with arbitrary initial conditions.
```
sys.coordinates['theta'] = np.deg2rad(12.0)
sys.coordinates['phi'] = np.deg2rad(3.0)
traj = sys.free_response(5.0)
traj[['theta', 'phi']].plot()
```
# U.S. Border Patrol Nationwide Apprehensions by Citizenship and Sector
**Data Source:** [CBP Apprehensions](https://www.cbp.gov/sites/default/files/assets/documents/2021-Aug/USBORD~3.PDF) <br>
**Download the Output:** [here](../data/extracted_data/)
## Overview
The source PDF is a large and complex PDF with varying formats across pages. This notebook demonstrates how to extract all data from this PDF into a single structured table.
Though not explored in this notebook, there are many other PDFs which could be extracted, including many more that CBP posts on their website. This code can be used to extract data from PDFs and convert them into a more usable format (either within Python, or a csv).
**See**: dataset source: https://www.cbp.gov/newsroom/media-resources/stats <br>
## Technical Approach
We download our PDF of interest and then use [tabula](https://github.com/chezou/tabula-py) and a good deal of custom Python code to process all pages of the PDF into a single structured table that can be used for further analysis.
## Skills Learned
1. How to download a PDF
2. How to use tabula to extract data from a complex pdf
3. How to deal with errors generated in the extraction process
4. How to clean up and format final output table
## The Code
**PLEASE NOTE**: We have made this notebook READ only to ensure you receive all updates we make to it. Do not edit this notebook directly, create a copy instead.
To customize and experiment with this notebook:
1. Create a copy: `Select File -> Make a Copy` at the top-left of the notebook
2. Unlock cells in your copy: Press `CMD + A` on your keyboard to select all cells, then click the small unlocked padlock button near the mid-top right of the notebook.
```
import logging
import logging.config
from pathlib import Path
import pandas as pd
import requests
import tabula
from tabula.io import read_pdf
from PyPDF2 import PdfFileReader
pd.set_option("max_rows", 400)
# Below just limits warnings that can be ignored
logging.config.dictConfig(
{
"version": 1,
"disable_existing_loggers": True,
}
)
```
---------
# 1. Download PDF
Let's first download the [PDF](https://www.cbp.gov/sites/default/files/assets/documents/2021-Aug/USBORD~3.PDF) we want to extract data from.
**Below we pass the:**
* Path to the pdf file on the internet
* What we want to call it
* And the folder we want to save the file to
```
def download_pdf(url, name, output_folder):
"""
Function to download a single pdf file from a provided link.
Parameters:
url: Url of the file you want to download
name: name label you want to apply to the file
output_folder: Folder path to save the file
Returns:
Saves the file to the output directory, function itself returns nothing.
Example:
download_pdf(
'https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics/immigrant-visa-statistics/monthly-immigrant-visa-issuances.html',
'July 2020 - IV Issuances by Post and Visa Class',
'visa_test/'
)
"""
output_folder = Path(output_folder)
response = requests.get(url)
if response.status_code == 200:
# Write content in pdf file
outpath = output_folder / f"{name}.pdf"
pdf = open(str(outpath), "wb")
pdf.write(response.content)
pdf.close()
print("File ", f"{name}.pdf", " downloaded")
else:
print("File ", f"{name}.pdf", " not found.")
```
Now call our function
```
download_pdf(
"https://www.cbp.gov/sites/default/files/assets/documents/2021-Aug/USBORD~3.PDF", # <- the url
"US Border Patrol Nationwide Apps by Citizenship & Sector", # <- our name for it
"../data/raw_source_files/", # <- Output directory
)
```
**We have now downloaded the file locally**
We will create variable to store path to local PDF file path
```
pdf_path = "../data/raw_source_files/US Border Patrol Nationwide Apps by Citizenship & Sector.pdf"
```
## 2. Reviewing the PDF and Preparing to Extract Data
This file is somewhat hard to extract data from. The columns contain merged fields, sub-headings, etc. Also, if you scroll through the whole file, you will see that the table format changes somewhat. Therefore, we are going to hardcode the actual column names we are interested in. Below we see an image of the first table in the pdf.

Since it is hard to capture the correct column names, below we create a variable called `cols` where we save the columns names we will use in our table. These columns refer to citizenship of the person, where they were encountered and different aggregations based on border location (SW, North, Coast).
```
cols = [
"citizenship",
"bbt",
"drt",
"elc",
"ept",
"lrt",
"rgv",
"sdc",
"tca",
"yum",
"sbo_total", # SBO
"blw",
"bun",
"dtm",
"gfn",
"hlt",
"hvm",
"spw",
"swb",
"nbo_total",
"mip",
"nll",
"rmy",
"cbo_total",
"total",
]
```
-------
## 3. Extracting the Data
Below we have a bunch of code that will iterate through the PDF pages and extract data. We know this is a lot but suggest reviewing the comments in the code (anything starting with a #) to get a sense of what is going on.
**Now run the process**
```
print("*Starting Process")
def fix_header_pages(df):
df.columns = cols
df = df.drop([0, 1], axis=0)
return df
# List to store the tables we encounter
tables = []
# Dataframe to store table segments
table_segments = pd.DataFrame()
# Start on page 1 (PDF is not zero indexed like python but regular indexed .. starts with 1 not 0)
start = 1
# Read the pdf with PdfFileReader to get the number of pages
stop = PdfFileReader(pdf_path).getNumPages() + 1
# Something to count the number of table swe encounter
table_num = -1
for page_num in range(start, stop):
print(f" **Processing Page: {page_num} of {stop}")
new_table = False # New tables are where a new year starts (2007, 2008, etc)
# Extract data using tabula
df = read_pdf(
pdf_path, pages=f"{page_num}", lattice=True, pandas_options={"header": None}
)[0]
# If it is AFGHANISTAN we have a new table
if "AFGHANISTAN" in df.loc[2][0]:
new_table = True
table_num += 1
# If CITIZENSHIP is in the first row - its a header not data so we want to remove
if "CITIZENSHIP" in df.loc[0][0]:
df = fix_header_pages(df) # Mixed formats in this pdf
else:
df.columns = cols
# Check for errors
check_for_error = df[df.citizenship.str.isdigit()]
if len(check_for_error) > 0:
# If there was an error we try to fix it with some special tabula arguments
fixed = False
missing_country_df = read_pdf(
pdf_path,
pages=f"{page_num}",
stream=True,
area=(500, 5.65, 570, 5.65 + 800),
pandas_options={"header": None},
)[0]
missing_country = missing_country_df.tail(1)[0].squeeze()
print(
f" *** --> ERROR!! pg:{page_num}, country={missing_country}, review table_num={table_num} in tables (list object) - if not fixed automatically"
)
if missing_country_df.shape[1] == df.shape[1]:
fixed = True
print(" *** --> --> !! Success - Likely Fixed Automatically")
missing_country_df.columns = cols
df.loc[check_for_error.index[0]] = missing_country_df.iloc[-1]
if not fixed:
df.loc[
check_for_error.index[0], "citizenship"
] = f" *** -->ERROR - {missing_country}"
# Check if new table
if page_num != start and new_table:
tables.append(table_segments)
table_segments = df
else:
table_segments = table_segments.append(df)
tables.append(table_segments)
tables = [table.reset_index(drop=True) for table in tables if len(table) > 0]
print("*Process Complete")
```
### Manual Fixes
Above, we see that there were 3 errors.
1. pg: 35, Syria
2. pg: 37, Ireland
3. pg: 38, Unknown
We were able to fix `#2` automatically but `#1` and `#3` need manual correction.
If you are wondering why these were not collected correctly it is because on pg 35, 37 and 38 the table is missing a strong black line at the bottom of the table.
Tabula uses strong lines to differentiate data from other parts of the pdf. Below we see the pg 35, Syria example.
Ireland was fixed automatically by using some different arguments for the python tabula package. In that instance it worked and allowed for automatically correcting the data; for Syria and Unknown, though, it was not successful.

We can examine the actual data by reviewing the table in the `tables` list.
```
example = tables[12].reset_index()
example.iloc[117:120]
```
Above we look at table `#12` which refers to FY2018, and specifically the end of page 35 and the beginning of page 36. We see that SYRIA has no information. But if we look at the pdf (see image above) it does have information.
Therefore we will have to correct this manually.
**Below is just a list of values that provides the information that was not collected for Syria on pg 35**
```
syria_correct = [
"SYRIA",
0,
0,
0,
1,
2,
0,
0,
0,
0,
3,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
3,
]
len(syria_correct)
```
**And then the Unknown countries for page 38**
```
unknown_correct = [
"UNNKOWN",
0,
1,
0,
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
]
len(unknown_correct)
```
**We grab the table and then assign the correct data to that row**
Fix Syria
```
# the value assigned to tbl_index corresponds to the table_num value shown in our error message for each country
tbl_index = 11
tables[tbl_index].loc[
tables[tbl_index][tables[tbl_index].citizenship.str.contains("SYRIA")].index[0]
] = syria_correct
```
Fix Unknown
```
tbl_index = 12
tables[tbl_index].loc[
tables[tbl_index][tables[tbl_index].citizenship.str.contains("UNKNOWN")].index[0]
] = unknown_correct
```
-----------
## 4. Clean Up Tables
We need to remove commas from numbers and convert string numbers to actual integer values. Below we can see that there are many cell values with `,` present.
```
tables[0][tables[0].total.str.contains(",")]
```
We will also create a dictionary with the cleaned tables and better labels
```
# Get just the specific station/crossing columns (not totals)
station_cols = [
i
for i in cols
if i not in ["citizenship", "sbo_total", "nbo_total", "cbo_total", "total"]
]
total_cols = ["sbo_total", "nbo_total", "cbo_total", "total"]
def clean_tables(df):
df = df.fillna(0).reset_index(drop=True)
df["total"] = [
int(i.replace(",", "")) if isinstance(i, str) else i for i in df["total"]
]
for c in station_cols + total_cols:
df.loc[:, c] = [
int(i.replace(",", "")) if isinstance(i, str) else i for i in df[c]
]
return df
data = {
f"total_apprehensions_FY{idx+7:02}": clean_tables(df)
for idx, df in enumerate(tables)
}
```
**Here are the keys in the dictionary - they relate to the specific `FY-Year` of the data**
```
data.keys()
```
**Sanity Check**
We can compare the `TOTAL` column to the actual summed row totals to see if the data was extracted correctly
```
table_name = "total_apprehensions_FY19"
totals = data[table_name].query('citizenship == "TOTAL"')
pd.concat(
[data[table_name].query('citizenship != "TOTAL"').sum(axis=0), totals.T], axis=1
)
```
Looks pretty good!
## Combine the data into a single dataframe
We will create a single dataframe but will add two columns, one (`label`) that will store the file key, and two (`year`) the fiscal year.
```
combined = pd.DataFrame()
for k in data:
tmp = data[k]
tmp["label"] = k
combined = combined.append(tmp)
combined["year"] = combined.label.apply(lambda x: int(f"20{x[-2:]}"))
combined
combined.citizenship = [str(i) for i in combined.citizenship]
```
**Export file to csv**
```
combined.to_csv("../data/extracted_data/cbp-apprehensions-nov2021.csv")
```
-----------
# Appendix
## Visualizations
### Sample Visualization
Now that we have the data in a usable format, we can also visualize the data. One visualization we can make is a graph of apprehensions by citizenship.
```
pd.pivot(
index="year",
columns="citizenship",
values="total",
data=combined[
combined.citizenship.isin(
combined.groupby("citizenship")
.sum()
.sort_values("total", ascending=False)
.head(6)
.index.tolist()
)
],
).plot(
figsize=(15, 8),
marker="o",
color=["yellow", "red", "blue", "black", "gray", "orange"],
title="FY07-19 Total Apprehensions by Citizenship at US Borders",
)
```
# End