markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
merge validation pdfs created so far | # ! pip install PyPDF2
# ! ls -lh /home/jovyan/*.pdf
pdf_list = ['/home/jovyan/global_mean_tasmax_370.pdf',
'/home/jovyan/tasmax_max_bias_corrected.pdf',
'/home/jovyan/tasmax_max_cmip6.pdf',
'/home/jovyan/tasmax_max_downscaled.pdf',
'/home/jovyan/tasmax_mean_bias_corrected.pdf',
'/home/jovyan/tasmax_mean_cmip6.pdf',
'/home/jovyan/tasmax_mean_downscaled.pdf']
merge_validation_pdfs(pdf_list, '/home/jovyan/test_validation.pdf') | _____no_output_____ | MIT | notebooks/downscaling_pipeline/global_validation.ipynb | brews/downscaleCMIP6 |
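The `merge_validation_pdfs` helper is defined elsewhere in that notebook and is not shown in this excerpt. Purely as an illustration (an assumption, not the notebook's actual implementation), a helper like it can be built on `PyPDF2.PdfFileMerger`, which the first line installs:

```python
from PyPDF2 import PdfFileMerger  # named PdfMerger in newer PyPDF2 releases

def merge_validation_pdfs_sketch(pdf_paths, out_path):
    """Concatenate a list of PDF files into a single output PDF."""
    merger = PdfFileMerger()
    for path in pdf_paths:
        merger.append(path)   # append every page of this PDF, in order
    merger.write(out_path)
    merger.close()
```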
CSX46 - Class 19 - MCODEIn this notebook, we will analyze a simple graph (`test.dot`) and then the Krogan network using the MCODE community detection algorithm. | import pygraphviz
import igraph
import numpy
import pandas
import sys
from collections import defaultdict
test_graph = FILL IN HERE
nodes = test_graph.nodes()
edges = FILL IN HERE
test_igraph = FILL IN HERE
test_igraph.summary()
igraph.drawing.plot(FILL IN HERE) | _____no_output_____ | Apache-2.0 | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology |
Function `mcode` takes a graph adjacency list `adj_list` and a float parameter `vwp` (vertex weight probability), and returns a list of cluster assignments (of length equal to the number of clusters). Original code from True Price at UNC Chapel Hill [link to original code](https://github.com/trueprice/python-graph-clustering/blob/master/src/mcode.py). | def mcode(adj_list, vwp):
# Stage 1: Vertex Weighting
N = len(adj_list)
edges = [[]]*N
weights = dict((v, 1.) for v in range(0,N))
edges=defaultdict(set)
for i in range(0,N):
edges[i] = # MAKE A SET FROM adj_list[i]
res_clusters = []
for i,v in enumerate(edges):
neighborhood = # union of set((v,)) and edges[v]
# if node has only one neighbor, we know everything we need to know
if len(neighborhood) <= 2: continue
k = 1
while neighborhood:
k_core = # copy neighborhood object
invalid_nodes = True
while invalid_nodes and neighborhood:
invalid_nodes = set(
n for n in neighborhood if len(edges[n] & neighborhood) <= k)
# remove invalid_nodes from neighborhood
#increment k by one
# vertex weight = k-core number * density of k-core
weights[v] = (k-1) * (sum(len(edges[n] & k_core) for n in k_core) /
(2. * len(k_core)**2))
# Stage 2: Molecular Complex Prediction
unvisited = set(edges)
num_clusters = 0
for seed in sorted(weights, key=weights.get, reverse=True):
if seed not in unvisited: continue
cluster, frontier = set((seed,)), set((seed,))
w = weights[seed] * vwp
while frontier:
cluster.update(frontier)
# remove frontier from unvisited
frontier_plus_neighbors = set.union(*(edges[n] for n in frontier))
frontier = set(
n for n in frontier_plus_neighbors & unvisited if weights[n] > w)
# haircut: only keep 2-core complexes
invalid_nodes = True
while invalid_nodes and cluster:
invalid_nodes = set(n for n in cluster if len(edges[n] & cluster) < 2)
# remove invalid_nodes from cluster
if cluster:
# make a list from `cluster` and add that list to `res_clusters`
num_clusters += 1
return(res_clusters) | _____no_output_____ | Apache-2.0 | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology |
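As a quick illustration of the interface described above (this toy call is not part of the class template, and it assumes the blanks in `mcode` have been completed): the adjacency list is simply a list where entry `i` holds the neighbours of vertex `i`.

```python
# a triangle (0, 1, 2) with a pendant vertex 3 attached to vertex 2
toy_adj_list = [[1, 2], [0, 2], [0, 1, 3], [2]]
toy_clusters = mcode(toy_adj_list, vwp=0.8)
print(toy_clusters)   # a list of clusters, each cluster a list of vertex indices;
                      # the dense triangle {0, 1, 2} should come back as a single cluster
```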
Run mcode on the adjacency list for your toy graph, with vwp=0.8. How many clusters did it find? Do the cluster memberships make sense? Load the Krogan et al. network edge-list data as a Pandas data frame | edge_list = pandas.read_csv("shared/krogan.sif",
sep="\t",
names=["protein1","protein2"]) | _____no_output_____ | Apache-2.0 | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology |
Make an igraph graph and print its summary | krogan_graph = FILL IN HERE
krogan_graph.summary() | _____no_output_____ | Apache-2.0 | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology |
Run mcode on your graph with vwp=0.1 | res = FILL IN HERE | _____no_output_____ | Apache-2.0 | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology |
Get the cluster sizes | FILL IN HERE | _____no_output_____ | Apache-2.0 | class19_MCODE_python3_template.ipynb | curiositymap/Networks-in-Computational-Biology |
Test Hypothesis by Simulating Statistics Mini-Lab 1: Hypothesis Testing Welcome to your next mini-lab! Go ahead and run the following cell to get started. You can do that by clicking on the cell and then clicking `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell. | from datascience import *
import numpy as np
import otter
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
grader = otter.Notebook("m7_l1_tests") | _____no_output_____ | MIT | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking |
In the previous two labs we've analyzed some data regarding COVID-19 test cases. Let's continue to analyze this data, specifically _claims_ about this data. Once again, we'll be using fictitious statistics from Blockeley University.Let's say that Blockeley data science faculty are looking at the spread of COVID-19 across the realm of Minecraft. We have very specific data about Blockeley and the rest of Cubefornia but other realms' data isn't as clear cut or detailed. Let's say that a neighboring village has been reporting a COVID-19 infection rate of 26%. Should we trust these numbers?Regardless of whether or not you believe these claims, the job of a data scientist is to definitively substantiate or disprove such claims with data. You have access to the test results of a similar-sized village nearby and come up with the brilliant idea of running a hypothesis test with this data. Let's go ahead and load it! Run the cell below to import this data. If you want to explore this data further, go ahead and group by both columns! An empty cell is provided for you to do this. | test_results = Table.read_table("../datasets/covid19_village_tests.csv")
test_results.show(5)
... | _____no_output_____ | MIT | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking |
From here we can formulate our **Null Hypothesis** and **Alternate Hypothesis** Our *null hypothesis* is that this village truly has a 26% infection rate amongst the population. Our *alternate hypothesis* is that this village does not in actuality have a 26% infection rate - it's way too low. Now we need our test statistic. Since we're looking at the infection rate in the population, our test statistic should be:$$\text{Test Statistic} = \frac{\text{Number of Positive Cases}}{\text{Total Number of Cases}}$$We've started the function declaration for you. Go ahead and complete `proportion_positive` to calculate this test statistic.*Note*: Check out `np.count_nonzero` and the built-in `len` function! These should be helpful for you. | def proportion_positive(test_results):
numerator = ...
denominator = ...
return numerator / denominator
grader.check("q1") | _____no_output_____ | MIT | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking |
If you grouped by `Village Number` before, you would realize that there are roughly 3000 tests per village. Let's now create functions that will randomly take 3000 tests from the `test_results` table and to apply our test statistic. Complete the `sample_population` and `apply_statistic` functions below!The `sample_population` function will take a `population_table` that is a table with all the data we want and will return a new table that has been sampled from this `population_table`. Please note that `with_replacement` should be `False`.The `apply_statistic` function will take in a `sample_table` which is the table full of samples taken from a population table, a `column_name` which is the name of the column containing the data of interest, and a `statistic_function` which will be the test statistic that we will use. This function will return the result of using the `statistic_function` on the `sample_table`. | def sample_population(population_table):
sampled_population = ...
return sampled_population
def apply_statistic(sample_table, column_name, statistic_function):
return statistic_function(...)
grader.check("q2") | _____no_output_____ | MIT | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking |
Now for the simulation portion! Complete the for loop below and fill in a reasonable number for the `iterations` variable. The `iterations` variable will determine just how many random samples that we will take in order to test our hypotheses. There is also code that will visualize your simulation and give you data regarding your simulation vs. the null hypothesis. | # Simulation code below. Fill out this portion!
iterations = ...
simulations = make_array()
for iteration in np.arange(iterations):
sample_table = ...
test_statistic = ...
simulations = np.append(simulations, test_statistic)
# This code is to tell you what percentage of our simulations are at or below the null hypothesis
# There's no need to fill anything out but it is good to understand what's going on!
null_hypothesis = 0.26
num_below = np.count_nonzero(simulations <= null_hypothesis) / iterations
print(f"Out of the {iterations} simulations, roughly {round(num_below * 100, 2)}% of test statistics " +
f"are less than our null hypothesis of a {null_hypothesis * 100}% infection rate.")
# This code is to graph your simulation data and where our null hypothesis lies
# There's no need to fill anything out but it is good to understand what's going on!
simulation_table = Table().with_column("Simulated Test Statistics", simulations)
simulation_table.hist(bins=20)
plots.scatter(null_hypothesis, 0, color='red', s=30);
grader.check("q3") | _____no_output_____ | MIT | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking |
Given our hypothesis test, what can you conclude about the village that reports having a 26% COVID-19 infection rate? Has your hypothesis changed before? Do you now trust or distrust these numbers? And if you do distrust these numbers, what do you think went wrong in the reporting? Congratulations on finishing! Run the next cell to make sure that you passed all of the test cases. | grader.check_all() | _____no_output_____ | MIT | minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb | garath/inferentialthinking |
Given running cost $g(x_t,u_t)$ and terminal cost $h(x_T)$, the finite horizon $(t=0 \ldots T)$ optimal control problem seeks to find the optimal control, $$u^*_{1:T} = \text{argmin}_{u_{1:T}} L(x_{1:T},u_{1:T})$$ $$u^*_{1:T} = \text{argmin}_{u_{1:T}} h(x_T) + \sum_{t=0}^T g(x_t,u_t)$$subject to the dynamics constraint: $x_{t+1} = f(x_t,u_t)$.This notebook provides a dirty, brute-force solution to problems of this form, using the inverted pendulum as an example, and assuming the dynamics are not known a priori. First, we gather (state, action, next state) pairs, and use these to train a surrogate neural network dynamics model, $x_{t+1} \sim \hat{f}(x_t,u_t)$, approximating the true dynamics $f$.We'll then set up a shooting-based trajectory optimisation problem, rolling out using the surrogate dynamics $\hat{f}$ for a sequence of controls $u^*_{1:T}$, evaluating the cost, then taking gradient steps to minimise this, adjusting the values of the control. We'll use pytorch and Adam to accomplish this. We'll do this in a continuous control setting, but note that this is a practically infeasible control strategy, because solving this sort of optimisation online (and with any convergence guarantees) within the bandwidth of an inverted pendulum is a stretch. | # NN parameters
Nsamples = 10000
epochs = 500
latent_dim = 1024
batch_size = 8
lr = 3e-4
# Torch environment wrapping gym pendulum
torch_env = Pendulum()
# Test parameters
Nsteps = 100
# Set up model (fully connected neural network)
model = FCN(latent_dim=latent_dim,d=torch_env.d,ud=torch_env.ud)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# Load previously trained model
model.load_state_dict(torch.load('./fcn.npy'))
# Or gather some training data
states_, actions, states = torch_env.get_data(Nsamples)
dset = H5Dataset(np.array(states_),np.array(actions),np.array(states))
sampler = DataLoader(dset, batch_size=batch_size, shuffle=True)
# and train model
losses = []
for epoch in range(epochs):
batch_losses = []
for states_,actions,states in sampler:
recon_x = model(states_,actions)
loss = model.loss_fn(recon_x,states)
optimizer.zero_grad()
loss.backward()
optimizer.step()
batch_losses.append(loss.item())
losses.append(np.mean(batch_losses))
plt.cla()
plt.semilogy(losses)
display.clear_output(wait=True)
display.display(plt.gcf())
torch.save(model.state_dict(),'./fcn.npy')
# Test model rollouts - looks reasonable
states = []
_states = []
s = torch_env.env.reset()
states.append(s)
_states.append(s.copy())
for i in range(30):
a = torch_env.env.action_space.sample()
s,r,_,_ = torch_env.env.step(a) # take a random action
states.append(s)
# roll-out with model
_s = model(torch.from_numpy(_states[-1]).float().reshape(1,-1),torch.from_numpy(a).float().reshape(1,-1))
_states.append(_s.detach().numpy())
plt.cla()
plt.plot(np.array(states),'--')
plt.plot(np.vstack(_states))
display.clear_output(wait=True)
display.display(plt.gcf())
# Set up optimal controller
controller = OptControl(model.dynamics, torch_env.running_cost, torch_env.term_cost, u_dim=torch_env.ud, umax=torch_env.umax, horizon=30,lr=1e-1)
# Uncomment to use true dynamics
# controller = OptControl(torch_env.dynamics, torch_env.running_cost, torch_env.term_cost, u_dim=torch_env.ud, umax=torch_env.umax, horizon=30,lr=1e-1)
# Test controller
plt.figure(figsize=(15,5))
s = torch_env.env.reset()
for i in range(Nsteps):
u,states,cost,costs = controller.minimize(torch.from_numpy(s).reshape(1,-1).float(),Nsteps=5) #OC
    s,r,_,_ = torch_env.env.step(u[:,0].detach().numpy()) # apply the first control of the optimised sequence
torch_env.env.render()
plt.clf()
plt.subplot(1,3,1)
plt.plot(u.detach().numpy().T,'--')
plt.ylim(-2,2)
plt.ylabel('Controls')
plt.subplot(1,3,2)
plt.plot(np.squeeze(torch.stack(states).detach().numpy()))
plt.legend({'Tip x','Tip y','Velocity'})
plt.ylim(-8,8)
plt.subplot(1,3,3)
plt.plot(np.squeeze(torch.stack(costs).detach().numpy()))
plt.ylabel('Cost')
plt.ylim(0,15)
display.clear_output(wait=True)
display.display(plt.gcf())
torch_env.env.close() | _____no_output_____ | MIT | Model-based-OC-shooting.ipynb | mgb45/OC-notebooks |
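The `Pendulum`, `FCN` and `OptControl` classes used above are defined elsewhere in the repository and are not shown in this excerpt. The sketch below is only an illustration of what a shooting-style `minimize` step is assumed to do, following the description at the top of the notebook: treat the control sequence as a trainable tensor, roll it out through the (surrogate) dynamics, sum the running and terminal costs, and let Adam adjust the controls. The function and argument names mirror the constructor call above but the internals are an assumption, not the author's actual implementation.

```python
import torch

def shooting_update(x0, u, dynamics, running_cost, term_cost, optimizer, umax):
    """One Adam step on an open-loop control sequence u of shape (u_dim, horizon)."""
    optimizer.zero_grad()
    x, cost = x0, torch.zeros(1)
    for t in range(u.shape[1]):
        u_t = umax * torch.tanh(u[:, t:t + 1].T)  # keep controls inside [-umax, umax]
        cost = cost + running_cost(x, u_t)        # accumulate running cost g(x_t, u_t)
        x = dynamics(x, u_t)                      # roll out x_{t+1} = f_hat(x_t, u_t)
    cost = cost + term_cost(x)                    # add terminal cost h(x_T)
    cost.backward()                               # gradients flow back through the whole rollout
    optimizer.step()
    return cost.detach()

# usage sketch:
# u = torch.zeros(u_dim, horizon, requires_grad=True)
# opt = torch.optim.Adam([u], lr=1e-1)
# for _ in range(Nsteps):
#     shooting_update(x0, u, model.dynamics, torch_env.running_cost, torch_env.term_cost, opt, torch_env.umax)
```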
Generating scientific paper titles: a weak baseline. Source: https://github.com/bentrevett/pytorch-seq2seq | # If you are running this notebook on Colab,
# run the following lines to load the dlnlputils library:
# !git clone https://github.com/Samsung-IT-Academy/stepik-dl-nlp.git
# import sys; sys.path.append('/content/stepik-dl-nlp')
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchtext.data import Field, BucketIterator
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import spacy
import random
import math
import time
SEED = 1234
random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# you may need to download the SpaCy models for English beforehand
# !python -m spacy download en
spacy_en = spacy.load('en')
def tokenize(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text) if not tok.text.isspace()]
from torchtext import data, vocab
tokenizer = data.get_tokenizer('spacy')
TEXT = Field(tokenize=tokenize,
init_token = '<sos>',
eos_token = '<eos>',
include_lengths = True,
lower = True)
%%time
trn_data_fields = [("src", TEXT),
("trg", TEXT)]
dataset = data.TabularDataset(
path='datasets/train.csv',
format='csv',
skip_header=True,
fields=trn_data_fields
)
train_data, valid_data, test_data = dataset.split(split_ratio=[0.98, 0.01, 0.01])
TEXT.build_vocab(train_data, min_freq = 7)
print(f"Unique tokens in vocabulary: {len(TEXT.vocab)}")
BATCH_SIZE = 32
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
sort_within_batch = True,
sort_key = lambda x : len(x.src),
device = device)
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout):
super().__init__()
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)
self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, src, src_len):
#src = [src sent len, batch size]
#src_len = [src sent len]
embedded = self.dropout(self.embedding(src))
#embedded = [src sent len, batch size, emb dim]
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, src_len)
packed_outputs, hidden = self.rnn(packed_embedded)
#packed_outputs is a packed sequence containing all hidden states
#hidden is now from the final non-padded element in the batch
outputs, _ = nn.utils.rnn.pad_packed_sequence(packed_outputs)
#outputs is now a non-packed sequence, all hidden states obtained
# when the input is a pad token are all zeros
#outputs = [sent len, batch size, hid dim * num directions]
#hidden = [n layers * num directions, batch size, hid dim]
#hidden is stacked [forward_1, backward_1, forward_2, backward_2, ...]
#outputs are always from the last layer
#hidden [-2, :, : ] is the last of the forwards RNN
#hidden [-1, :, : ] is the last of the backwards RNN
#initial decoder hidden is final hidden state of the forwards and backwards
# encoder RNNs fed through a linear layer
hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))
#outputs = [sent len, batch size, enc hid dim * 2]
#hidden = [batch size, dec hid dim]
return outputs, hidden
class Attention(nn.Module):
def __init__(self, enc_hid_dim, dec_hid_dim):
super().__init__()
self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim)
self.v = nn.Parameter(torch.rand(dec_hid_dim))
def forward(self, hidden, encoder_outputs, mask):
#hidden = [batch size, dec hid dim]
#encoder_outputs = [src sent len, batch size, enc hid dim * 2]
#mask = [batch size, src sent len]
batch_size = encoder_outputs.shape[1]
src_len = encoder_outputs.shape[0]
#repeat encoder hidden state src_len times
hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
encoder_outputs = encoder_outputs.permute(1, 0, 2)
#hidden = [batch size, src sent len, dec hid dim]
#encoder_outputs = [batch size, src sent len, enc hid dim * 2]
energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim = 2)))
#energy = [batch size, src sent len, dec hid dim]
energy = energy.permute(0, 2, 1)
#energy = [batch size, dec hid dim, src sent len]
#v = [dec hid dim]
v = self.v.repeat(batch_size, 1).unsqueeze(1)
#v = [batch size, 1, dec hid dim]
attention = torch.bmm(v, energy).squeeze(1)
#attention = [batch size, src sent len]
attention = attention.masked_fill(mask == 0, -1e10)
return F.softmax(attention, dim = 1)
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention):
super().__init__()
self.output_dim = output_dim
self.attention = attention
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)
self.out = nn.Linear((enc_hid_dim * 2) + dec_hid_dim + emb_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, encoder_outputs, mask):
#input = [batch size]
#hidden = [batch size, dec hid dim]
#encoder_outputs = [src sent len, batch size, enc hid dim * 2]
#mask = [batch size, src sent len]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
a = self.attention(hidden, encoder_outputs, mask)
#a = [batch size, src sent len]
a = a.unsqueeze(1)
#a = [batch size, 1, src sent len]
encoder_outputs = encoder_outputs.permute(1, 0, 2)
#encoder_outputs = [batch size, src sent len, enc hid dim * 2]
weighted = torch.bmm(a, encoder_outputs)
#weighted = [batch size, 1, enc hid dim * 2]
weighted = weighted.permute(1, 0, 2)
#weighted = [1, batch size, enc hid dim * 2]
rnn_input = torch.cat((embedded, weighted), dim = 2)
#rnn_input = [1, batch size, (enc hid dim * 2) + emb dim]
output, hidden = self.rnn(rnn_input, hidden.unsqueeze(0))
#output = [sent len, batch size, dec hid dim * n directions]
#hidden = [n layers * n directions, batch size, dec hid dim]
#sent len, n layers and n directions will always be 1 in this decoder, therefore:
#output = [1, batch size, dec hid dim]
#hidden = [1, batch size, dec hid dim]
#this also means that output == hidden
assert (output == hidden).all()
embedded = embedded.squeeze(0)
output = output.squeeze(0)
weighted = weighted.squeeze(0)
output = self.out(torch.cat((output, weighted, embedded), dim = 1))
#output = [bsz, output dim]
return output, hidden.squeeze(0), a.squeeze(1)
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, pad_idx, sos_idx, eos_idx, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.pad_idx = pad_idx
self.sos_idx = sos_idx
self.eos_idx = eos_idx
self.device = device
def create_mask(self, src):
mask = (src != self.pad_idx).permute(1, 0)
return mask
def forward(self, src, src_len, trg, teacher_forcing_ratio = 0.5):
#src = [src sent len, batch size]
#src_len = [batch size]
#trg = [trg sent len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use teacher forcing 75% of the time
if trg is None:
assert teacher_forcing_ratio == 0, "Must be zero during inference"
inference = True
trg = torch.zeros((100, src.shape[1])).long().fill_(self.sos_idx).to(src.device)
else:
inference = False
batch_size = src.shape[1]
max_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)
#tensor to store attention
attentions = torch.zeros(max_len, batch_size, src.shape[0]).to(self.device)
#encoder_outputs is all hidden states of the input sequence, back and forwards
#hidden is the final forward and backward hidden states, passed through a linear layer
encoder_outputs, hidden = self.encoder(src, src_len)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
mask = self.create_mask(src)
#mask = [batch size, src sent len]
for t in range(1, max_len):
#insert input token embedding, previous hidden state, all encoder hidden states
# and mask
#receive output tensor (predictions), new hidden state and attention tensor
output, hidden, attention = self.decoder(input, hidden, encoder_outputs, mask)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#place attentions in a tensor holding attention value for each input token
attentions[t] = attention
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
#if doing inference and next token/prediction is an eos token then stop
if inference and input.item() == self.eos_idx:
return outputs[:t], attentions[:t]
return outputs, attentions
INPUT_DIM = len(TEXT.vocab)
OUTPUT_DIM = len(TEXT.vocab)
ENC_EMB_DIM = 128
DEC_EMB_DIM = 128
ENC_HID_DIM = 64
DEC_HID_DIM = 64
ENC_DROPOUT = 0.8
DEC_DROPOUT = 0.8
PAD_IDX = TEXT.vocab.stoi['<pad>']
SOS_IDX = TEXT.vocab.stoi['<sos>']
EOS_IDX = TEXT.vocab.stoi['<eos>']
attn = Attention(ENC_HID_DIM, DEC_HID_DIM)
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)
model = Seq2Seq(enc, dec, PAD_IDX, SOS_IDX, EOS_IDX, device).to(device)
def init_weights(m):
for name, param in m.named_parameters():
if 'weight' in name:
nn.init.normal_(param.data, mean=0, std=0.01)
else:
nn.init.constant_(param.data, 0)
model.apply(init_weights)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX) | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
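Written out, the `Attention` module above is the additive (Bahdanau-style) score with padding masked out: for decoder state $s_{t-1}$ and encoder output $h_j$,

$$e_{t,j} = v^\top \tanh\big(W\,[s_{t-1};\,h_j]\big), \qquad e_{t,j} \leftarrow -10^{10}\ \text{if } x_j \text{ is a pad token}, \qquad \alpha_{t,j} = \frac{\exp(e_{t,j})}{\sum_{k}\exp(e_{t,k})},$$

and the decoder consumes the context vector $c_t = \sum_j \alpha_{t,j} h_j$ together with the embedding of the previous token.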
Training the model | import matplotlib
matplotlib.rcParams.update({'figure.figsize': (16, 12), 'font.size': 14})
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
def train(model, iterator, optimizer, criterion, clip, train_history=None, valid_history=None):
model.train()
epoch_loss = 0
history = []
for i, batch in enumerate(iterator):
src, src_len = batch.src
trg, trg_len = batch.trg
optimizer.zero_grad()
output, attetion = model(src, src_len, trg)
#trg = [trg sent len, batch size]
#output = [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1])
trg = trg[1:].view(-1)
#trg = [(trg sent len - 1) * batch size]
#output = [(trg sent len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
history.append(loss.cpu().data.numpy())
if (i+1)%10==0:
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8))
clear_output(True)
ax[0].plot(history, label='train loss')
ax[0].set_xlabel('Batch')
ax[0].set_title('Train loss')
if train_history is not None:
ax[1].plot(train_history, label='general train history')
ax[1].set_xlabel('Epoch')
if valid_history is not None:
ax[1].plot(valid_history, label='general valid history')
plt.legend()
plt.show()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src, src_len = batch.src
trg, trg_len = batch.trg
output, attention = model(src, src_len, trg, 0) #turn off teacher forcing
#trg = [trg sent len, batch size]
#output = [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1])
trg = trg[1:].view(-1)
#trg = [(trg sent len - 1) * batch size]
#output = [(trg sent len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
MODEL_NAME = 'models/lstm_baseline.pt'
N_EPOCHS = 5
CLIP = 1
train_history = []
valid_history = []
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP, train_history, valid_history)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), MODEL_NAME)
train_history.append(train_loss)
valid_history.append(valid_loss)
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}') | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Finally, we load the parameters from our best validation loss and get our results on the test set. | # for cpu usage
model.load_state_dict(torch.load(MODEL_NAME, map_location=torch.device('cpu')))
# for gpu usage
# model.load_state_dict(torch.load(MODEL_NAME))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |') | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Generating titles | def translate_sentence(model, tokenized_sentence):
model.eval()
tokenized_sentence = ['<sos>'] + [t.lower() for t in tokenized_sentence] + ['<eos>']
numericalized = [TEXT.vocab.stoi[t] for t in tokenized_sentence]
sentence_length = torch.LongTensor([len(numericalized)]).to(device)
tensor = torch.LongTensor(numericalized).unsqueeze(1).to(device)
translation_tensor_logits, attention = model(tensor, sentence_length, None, 0)
translation_tensor = torch.argmax(translation_tensor_logits.squeeze(1), 1)
translation = [TEXT.vocab.itos[t] for t in translation_tensor]
translation, attention = translation[1:], attention[1:]
return translation, attention
def display_attention(sentence, translation, attention):
fig = plt.figure(figsize=(30,50))
ax = fig.add_subplot(111)
attention = attention.squeeze(1).cpu().detach().numpy().T
cax = ax.matshow(attention, cmap='bone')
ax.tick_params(labelsize=12)
ax.set_yticklabels(['']+['<sos>']+[t.lower() for t in sentence]+['<eos>'])
ax.set_xticklabels(['']+translation, rotation=80)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
plt.close()
example_idx = 100
src = vars(train_data.examples[example_idx])['src']
trg = vars(train_data.examples[example_idx])['trg']
print(f'src = {src}')
print(f'trg = {trg}')
translation, attention = translate_sentence(model, src)
print(f'predicted trg = {translation}')
display_attention(src, translation, attention)
for example_idx in range(100):
src = vars(test_data.examples[example_idx])['src']
trg = vars(test_data.examples[example_idx])['trg']
translation, attention = translate_sentence(model, src)
    print('Original title: ', ' '.join(trg))
    print('Predicted title: ', ' '.join(translation))
print('-----------------------------------')
example_idx = 0
src = vars(valid_data.examples[example_idx])['src']
trg = vars(valid_data.examples[example_idx])['trg']
print(f'src = {src}')
print(f'trg = {trg}')
translation, attention = translate_sentence(model, src)
print(f'predicted trg = {translation}')
display_attention(src, translation, attention)
example_idx = 510
src = vars(test_data.examples[example_idx])['src']
trg = vars(test_data.examples[example_idx])['trg']
print(f'src = {src}')
print(f'trg = {trg}')
translation, attention = translate_sentence(model, src)
print(f'predicted trg = {translation}')
display_attention(src, translation, attention) | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Computing BLEU on train.csv (its held-out test split) | import nltk
n_gram_weights = [0.3334, 0.3333, 0.3333]
test_len = len(test_data)
original_texts = []
generated_texts = []
macro_bleu = 0
for example_idx in range(test_len):
src = vars(test_data.examples[example_idx])['src']
trg = vars(test_data.examples[example_idx])['trg']
translation, _ = translate_sentence(model, src)
original_texts.append(trg)
generated_texts.append(translation)
bleu_score = nltk.translate.bleu_score.sentence_bleu(
[trg],
translation,
weights = n_gram_weights
)
macro_bleu += bleu_score
macro_bleu /= test_len
# averaging sentence-level BLEU (i.e. macro-average precision)
print('Macro-average BLEU (LSTM): {0:.5f}'.format(macro_bleu)) | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
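The loop above macro-averages sentence-level BLEU. As an optional cross-check (not required for the Kaggle metric), NLTK also offers corpus-level BLEU, which pools n-gram counts over the whole test split and is less punishing on very short titles; a small sketch reusing the `original_texts` and `generated_texts` lists collected above:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

corpus_score = corpus_bleu(
    [[ref] for ref in original_texts],   # each hypothesis is scored against a list of references
    generated_texts,
    weights=n_gram_weights,
    smoothing_function=SmoothingFunction().method1,  # avoid zero scores when an n-gram order is absent
)
print('Corpus-level BLEU (LSTM): {0:.5f}'.format(corpus_score))
```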
Making a Kaggle submission | import pandas as pd
submission_data = pd.read_csv('datasets/test.csv')
abstracts = submission_data['abstract'].values | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Generating titles for the test data: | titles = []
for abstract in abstracts:
title, _ = translate_sentence(model, abstract.split())
titles.append(' '.join(title).replace('<unk>', '')) | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Writing the generated titles to a comma-separated (CSV) file: | submission_df = pd.DataFrame({'abstract': abstracts, 'title': titles})
submission_df.to_csv('datasets/predicted_titles.csv', index=False) | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Using the `generate_csv` script, we convert the predictions file (`predicted_titles.csv`) into the format required for submission to the Kaggle competition: | from create_submission import generate_csv
generate_csv('datasets/predicted_titles.csv', 'datasets/kaggle_pred.csv', 'datasets/vocs.pkl')
!wc -l datasets/kaggle_pred.csv
!head datasets/kaggle_pred.csv | _____no_output_____ | MIT | task11_kaggle/lstm_baseline.ipynb | yupopov/stepik-dl-nlp |
Basic Apache Spark Analysis
- Ref: https://timw.info/ply
- Notebook tutorial: https://timw.info/ekt
| # Load NYC Taxi data
df = spark.read.load('abfss://[email protected]/NYCTripSmall.parquet', format='parquet')
display(df.limit(10))
# View the dataframe schema
df.printSchema()
# Load the NYC Taxi data into the Spark nyctaxi database
spark.sql("CREATE DATABASE IF NOT EXISTS nyctaxi")
df.write.mode("overwrite").saveAsTable("nyctaxi.trip")
# Display the taxi data
df = spark.sql("SELECT * FROM nyctaxi.trip")
display(df)
# Analyze the data and save results to nyctaxi.passengercountstats table (select CHART)
df = spark.sql("""
SELECT PassengerCount,
SUM(TripDistanceMiles) as SumTripDistance,
AVG(TripDistanceMiles) as AvgTripDistance
FROM nyctaxi.trip
WHERE TripDistanceMiles > 0 AND PassengerCount > 0
GROUP BY PassengerCount
ORDER BY PassengerCount
""")
display(df)
df.write.saveAsTable("nyctaxi.passengercountstats")
| _____no_output_____ | MIT | demo-resources/Spark-Pool-Notebook.ipynb | vijrqrr9/dp203 |
Germany: LK Kleve (Nordrhein-Westfalen)* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb) | import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Kleve");
# load the data
cases, deaths, region_label = germany_get_region(landkreis="LK Kleve")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table | _____no_output_____ | CC-BY-4.0 | ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb | RobertRosca/oscovida.github.io |
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))-------------------- | print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
| _____no_output_____ | CC-BY-4.0 | ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb | RobertRosca/oscovida.github.io |
MNIST learns your handwritingThis is a small project on using a GAN to generate numbers that look like someone else's handwriting when not trained on all numbers written by this person. For example, say we had someone write the number 273 and we now want to write 481 in their own handwriting.The main inspiration for this project is a paper I read recently called STAR GAN v2. In this paper they try to recognize different styles and features in images and transfer those into a different image. For example, they are able to use images of different animals like dogs or tigers and make them look like a cat. Furthermore, at the time of writing this it is currently a state-of-the-art method for these style translation tasks.Some of the results can be seen at the end of this notebook. Unfortunately it seems not that many features were captured and mostly it was only the thickness of the numbers that was preserved. A reason this happens might be that the size of the images is small, being 28x28. However, some ways to allow for more variation might be by extending the number of layers being used, by having higher dimensional spaces for the latent and style spaces, or by giving a higher weight to the style diversification loss (look at the loss functions section to see more about this).The main purpose of this notebook is to make a small showcase of the architecture used in a simple design so that the ideas are simple to follow. This notebook will also contain some explanations and comments on the architecture of the neural network so that it might be easier to follow.Note: another small thing I did in this project is to 'translate' STAR GAN code from pytorch to tensorflow. Redoing all of the work was useful to understand everything done in their code, and having an option in tensorflow might be useful for some people.For a small tutorial on how to write a simple GAN architecture: https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/ Link to STAR GAN v2: https://app.wandb.ai/stacey/stargan/reports/Cute-Animals-and-Post-Modern-Style-Transfer%3A-StarGAN-v2-for-Multi-Domain-Image-Synthesis---VmlldzoxNzcwODQ Further Reading on style domain techniques for image generation:Link to STAR GAN paper: https://arxiv.org/pdf/1912.01865.pdf Link to Multimodal Unsupervised Image-to-Image Translation: https://arxiv.org/pdf/1804.04732.pdf Link to Improving Style-Content Disentanglement Paper: https://arxiv.org/pdf/2007.04964.pdf Initializing | import tensorflow as tf
from tensorflow_addons.layers import InstanceNormalization
import numpy as np
import tensorflow.keras.layers as layers
import time
from tensorflow.keras.datasets.mnist import load_data
import sys
import os
import datetime | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
LayersThere are a few layers that were custom made. More importantly, it is useful to make these custom layers for the layers that try to incorporate style, since the inputs themselves are custom: you are inputting both an image and a vector representing the style. ResBlk is short for Residual Block, where it is predicting the residual (the difference between the original and the prediction). | class ResBlk(tf.keras.Model):
def __init__(self, dim_in, dim_out, actv=layers.LeakyReLU(),
normalize=False, downsample=False):
super(ResBlk, self).__init__()
self.actv = actv
self.normalize = normalize
self.downsample = downsample
self.learned_sc = dim_in != dim_out
self._build_weights(dim_in, dim_out)
def _build_weights(self, dim_in, dim_out):
self.conv1 = layers.Conv2D(dim_in, 3, padding='same')
self.conv2 = layers.Conv2D(dim_out, 3, padding='same')
if self.normalize:
self.norm1 = InstanceNormalization()
self.norm2 = InstanceNormalization()
if self.learned_sc:
self.conv1x1 = layers.Conv2D(dim_out, 1)
def _shortcut(self, x):
if self.learned_sc:
x = self.conv1x1(x)
if self.downsample:
x = layers.AveragePooling2D(pool_size=(2,2), padding='same')(x)
return x
def _residual(self, x):
if len(tf.shape(x))>4:
x=tf.reshape(x,tf.shape(x)[1:])
if self.normalize:
x = self.norm1(x)
x = self.actv(x)
x = self.conv1(x)
if self.downsample:
x = layers.AveragePooling2D(pool_size=(2,2), padding='same')(x)
if self.normalize:
x = self.norm2(x)
x = self.actv(x)
x = self.conv2(x)
return x
def call(self, x):
x = self._shortcut(x) + self._residual(x)
return x / 2**(1/2) # unit variance | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
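Spelled out, the block above computes

$$\text{ResBlk}(x) = \frac{s(x) + r(x)}{\sqrt{2}},$$

where $s$ is the (possibly 1x1-convolved and downsampled) shortcut branch and $r$ is the normalise-activate-convolve residual branch; dividing by $\sqrt{2}$ keeps the output variance roughly equal to the input variance when the two branches are approximately independent, which is what the `# unit variance` comment refers to.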
AdaIN stands for Adaptive Instance Normalization. It is a type of normalization that allows to 'mix' two inputs. In this case we use the style vector to mix with our input x which is the image or part of the process of constructing this image. | class AdaIn(tf.keras.Model):
def __init__(self, style_dim, num_features):
super(AdaIn,self).__init__()
self.norm = InstanceNormalization()
self.lin = layers.Dense(num_features*2)
def call(self, x, s):
h=self.lin(s)
h=tf.reshape(h, [1, tf.shape(h)[0], 1, tf.shape(h)[1]])
gamma,beta=tf.split(h, 2, axis=3)
return (1+gamma)*self.norm(x)+beta
class AdainResBlk(tf.keras.Model):
def __init__(self, dim_in, dim_out, style_dim=16,
actv=layers.LeakyReLU(), upsample=False):
super(AdainResBlk, self).__init__()
self.actv = actv
self.upsample = upsample
self.learned_sc = dim_in != dim_out
self._build_weights(dim_in, dim_out, style_dim)
def _build_weights(self, dim_in, dim_out, style_dim=16):
self.conv1 = layers.Conv2D(dim_out, 3, padding='same')
self.conv2 = layers.Conv2D(dim_out, 3, padding='same')
self.norm1 = AdaIn(style_dim, dim_in)
self.norm2 = AdaIn(style_dim, dim_out)
if self.learned_sc:
self.conv1x1 = layers.Conv2D(dim_out, 1)
def _shortcut(self, x):
if self.upsample:
x = layers.UpSampling2D(size=(2,2), interpolation='nearest')(x)
if self.learned_sc:
x = self.conv1x1(x)
return x
def _residual(self, x, s):
x = self.norm1(x, s)
x = self.actv(x)
if self.upsample:
x = layers.UpSampling2D(size=(2,2), interpolation='nearest')(x)
x = self.conv1(x)
x = self.norm2(x, s)
x = self.actv(x)
x = self.conv2(x)
return x
def call(self, x, s):
x = self._shortcut(x) + self._residual(x,s)
return x / 2**(1/2) # unit variance | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
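In equation form, the `AdaIn` layer above instance-normalises the feature map and then rescales and shifts it with parameters predicted from the style vector $s$ by the dense layer:

$$\text{AdaIN}(x, s) = \big(1 + \gamma(s)\big)\,\frac{x - \mu(x)}{\sigma(x)} + \beta(s),$$

where $\mu(x)$ and $\sigma(x)$ are the per-channel instance statistics and $[\gamma(s), \beta(s)]$ is the output of `self.lin(s)` split in two. This is how the style information is injected at every decoding block.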
Generator ClassIn the generator we have two steps: one to encode the image into lower-level information and one to decode it back into an image. In this particular architecture the decoding uses the style to build the image back, as style is an important part of that process. The encoding does not use the style, since the style encoder is the separate architecture that deals with producing a style vector for a particular image. | class Generator(tf.keras.Model):
def __init__(self, img_size=28, style_dim=24, dim_in=8, max_conv_dim=128, repeat_num=2):
super(Generator, self).__init__()
self.img_size=img_size
self.from_bw=layers.Conv2D(dim_in, 3, padding='same', input_shape=(1,img_size,img_size,1))
self.encode=[]
self.decode=[]
self.to_bw=tf.keras.Sequential([InstanceNormalization(), layers.LeakyReLU(), layers.Conv2D(1, 1, padding='same')])
for _ in range(repeat_num):
dim_out = min(dim_in*2, max_conv_dim)
self.encode.append(ResBlk(dim_in, dim_out, normalize=True, downsample=True))
self.decode.insert(0, AdainResBlk(dim_out, dim_in, style_dim, upsample=True))
dim_in = dim_out
# bottleneck blocks
for _ in range(2):
self.encode.append(ResBlk(dim_out, dim_out, normalize=True))
self.decode.insert(0, AdainResBlk(dim_out, dim_out, style_dim))
def call(self, x, s):
x = self.from_bw(x)
cache = {}
for block in self.encode:
x = block(x)
for block in self.decode:
x = block(x, s)
return self.to_bw(x) | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Mapping NetworkThe Mapping Network and the Style Encoder are the parts of this architecture that make the difference in allowing style to be analyzed and injected into our images. The mapping network takes as input a latent code (which represents images as a vector in a high-dimensional space) and a domain; in this case the domain is the digit we are representing. The style encoder, in turn, takes as inputs an image and a domain. | class MappingNetwork(tf.keras.Model):
def __init__(self, latent_dim=16, style_dim=24, num_domains=10):
super(MappingNetwork,self).__init__()
map_layers = [layers.Dense(128)]
map_layers += [layers.ReLU()]
for _ in range(2):
map_layers += [layers.Dense(128)]
map_layers += [layers.ReLU()]
self.shared = tf.keras.Sequential(layers=map_layers)
self.unshared = []
for _ in range(num_domains):
self.unshared += [tf.keras.Sequential(layers=[layers.Dense(128),
layers.ReLU(),
layers.Dense(128),
layers.ReLU(),
layers.Dense(128),
layers.ReLU(),
layers.Dense(style_dim)])]
def call(self, z, y):
h = self.shared(z)
out = []
for layer in self.unshared:
out += [layer(h)]
out = tf.stack(out, axis=1) # (batch, num_domains, style_dim)
s = tf.gather(out, y, axis=1) # (batch, style_dim)
return s | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Style EncoderAn important thing to notice about the style encoder is that it takes an image as input and outputs a style vector. Looking at the dimensions of these, we notice we need to flatten the image through the layers. This can usually be done in two ways: by flattening the 2-dimensional input to a 1-dimensional output with a flatten layer, or, as it was done here, by using enough pooling layers so that we downsample our 2-dimensional input until it is one-dimensional. | class StyleEncoder(tf.keras.Model):
def __init__(self, img_size=28, style_dim=24, dim_in=16, num_domains=10, max_conv_dim=128, repeat_num=5):
super(StyleEncoder,self).__init__()
blocks = [layers.Conv2D(dim_in, 3, padding='same')]
for _ in range(repeat_num): #repetition 1 sends to (b,14,14,d) 2 to (b,7,7,d) 3 to (b,4,4,d) 4 to (b,2,2,d) 5 to (b,1,1,d)
dim_out = min(dim_in*2, max_conv_dim)
blocks += [ResBlk(dim_in, dim_out, downsample=True)]
dim_in = dim_out
blocks += [layers.LeakyReLU()]
blocks += [layers.Conv2D(dim_out, 4, padding='same')]
blocks += [layers.LeakyReLU()]
self.shared = tf.keras.Sequential(layers=blocks)
self.unshared = []
for _ in range(num_domains):
self.unshared += [layers.Dense(style_dim)]
def call(self, x, y):
h = self.shared(x)
h = tf.reshape(h,[tf.shape(h)[0], tf.shape(h)[3]])
out = []
for layer in self.unshared:
out += [layer(h)]
out = tf.stack(out, axis=1) # (batch, num_domains, style_dim)
s = tf.gather(out, y, axis=1) # (batch, style_dim)
return s | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
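To make the data flow concrete: a style code can come either from the mapping network (latent-guided synthesis) or from the style encoder applied to a reference image (reference-guided synthesis), and in both cases the generator consumes it the same way. The shape-level sketch below uses freshly constructed, untrained networks purely as a check, with a batch size of 1 (which is what the rest of the notebook effectively uses); passing the domain as a plain integer is an assumption consistent with how `tf.gather` is used above.

```python
G = Generator(img_size=28, style_dim=24)
M = MappingNetwork(latent_dim=16, style_dim=24, num_domains=10)
E = StyleEncoder(img_size=28, style_dim=24, num_domains=10)

x_src = tf.random.normal((1, 28, 28, 1))     # digit whose content we keep
x_ref = tf.random.normal((1, 28, 28, 1))     # digit whose handwriting style we copy
y_trg = 3                                    # target domain (the digit class)

s_lat = M(tf.random.normal((1, 16)), y_trg)  # latent-guided style code, shape (1, 24)
s_ref = E(x_ref, y_trg)                      # reference-guided style code, shape (1, 24)

x_fake = G(x_src, s_ref)                     # stylised output, same spatial size as the input
print(x_fake.shape)                          # expected: (1, 28, 28, 1)
```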
Discriminator ClassSimilarly to the Style encoder the input of the discriminator is an image and we need to downsample it until it is one dimensional. | class Discriminator(tf.keras.Model):
def __init__(self, img_size=28, dim_in=16, num_domains=10, max_conv_dim=128, repeat_num=5):
super(Discriminator, self).__init__()
blocks = [layers.Conv2D(dim_in, 3, padding='same')]
for _ in range(repeat_num): #repetition 1 sends to (b,14,14,d) 2 to (b,7,7,d) 3 to (b,4,4,d) 4 to (b,2,2,d) 5 to (b,1,1,d)
dim_out = min(dim_in*2, max_conv_dim)
blocks += [ResBlk(dim_in, dim_out, downsample=True)]
dim_in = dim_out
blocks += [layers.LeakyReLU()]
blocks += [layers.Conv2D(dim_out, 4, padding='same')]
blocks += [layers.LeakyReLU()]
blocks += [layers.Conv2D(num_domains, 1, padding='same')]
self.main = tf.keras.Sequential(layers=blocks)
def call(self, x, y):
out = self.main(x)
out = tf.reshape(out, (tf.shape(out)[0], tf.shape(out)[3])) # (batch, num_domains)
out = tf.gather(out, y, axis=1) # (batch)
out = tf.reshape(out, [1])
return out | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Loss FunctionsThe loss functions used are an important part of this model, as they describe our goal when training and how to perform gradient descent. The discriminator loss function is the regular adversarial loss L_adv used in a GAN architecture, but on top of it we add three further loss terms. For these loss functions, if you want to see the mathematical formulas I recommend looking at the STAR GAN v2 paper; here I will explain what each loss tries to measure and give a quick description of how it does so. L_sty is a style reconstruction loss. It tries to capture how well the style was captured in our output. It is computed as the expected value of the distance between the target style vector and the style vector that our style encoder predicts for the generated image. L_ds is a style diversification loss. It tries to ensure that the images produced are different, to promote a variety of generated images. It is computed as the expected value of the distance (l_1 norm) between the images generated when using two different styles and the same source. L_cyc is a characteristic-preserving loss. The "cyc" comes from cyclic, as we measure the distance between the original image and the image generated from the generated image, using the style our style encoder provides for the original as input. (Notice we use the image generated from the generated image, so that we apply the generator two times.) In the end the total loss function is expressed as L_adv + lambda_sty * L_sty - lambda_ds * L_ds + lambda_cyc * L_cyc (the diversity term is subtracted because we want the generator to maximise it). | def moving_average(model, model_test, beta=0.999):
for i in range(len(model.weights)):
model_test.weights[i] = (1-beta)*model.weights[i] + beta*model_test.weights[i]
def adv_loss(logits, target):
assert target in [1, 0]
targets = tf.fill(tf.shape(logits), target)
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)(targets, logits)
return loss
def r1_reg(d_out, x_in, g):
# zero-centered gradient penalty for real images
batch_size = tf.shape(x_in)[0]
grad_dout=g.gradient(d_out, x_in)
#grad_dout = tf.gradients(ys=d_out, xs=x_in)
grad_dout2 = tf.square(grad_dout)
grad_dout2 = tf.reshape(grad_dout2,[batch_size, tf.shape(grad_dout2)[1]*tf.shape(grad_dout2)[2]])
reg = 0.5 * tf.math.reduce_mean(tf.math.reduce_sum(grad_dout2, axis=1))
return reg
def compute_d_loss(nets, args, x_real, y_org, y_trg, z_trg=None, x_ref=None):
assert (z_trg is None) != (x_ref is None)
# with real images
with tf.GradientTape() as g:
g.watch(x_real)
out = nets['discriminator'](x_real, y_org)
loss_real = adv_loss(out, 1)
loss_reg = r1_reg(out, x_real, g)
# with fake images
if z_trg is not None:
s_trg = nets['mapping_network'](z_trg, y_trg)
else: # x_ref is not None
s_trg = nets['style_encoder'](x_ref, y_trg)
x_fake = nets['generator'](x_real, s_trg)
out = nets['discriminator'](x_fake, y_trg)
loss_fake = adv_loss(out, 0)
loss = loss_real + loss_fake + args['lambda_reg'] * loss_reg
return loss, {'real': loss_real, 'fake':loss_fake, 'reg':loss_reg}
def compute_g_loss(nets, args, x_real, y_org, y_trg, z_trgs=None, x_refs=None):
assert (z_trgs is None) != (x_refs is None)
if z_trgs is not None:
z_trg, z_trg2 = z_trgs
if x_refs is not None:
x_ref, x_ref2 = x_refs
# adversarial loss
if z_trgs is not None:
s_trg = nets['mapping_network'](z_trg, y_trg)
else:
s_trg = nets['style_encoder'](x_ref, y_trg)
x_fake = nets['generator'](x_real, s_trg)
out = nets['discriminator'](x_fake, y_trg)
loss_adv = adv_loss(out, 1)
# style reconstruction loss
s_pred = nets['style_encoder'](x_fake, y_trg)
loss_sty = tf.math.reduce_mean(tf.abs(s_pred - s_trg))
# diversity sensitive loss
if z_trgs is not None:
s_trg2 = nets['mapping_network'](z_trg2, y_trg)
else:
s_trg2 = nets['style_encoder'](x_ref2, y_trg)
x_fake2 = nets['generator'](x_real, s_trg2)
loss_ds = tf.math.reduce_mean(tf.abs(x_fake - x_fake2))
# cycle-consistency loss
s_org = nets['style_encoder'](x_real, y_org)
x_rec = nets['generator'](x_fake, s_org)
loss_cyc = tf.math.reduce_mean(tf.abs(x_rec - x_real))
loss = loss_adv + args['lambda_sty'] * loss_sty \
- args['lambda_ds'] * loss_ds + args['lambda_cyc'] * loss_cyc
return loss, {'adv':loss_adv, 'sty':loss_sty, 'ds':loss_ds, 'cyc':loss_cyc} | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
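For reference, the objectives implemented above can be written compactly (following the STAR GAN v2 paper, with $G$ the generator, $E$ the style encoder, $\tilde{s}$ a target style code and $\tilde{y}$ a target domain):

$$\mathcal{L}_{sty} = \mathbb{E}\big[\lVert \tilde{s} - E(G(x,\tilde{s}), \tilde{y}) \rVert_1\big], \qquad \mathcal{L}_{ds} = \mathbb{E}\big[\lVert G(x,\tilde{s}_1) - G(x,\tilde{s}_2) \rVert_1\big], \qquad \mathcal{L}_{cyc} = \mathbb{E}\big[\lVert x - G(G(x,\tilde{s}), \hat{s}) \rVert_1\big],$$

with $\hat{s} = E(x, y)$ the style of the original image. The adversarial term $\mathcal{L}_{adv}$ is the usual binary cross-entropy, and `compute_d_loss` adds the zero-centered R1 gradient penalty weighted by $\lambda_{reg}$ on the discriminator side. The full generator objective used in `compute_g_loss` is

$$\mathcal{L}_{adv} + \lambda_{sty}\,\mathcal{L}_{sty} - \lambda_{ds}\,\mathcal{L}_{ds} + \lambda_{cyc}\,\mathcal{L}_{cyc},$$

where the diversity term enters with a minus sign because we want the generator to maximise it.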
The ModelHere we introduce the class Solver, which is the most important class as it represents our whole model. It will instantiate all of our neural networks as well as train the network. | class Solver(tf.keras.Model):
def __init__(self, args):
super(Solver, self).__init__()
self.args = args
self.step=0
self.nets, self.nets_ema = self.build_model(self.args)
# below setattrs are to make networks be children of Solver, e.g., for self.to(self.device)
for name in self.nets.keys():
setattr(self, name, self.nets[name])
for name in self.nets_ema.keys():
setattr(self, name + '_ema', self.nets_ema[name])
if args['mode'] == 'train':
self.optims = {}
for net in self.nets.keys():
self.optims[net] = tf.keras.optimizers.Adam(learning_rate= args['f_lr'] if net == 'mapping_network' else args['lr'],
beta_1=args['beta1'], beta_2=args['beta2'],
epsilon=args['weight_decay'])
self.ckptios = [tf.train.Checkpoint(model=net) for net in self.nets.values()]
self.ckptios += [tf.train.Checkpoint(model=net_ema) for net_ema in self.nets_ema.values()]
self.ckptios += [tf.train.Checkpoint(optimizer=optim) for optim in self.optims.values()]
else:
self.ckptios = [tf.train.Checkpoint(model=net_ema) for net_ema in self.nets_ema.values()]
#for name in self.nets.keys():
# Do not initialize the FAN parameters
# print('Initializing %s...' % name)
#self.nets[name].apply(initializer=tf.keras.initializers.HeNormal)
def build_model(self, args):
generator = Generator(args['img_size'], args['style_dim'])
mapping_network = MappingNetwork(args['latent_dim'], args['style_dim'], args['num_domains'])
style_encoder = StyleEncoder(args['img_size'], args['style_dim'], args['num_domains'])
discriminator = Discriminator(args['img_size'], args['num_domains'])
generator_ema = Generator(args['img_size'], args['style_dim'])
mapping_network_ema = MappingNetwork(args['latent_dim'], args['style_dim'], args['num_domains'])
style_encoder_ema = StyleEncoder(args['img_size'], args['style_dim'], args['num_domains'])
nets = {'generator':generator, 'mapping_network':mapping_network,
'style_encoder':style_encoder, 'discriminator':discriminator}
nets_ema = {'generator':generator_ema, 'mapping_network':mapping_network_ema,
'style_encoder':style_encoder_ema}
        # build each network once on a sample batch (taken from the global `inputs` list)
        # so that Keras creates their weights before training starts
        nets['discriminator'](inputs[0]['x_src'],inputs[0]['y_src'])
s_trg = nets['mapping_network'](inputs[0]['z_trg'],inputs[0]['y_src'])
nets['generator'](inputs[0]['x_src'],s_trg)
nets['style_encoder'](inputs[0]['x_src'], inputs[0]['y_src'])
s_trg = nets_ema['mapping_network'](inputs[0]['z_trg'],inputs[0]['y_src'])
nets_ema['generator'](inputs[0]['x_src'],s_trg)
nets_ema['style_encoder'](inputs[0]['x_src'], inputs[0]['y_src'])
return nets, nets_ema
def save(self):
        for net in self.nets.keys():
            self.nets[net].save_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(self.step)+'.h5')
        for net in self.nets_ema.keys():
            self.nets_ema[net].save_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(self.step)+'_ema.h5')
#for ckptio in self.ckptios:
# ckptio.save(step)
def load(self, step):
self.step= step
        for net in self.nets.keys():
            self.nets[net].load_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(step)+'.h5')
        for net in self.nets_ema.keys():
            self.nets_ema[net].load_weights('MNIST_GAN_2/saved_model/'+net+'step'+str(step)+'_ema.h5')
#for ckptio in self.ckptios:
# ckptio.load(step)
# def _reset_grad(self):
# for optim in self.optims.values():
# optim.zero_grad()
def train(self, inputs, validations):
"""
inputs is a list of dictionaries that contains a source image, a reference image, domain and latent code information used to train the network
validation is a list that contains validation images
"""
args = self.args
nets = self.nets
nets_ema = self.nets_ema
optims = self.optims
inputs_val=validations[0]
# resume training if necessary
if args['resume_iter'] > 0:
self.load(args['resume_iter'])
# remember the initial value of ds weight
initial_lambda_ds = args['lambda_ds']
print('Start training...')
start_time = time.time()
for i in range(args['resume_iter'], args['total_iters']):
self.step+=1
# fetch images and labels
input= inputs[i-args['resume_iter']]
x_real, y_org = input['x_src'], input['y_src']
x_ref, x_ref2, y_trg = input['x_ref'], input['x_ref2'], input['y_ref']
z_trg, z_trg2 = input['z_trg'], input['z_trg2']
#print(1.5)
# train the discriminator
with tf.GradientTape() as g:
g.watch(nets['discriminator'].weights)
d_loss, d_losses_latent = compute_d_loss(
nets, args, x_real, y_org, y_trg, z_trg=z_trg)
#self._reset_grad()
#d_loss.backward()
grad=g.gradient(d_loss, nets['discriminator'].weights)
#optims['discriminator'].get_gradients(d_loss, nets['discriminator'].weights)
optims['discriminator'].apply_gradients(zip(grad, nets['discriminator'].weights))
#print(2)
with tf.GradientTape() as g:
g.watch(nets['discriminator'].weights)
d_loss, d_losses_ref = compute_d_loss(
nets, args, x_real, y_org, y_trg, x_ref=x_ref)
#self._reset_grad()
#d_loss.backward()
grad=g.gradient(d_loss, nets['discriminator'].weights)
optims['discriminator'].apply_gradients(zip(grad, nets['discriminator'].weights))
#print(3)
# train the generator
with tf.GradientTape(persistent=True) as g:
g.watch(nets['generator'].weights)
g.watch(nets['mapping_network'].weights)
g.watch(nets['style_encoder'].weights)
g_loss, g_losses_latent = compute_g_loss(
nets, args, x_real, y_org, y_trg, z_trgs=[z_trg, z_trg2])
#self._reset_grad()
#g_loss.backward()
grad=g.gradient(g_loss, nets['generator'].weights)
optims['generator'].apply_gradients(zip(grad, nets['generator'].weights))
grad=g.gradient(g_loss, nets['mapping_network'].weights)
optims['mapping_network'].apply_gradients(zip(grad, nets['mapping_network'].weights))
grad=g.gradient(g_loss, nets['style_encoder'].weights)
optims['style_encoder'].apply_gradients(zip(grad, nets['style_encoder'].weights))
del g
#print(4)
with tf.GradientTape(persistent=True) as g:
g.watch(nets['generator'].weights)
g_loss, g_losses_ref = compute_g_loss(
nets, args, x_real, y_org, y_trg, x_refs=[x_ref, x_ref2])
#self._reset_grad()
#g_loss.backward()
grad=g.gradient(g_loss, nets['generator'].weights)
optims['generator'].apply_gradients(zip(grad, nets['generator'].weights))
#print(5)
# compute moving average of network parameters
moving_average(nets['generator'], nets_ema['generator'], beta=0.999)
moving_average(nets['mapping_network'], nets_ema['mapping_network'], beta=0.999)
moving_average(nets['style_encoder'], nets_ema['style_encoder'], beta=0.999)
#print(6)
# decay weight for diversity sensitive loss
if args['lambda_ds'] > 0:
args['lambda_ds'] -= (initial_lambda_ds / args['ds_iter'])
# print out log info
if (i+1) % args['print_every'] == 0:
elapsed = time.time() - start_time
elapsed = str(datetime.timedelta(seconds=elapsed))[:-7]
log = "Elapsed time [%s], Iteration [%i/%i], " % (elapsed, i+1, args['total_iters'])
all_losses = {}
for loss, prefix in [(d_losses_latent,'D/latent_'), (d_losses_ref,'D/ref_'),
(g_losses_latent,'G/latent_'), (g_losses_ref,'G/ref_')]:
for key, value in loss.items():
all_losses[prefix + key] = value
all_losses['G/lambda_ds'] = args['lambda_ds']
for key, value in all_losses.items():
if key!= 'G/lambda_ds':
print(log+key, value.numpy())
else:
print(log+key, value)
# generate images for debugging
#if (i+1) % args['sample_every'] == 0:
# os.makedirs(args['sample_dir'], exist_ok=True)
# debug_image(nets_ema, args, inputs=inputs_val, step=i+1)
# save model checkpoints
if (i+1) % args['save_every'] == 0:
self.save()
# self._save_checkpoint(step=i+1)
def sample(self, src, ref):
"""
src source image that we want to modify
ref pair of reference image and domain
generates an image that changes source image into the style of the reference image
"""
args = self.args
nets_ema = self.nets_ema
os.makedirs(args['result_dir'], exist_ok=True)
self.load(args['resume_iter'])
fname = ospj(args['result_dir'], 'reference.jpg')
print('Working on {}...'.format(fname))
translate_using_reference(nets_ema, args, src, ref[0], ref[1], fname) | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Data Loading and Preprocessing | (trainX, trainy), (valX, valy) = load_data()
trainX=tf.reshape(trainX, (60000,1,28,28,1))
valX=tf.reshape(valX, (10000,1,28,28,1))
inputs=[]
latent_dim=8
for i in range(6000):
i=i+36000
if i % 2000==1999:
print(i+1)
input={}
input['x_src']=tf.cast(trainX[i],tf.float32)
input['y_src']=int(trainy[i])
n=np.random.randint(0,60000)
input['x_ref']=tf.cast(trainX[n],tf.float32)
input['x_ref2']=tf.cast(trainX[np.random.randint(0,60000)],tf.float32)
input['y_ref']=int(trainy[n])
input['z_trg']=tf.random.normal((1,latent_dim))
input['z_trg2']=tf.random.normal((1,latent_dim))
inputs.append(input) | 38000
40000
42000
| MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
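A quick sanity check on the prepared data (my addition, not part of the original notebook); it only inspects the structure of one of the training dictionaries that `Solver.train` expects:

```python
# Inspect one training example: images should be (1, 28, 28, 1) float32 tensors,
# domains plain ints, and z_trg/z_trg2 latent codes of shape (1, latent_dim).
example = inputs[0]
for key, value in example.items():
    shape = getattr(value, 'shape', None)
    print(key, type(value).__name__, shape)
```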
Parameters
This dictionary contains the different parameters we use to run the model. | args={'img_size':28,
'style_dim':24,
'latent_dim':16,
'num_domains':10,
'lambda_reg':1,
'lambda_ds':1,
'lambda_sty':10,
'lambda_cyc':10,
'hidden_dim':128,
'resume_iter':0,
'ds_iter':6000,
'total_iters':6000,
'batch_size':8,
'val_batch_size':32,
'lr':1e-4,
'f_lr':1e-6,
'beta1':0,
'beta2':0.99,
'weight_decay':1e-4,
'num_outs_per_domain':4,
'mode': 'train', #train,sample,eval
'seed':0,
'train_img_dir':'GAN/data/train',
'val_img_dir': 'GAN/data/val',
'sample_dir':'GAN/res/samples',
'checkpoint_dir':'GAN/res/checkpoints',
'eval_dir':'GAN/res/eval',
'result_dir':'GAN/res/results',
'src_dir':'GAN/data/src',
'ref_dir':'GAN/data/ref',
'print_every': 500,
'sample_every':200,
'save_every':1000,
'eval_every':1000 } | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Load Model | solv=Solver(args)
solv.build_model(args)
solv.load(96000) | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Training | with tf.device('/device:GPU:0'):
solv.train(inputs, inputs) | Start training...
WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.
WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.
WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.
WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives.
Elapsed time [0:18:08], Iteration [500/6000], D/latent_real 5.933643e-06
Elapsed time [0:18:08], Iteration [500/6000], D/latent_fake 1.3552595e-08
Elapsed time [0:18:08], Iteration [500/6000], D/latent_reg 5.7575995e-05
Elapsed time [0:18:08], Iteration [500/6000], D/ref_real 5.965053e-06
Elapsed time [0:18:08], Iteration [500/6000], D/ref_fake 1.6118185e-16
Elapsed time [0:18:08], Iteration [500/6000], D/ref_reg 5.745954e-05
Elapsed time [0:18:08], Iteration [500/6000], G/latent_adv 18.11265
Elapsed time [0:18:08], Iteration [500/6000], G/latent_sty 0.03147651
Elapsed time [0:18:08], Iteration [500/6000], G/latent_ds 35.073742
Elapsed time [0:18:08], Iteration [500/6000], G/latent_cyc 35.35375
Elapsed time [0:18:08], Iteration [500/6000], G/ref_adv 37.213753
Elapsed time [0:18:08], Iteration [500/6000], G/ref_sty 941.0832
Elapsed time [0:18:08], Iteration [500/6000], G/ref_ds 0.6424262
Elapsed time [0:18:08], Iteration [500/6000], G/ref_cyc 35.773685
Elapsed time [0:18:08], Iteration [500/6000], G/lambda_ds 0.9166666666666758
Elapsed time [0:34:51], Iteration [1000/6000], D/latent_real 3.2877884e-05
Elapsed time [0:34:51], Iteration [1000/6000], D/latent_fake 5.9242324e-05
Elapsed time [0:34:51], Iteration [1000/6000], D/latent_reg 1.7900673e-05
Elapsed time [0:34:51], Iteration [1000/6000], D/ref_real 3.2785334e-05
Elapsed time [0:34:51], Iteration [1000/6000], D/ref_fake 3.3252056e-07
Elapsed time [0:34:51], Iteration [1000/6000], D/ref_reg 1.7910741e-05
Elapsed time [0:34:51], Iteration [1000/6000], G/latent_adv 9.766323
Elapsed time [0:34:51], Iteration [1000/6000], G/latent_sty 0.04155442
Elapsed time [0:34:51], Iteration [1000/6000], G/latent_ds 33.543137
Elapsed time [0:34:51], Iteration [1000/6000], G/latent_cyc 30.618359
Elapsed time [0:34:51], Iteration [1000/6000], G/ref_adv 14.91778
Elapsed time [0:34:51], Iteration [1000/6000], G/ref_sty 4931.98
Elapsed time [0:34:51], Iteration [1000/6000], G/ref_ds 3.2844734
Elapsed time [0:34:51], Iteration [1000/6000], G/ref_cyc 30.256683
Elapsed time [0:34:51], Iteration [1000/6000], G/lambda_ds 0.8333333333333517
Elapsed time [0:51:21], Iteration [1500/6000], D/latent_real 3.895065e-06
Elapsed time [0:51:21], Iteration [1500/6000], D/latent_fake 1.8259102e-09
Elapsed time [0:51:21], Iteration [1500/6000], D/latent_reg 2.7209067e-05
Elapsed time [0:51:21], Iteration [1500/6000], D/ref_real 3.889516e-06
Elapsed time [0:51:21], Iteration [1500/6000], D/ref_fake 3.044195e-13
Elapsed time [0:51:21], Iteration [1500/6000], D/ref_reg 2.7181919e-05
Elapsed time [0:51:21], Iteration [1500/6000], G/latent_adv 20.124987
Elapsed time [0:51:21], Iteration [1500/6000], G/latent_sty 0.042978738
Elapsed time [0:51:21], Iteration [1500/6000], G/latent_ds 6.781559
Elapsed time [0:51:21], Iteration [1500/6000], G/latent_cyc 24.195248
Elapsed time [0:51:21], Iteration [1500/6000], G/ref_adv 28.627546
Elapsed time [0:51:21], Iteration [1500/6000], G/ref_sty 2035.4679
Elapsed time [0:51:21], Iteration [1500/6000], G/ref_ds 1.2784641
Elapsed time [0:51:21], Iteration [1500/6000], G/ref_cyc 21.998203
Elapsed time [0:51:21], Iteration [1500/6000], G/lambda_ds 0.7500000000000275
Elapsed time [1:07:58], Iteration [2000/6000], D/latent_real 2.5237994e-06
Elapsed time [1:07:58], Iteration [2000/6000], D/latent_fake 1.6464643e-12
Elapsed time [1:07:58], Iteration [2000/6000], D/latent_reg 2.9553012e-05
Elapsed time [1:07:58], Iteration [2000/6000], D/ref_real 2.5142215e-06
Elapsed time [1:07:58], Iteration [2000/6000], D/ref_fake 1.6905084e-09
Elapsed time [1:07:58], Iteration [2000/6000], D/ref_reg 2.9505836e-05
Elapsed time [1:07:58], Iteration [2000/6000], G/latent_adv 27.145195
Elapsed time [1:07:58], Iteration [2000/6000], G/latent_sty 0.083152466
Elapsed time [1:07:58], Iteration [2000/6000], G/latent_ds 32.264793
Elapsed time [1:07:58], Iteration [2000/6000], G/latent_cyc 35.44804
Elapsed time [1:07:58], Iteration [2000/6000], G/ref_adv 20.16832
Elapsed time [1:07:58], Iteration [2000/6000], G/ref_sty 2833.9543
Elapsed time [1:07:58], Iteration [2000/6000], G/ref_ds 5.3448505
Elapsed time [1:07:58], Iteration [2000/6000], G/ref_cyc 35.21791
Elapsed time [1:07:58], Iteration [2000/6000], G/lambda_ds 0.6666666666667034
Elapsed time [1:24:39], Iteration [2500/6000], D/latent_real 0.00011105127
Elapsed time [1:24:39], Iteration [2500/6000], D/latent_fake 1.5896394e-14
Elapsed time [1:24:39], Iteration [2500/6000], D/latent_reg 0.002541282
Elapsed time [1:24:39], Iteration [2500/6000], D/ref_real 0.000100846126
Elapsed time [1:24:39], Iteration [2500/6000], D/ref_fake 7.2359145e-16
Elapsed time [1:24:39], Iteration [2500/6000], D/ref_reg 0.0024638632
Elapsed time [1:24:39], Iteration [2500/6000], G/latent_adv 31.634012
Elapsed time [1:24:39], Iteration [2500/6000], G/latent_sty 0.03502376
Elapsed time [1:24:39], Iteration [2500/6000], G/latent_ds 17.42342
Elapsed time [1:24:39], Iteration [2500/6000], G/latent_cyc 18.593584
Elapsed time [1:24:39], Iteration [2500/6000], G/ref_adv 34.839787
Elapsed time [1:24:39], Iteration [2500/6000], G/ref_sty 3970.8281
Elapsed time [1:24:39], Iteration [2500/6000], G/ref_ds 0.8345002
Elapsed time [1:24:39], Iteration [2500/6000], G/ref_cyc 18.072935
Elapsed time [1:24:39], Iteration [2500/6000], G/lambda_ds 0.5833333333333792
Elapsed time [1:41:22], Iteration [3000/6000], D/latent_real 5.3445833e-06
Elapsed time [1:41:22], Iteration [3000/6000], D/latent_fake 8.820166e-10
Elapsed time [1:41:22], Iteration [3000/6000], D/latent_reg 6.4297914e-05
Elapsed time [1:41:22], Iteration [3000/6000], D/ref_real 5.36975e-06
Elapsed time [1:41:22], Iteration [3000/6000], D/ref_fake 3.0929835e-13
Elapsed time [1:41:22], Iteration [3000/6000], D/ref_reg 6.230037e-05
Elapsed time [1:41:22], Iteration [3000/6000], G/latent_adv 20.843857
Elapsed time [1:41:22], Iteration [3000/6000], G/latent_sty 0.03444063
Elapsed time [1:41:22], Iteration [3000/6000], G/latent_ds 9.778823
Elapsed time [1:41:22], Iteration [3000/6000], G/latent_cyc 28.097446
Elapsed time [1:41:22], Iteration [3000/6000], G/ref_adv 28.774145
Elapsed time [1:41:22], Iteration [3000/6000], G/ref_sty 2043.1276
Elapsed time [1:41:22], Iteration [3000/6000], G/ref_ds 0.5735893
Elapsed time [1:41:22], Iteration [3000/6000], G/ref_cyc 28.21649
Elapsed time [1:41:22], Iteration [3000/6000], G/lambda_ds 0.5000000000000551
Elapsed time [1:58:17], Iteration [3500/6000], D/latent_real 4.4489816e-06
Elapsed time [1:58:17], Iteration [3500/6000], D/latent_fake 3.4955981e-12
Elapsed time [1:58:17], Iteration [3500/6000], D/latent_reg 1.8419665e-05
Elapsed time [1:58:17], Iteration [3500/6000], D/ref_real 4.4123353e-06
Elapsed time [1:58:17], Iteration [3500/6000], D/ref_fake 4.5671822e-18
Elapsed time [1:58:17], Iteration [3500/6000], D/ref_reg 1.8420711e-05
Elapsed time [1:58:17], Iteration [3500/6000], G/latent_adv 26.39033
Elapsed time [1:58:17], Iteration [3500/6000], G/latent_sty 0.024623169
Elapsed time [1:58:17], Iteration [3500/6000], G/latent_ds 2.0222964
Elapsed time [1:58:17], Iteration [3500/6000], G/latent_cyc 28.361814
Elapsed time [1:58:17], Iteration [3500/6000], G/ref_adv 40.008335
Elapsed time [1:58:17], Iteration [3500/6000], G/ref_sty 825.08575
Elapsed time [1:58:17], Iteration [3500/6000], G/ref_ds 0.13139339
Elapsed time [1:58:17], Iteration [3500/6000], G/ref_cyc 27.434856
Elapsed time [1:58:17], Iteration [3500/6000], G/lambda_ds 0.4166666666667309
Elapsed time [2:14:58], Iteration [4000/6000], D/latent_real 3.6162882e-07
Elapsed time [2:14:58], Iteration [4000/6000], D/latent_fake 5.1923527e-10
Elapsed time [2:14:58], Iteration [4000/6000], D/latent_reg 3.846259e-05
Elapsed time [2:14:58], Iteration [4000/6000], D/ref_real 3.6288532e-07
Elapsed time [2:14:58], Iteration [4000/6000], D/ref_fake 5.942721e-08
Elapsed time [2:14:58], Iteration [4000/6000], D/ref_reg 3.843496e-05
Elapsed time [2:14:58], Iteration [4000/6000], G/latent_adv 21.378086
Elapsed time [2:14:58], Iteration [4000/6000], G/latent_sty 0.106728025
Elapsed time [2:14:58], Iteration [4000/6000], G/latent_ds 20.904701
Elapsed time [2:14:58], Iteration [4000/6000], G/latent_cyc 42.642372
Elapsed time [2:14:58], Iteration [4000/6000], G/ref_adv 17.064861
Elapsed time [2:14:58], Iteration [4000/6000], G/ref_sty 51.672703
Elapsed time [2:14:58], Iteration [4000/6000], G/ref_ds 2.3570015
Elapsed time [2:14:58], Iteration [4000/6000], G/ref_cyc 42.118973
Elapsed time [2:14:58], Iteration [4000/6000], G/lambda_ds 0.33333333333340676
Elapsed time [2:31:26], Iteration [4500/6000], D/latent_real 1.2024437e-06
Elapsed time [2:31:26], Iteration [4500/6000], D/latent_fake 9.023747e-11
Elapsed time [2:31:26], Iteration [4500/6000], D/latent_reg 2.4809478e-05
Elapsed time [2:31:26], Iteration [4500/6000], D/ref_real 1.2103014e-06
Elapsed time [2:31:26], Iteration [4500/6000], D/ref_fake 1.2240717e-15
Elapsed time [2:31:26], Iteration [4500/6000], D/ref_reg 2.4771667e-05
Elapsed time [2:31:26], Iteration [4500/6000], G/latent_adv 23.126303
Elapsed time [2:31:26], Iteration [4500/6000], G/latent_sty 0.05421421
Elapsed time [2:31:26], Iteration [4500/6000], G/latent_ds 22.202696
Elapsed time [2:31:26], Iteration [4500/6000], G/latent_cyc 26.787922
Elapsed time [2:31:26], Iteration [4500/6000], G/ref_adv 34.32032
Elapsed time [2:31:26], Iteration [4500/6000], G/ref_sty 1543.9764
Elapsed time [2:31:26], Iteration [4500/6000], G/ref_ds 0.5546016
Elapsed time [2:31:26], Iteration [4500/6000], G/ref_cyc 26.969646
Elapsed time [2:31:26], Iteration [4500/6000], G/lambda_ds 0.2500000000000826
Elapsed time [2:47:51], Iteration [5000/6000], D/latent_real 3.7615814e-06
Elapsed time [2:47:51], Iteration [5000/6000], D/latent_fake 6.1679136e-19
Elapsed time [2:47:51], Iteration [5000/6000], D/latent_reg 1.4114717e-05
Elapsed time [2:47:51], Iteration [5000/6000], D/ref_real 3.7170662e-06
Elapsed time [2:47:51], Iteration [5000/6000], D/ref_fake 1.3349664e-15
Elapsed time [2:47:51], Iteration [5000/6000], D/ref_reg 1.4105939e-05
Elapsed time [2:47:51], Iteration [5000/6000], G/latent_adv 42.000294
Elapsed time [2:47:51], Iteration [5000/6000], G/latent_sty 0.03513376
Elapsed time [2:47:51], Iteration [5000/6000], G/latent_ds 2.6810305
Elapsed time [2:47:51], Iteration [5000/6000], G/latent_cyc 16.486334
Elapsed time [2:47:51], Iteration [5000/6000], G/ref_adv 34.27865
Elapsed time [2:47:51], Iteration [5000/6000], G/ref_sty 1753.359
Elapsed time [2:47:51], Iteration [5000/6000], G/ref_ds 0.62547046
Elapsed time [2:47:51], Iteration [5000/6000], G/ref_cyc 17.005342
Elapsed time [2:47:51], Iteration [5000/6000], G/lambda_ds 0.16666666666674457
Elapsed time [3:04:31], Iteration [5500/6000], D/latent_real 9.165446e-07
Elapsed time [3:04:31], Iteration [5500/6000], D/latent_fake 9.630188e-10
Elapsed time [3:04:31], Iteration [5500/6000], D/latent_reg 2.2647702e-05
Elapsed time [3:04:31], Iteration [5500/6000], D/ref_real 9.2337206e-07
Elapsed time [3:04:31], Iteration [5500/6000], D/ref_fake 1.4190508e-07
Elapsed time [3:04:31], Iteration [5500/6000], D/ref_reg 2.254527e-05
Elapsed time [3:04:31], Iteration [5500/6000], G/latent_adv 20.79223
Elapsed time [3:04:31], Iteration [5500/6000], G/latent_sty 0.026682168
Elapsed time [3:04:31], Iteration [5500/6000], G/latent_ds 11.481898
Elapsed time [3:04:31], Iteration [5500/6000], G/latent_cyc 28.555185
Elapsed time [3:04:31], Iteration [5500/6000], G/ref_adv 15.918713
Elapsed time [3:04:31], Iteration [5500/6000], G/ref_sty 1588.65
Elapsed time [3:04:31], Iteration [5500/6000], G/ref_ds 0.9953184
Elapsed time [3:04:31], Iteration [5500/6000], G/ref_cyc 29.72444
Elapsed time [3:04:31], Iteration [5500/6000], G/lambda_ds 0.08333333333341
Elapsed time [3:21:09], Iteration [6000/6000], D/latent_real 3.292037e-05
Elapsed time [3:21:09], Iteration [6000/6000], D/latent_fake 2.396092e-10
Elapsed time [3:21:09], Iteration [6000/6000], D/latent_reg 3.2086697e-05
Elapsed time [3:21:09], Iteration [6000/6000], D/ref_real 3.259703e-05
Elapsed time [3:21:09], Iteration [6000/6000], D/ref_fake 7.805121e-13
Elapsed time [3:21:09], Iteration [6000/6000], D/ref_reg 3.2122865e-05
Elapsed time [3:21:09], Iteration [6000/6000], G/latent_adv 22.151058
Elapsed time [3:21:09], Iteration [6000/6000], G/latent_sty 0.03772318
Elapsed time [3:21:09], Iteration [6000/6000], G/latent_ds 4.0022397
Elapsed time [3:21:09], Iteration [6000/6000], G/latent_cyc 45.717335
Elapsed time [3:21:09], Iteration [6000/6000], G/ref_adv 27.884796
Elapsed time [3:21:09], Iteration [6000/6000], G/ref_sty 1092.5536
Elapsed time [3:21:09], Iteration [6000/6000], G/ref_ds 1.0627509
Elapsed time [3:21:09], Iteration [6000/6000], G/ref_cyc 45.289036
Elapsed time [3:21:09], Iteration [6000/6000], G/lambda_ds 7.683496850915961e-14
| MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Results
In this first cell we show an image where the rows represent a source image and the columns the style they are trying to mimic. We can see that the image still highly resembles the source image but has picked up some characteristics depending on the style of the reference. In most cases this style is mostly about the thickness of the lines, but it does vary slightly in other ways. | import matplotlib.pyplot as pyplot
for i in range(4):
pyplot.subplot(5,5,2+i)
pyplot.axis('off')
pyplot.imshow(np.reshape(inputs[i]['x_ref'],[28,28]), cmap='gray_r')
for i in range(4):
pyplot.subplot(5, 5, 5*(i+1) + 1)
pyplot.axis('off')
pyplot.imshow(np.reshape(inputs[i]['x_src'], [28,28]), cmap='gray_r')
for j in range(4):
pyplot.subplot(5, 5, 5*(i+1) + j +2)
pyplot.axis('off')
pyplot.imshow(np.reshape(solv.nets['generator'](inputs[i]['x_src'],solv.nets['style_encoder'](inputs[j]['x_ref'],inputs[j]['y_ref'])).numpy(), [28,28]), cmap='gray_r')
pyplot.show()
#left is source and top is the target trying to mimic its font | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Below we generate random style codes and look at the outputs. We notice that the images are quite likely to be distorted in this case, whereas outputs produced with the style of an existing reference image usually look cleaner. | for i in range(5):
pyplot.subplot(5,5,1+i)
pyplot.axis('off')
pyplot.imshow(np.reshape(solv.nets['generator'](inputs[0]['x_src'],tf.random.normal((1,24))).numpy(), [28,28]), cmap='gray_r') | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
Here we can see the process of how the image transforms into the target. In these small images there is not too much that is changing but we can still appreciate the process. | s1=solv.nets['style_encoder'](inputs[3]['x_src'],inputs[3]['y_src'])
s2=solv.nets['style_encoder'](inputs[3]['x_ref'],inputs[3]['y_ref'])
for i in range(5):
pyplot.subplot(5,5,1+i)
pyplot.axis('off')
s=(1-i/5)*s1+i/5*s2
pyplot.imshow(np.reshape(solv.nets['generator'](inputs[3]['x_src'],s).numpy(), [28,28]), cmap='gray_r') | _____no_output_____ | MIT | AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb | nk555/AI-Projects |
git-bakup | USER='tonybutzer'
API_TOKEN='ATOKEN'
GIT_API_URL='https://api.github.com'
import base64
import urllib.request
def get_api(url):
try:
request = urllib.request.Request(GIT_API_URL + url)
base64string = base64.b64encode(('%s/token:%s' % (USER, API_TOKEN)).encode('ascii')).decode('ascii')
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib.request.urlopen(request)
result.close()
except:
print ('Failed to get api request from %s' % url)
!curl "https://api.github.com/users/tonybutzer/repos?per_page=1000" | grep -w clone_url | grep -o '[^"]\+://.\+.git' >myrepos.txt
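# NOTE (added): the GitHub REST API caps per_page at 100, so accounts with more than
# 100 repositories would need pagination on top of this single request.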
%%bash
mkdir -p ~/repo
for i in `cat myrepos.txt` ; do
{
echo $i
(cd ~/repo; git clone $i)
}; done
! ls ~/repo | active-fire
| MIT | Attic/repo/git-bakup.ipynb | tonybutzer/etscrum |
Euler Problem 94================It is easily proved that no equilateral triangle exists with integral length sides and integral area. However, the almost equilateral triangle 5-5-6 has an area of 12 square units.We shall define an almost equilateral triangle to be a triangle for which two sides are equal and the third differs by no more than one unit.Find the sum of the perimeters of all almost equilateral triangles with integral side lengths and area and whose perimeters do not exceed one billion (1,000,000,000). | a, b, p, s = 1, 0, 0, 0
while p <= 10**9:
s += p
a, b = 2*a + 3*b, a + 2*b
p = 4*a*a
a, b, p = 1, 1, 0
while p <= 10**9:
s += p
a, b = 2*a + 3*b, a + 2*b
p = 2*a*a
print(s)
| 518408346
| MIT | Euler 094 - Almost equilateral triangles.ipynb | Radcliffe/project-euler |
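A sketch of why the recurrence above generates exactly the triangles we need (my own justification, not part of the original notebook). Write the triangle as $(c, c, c+1)$ or $(c, c, c-1)$. By Heron's formula the area of $(c,c,c+1)$ is $\frac{c+1}{4}\sqrt{(3c+1)(c-1)}$ and the area of $(c,c,c-1)$ is $\frac{c-1}{4}\sqrt{(3c-1)(c+1)}$. If the perimeter satisfies $3c+1=4a^2$ with $a^2-3b^2=1$, then $c-1=4b^2$ and the area equals $(c+1)ab$, an integer; if $3c-1=2a^2$ with $a^2-3b^2=-2$, then $c+1=2b^2$ and the area equals $\frac{(c-1)ab}{2}$, again an integer. One can check that every almost equilateral Heronian triangle arises from a solution of one of these two Pell-type equations, and the map $(a,b)\mapsto(2a+3b,\ a+2b)$ steps through all their positive solutions starting from $(1,0)$ and $(1,1)$. The two loops therefore add the perimeters $4a^2$ and $2a^2$ of every such triangle up to $10^9$.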
Now You Code 4: Temperature Conversion
Write a python program which will convert temperatures from Celsius to Fahrenheit. The program should take a temperature in degrees Celsius as input and output a temperature in degrees Fahrenheit.
Example:
```
Enter the temperature in Celcius: 100
100 Celcius is 212 Fahrenheight
```
HINT: Use the web to find the formula to convert from Celsius to Fahrenheit.
Step 1: Problem Analysis
Inputs: a temperature in degrees Celsius
Outputs: the equivalent temperature in degrees Fahrenheit
Algorithm (Steps in Program):
1. Ask the user for the temperature in Celsius
2. Compute fahrenheit = celsius * 9/5 + 32
3. Print the Fahrenheit temperature | celcius = float(input("enter the temperature in celcius: "))
fahrenhieght=(celcius*9/5)+32
print("fahrenhieght equals " "%.2f" %fahrenhieght) | enter the temperature in celcius: 100
fahrenhieght equals 212.00
| MIT | content/lessons/03/Now-You-Code/NYC4-Temperature-Conversion.ipynb | MahopacHS/spring2019-Christian64Aguilar |
Cross Validation | from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import StratifiedKFold
import numpy as np
seed = 7
np.random.seed(seed)
dataset = np.loadtxt('pima-indians-diabetes.data', delimiter=',')
X = dataset[:, 0:8]
Y = dataset[:, 8]
X.shape
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X, Y):
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X[train], Y[train], epochs=150, batch_size=10, verbose=0)
scores = model.evaluate(X[test], Y[test], verbose=0)
print('%s: %.2f%%' % (model.metrics_names[1], scores[1] * 100))
cvscores.append(scores[1] * 100)
print('%.2f%% (+/- %.2f%%' % (np.mean(cvscores), np.std(cvscores)))
X.shape
Y.shape | _____no_output_____ | MIT | keras/170605-cross-validation.ipynb | aidiary/notebooks |
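For reference, a more compact version of the same experiment (my addition; it assumes the scikit-learn wrapper that shipped with the Keras version this notebook appears to use is still available):

```python
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score

def create_model():
    # same architecture as in the manual loop above
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

estimator = KerasClassifier(build_fn=create_model, epochs=150, batch_size=10, verbose=0)
print(cross_val_score(estimator, X, Y, cv=kfold).mean())
```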
Convolutional Neural Networks with Keras
In this lab, we will learn how to use the Keras library to build convolutional neural networks. We will also use the popular MNIST dataset and we will compare our results to using a conventional neural network.
Objective for this Notebook
1. How to use the Keras library to build convolutional neural networks.
2. Convolutional Neural Network with One Convolutional and Pooling Layers.
3. Convolutional Neural Network with Two Convolutional and Pooling Layers.
Table of Contents
1. Import Keras and Packages
2. Convolutional Neural Network with One Convolutional and Pooling Layers
3. Convolutional Neural Network with Two Convolutional and Pooling Layers
Import Keras and Packages
Let's start by importing the keras libraries and the packages that we would need to build a neural network. | import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
When working with convolutional neural networks in particular, we will need additional packages. | from tensorflow.keras.layers import Conv2D # to add convolutional layers
from tensorflow.keras.layers import MaxPooling2D # to add pooling layers
from tensorflow.keras.layers import Flatten # to flatten data for fully connected layers | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
Convolutional Layer with One set of convolutional and pooling layers | # import data
from tensorflow.keras.datasets import mnist
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][pixels][width][height]
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
Let's normalize the pixel values to be between 0 and 1 | X_train = X_train / 255 # normalize training data
X_test = X_test / 255 # normalize test data | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
Next, let's convert the target variable into binary categories | y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
num_classes = y_test.shape[1] # number of categories | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
Next, let's define a function that creates our model. Let's start with one set of convolutional and pooling layers. | def convolutional_model():
# create model
model = Sequential()
model.add(Conv2D(16, (5, 5), strides=(1, 1), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
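Optionally (my addition), `model.summary()` makes the effect of the layers explicit: a 5×5 convolution on a 28×28 input produces 24×24×16 feature maps, 2×2 max pooling halves that to 12×12×16, and flattening leaves 12·12·16 = 2304 inputs for the dense layer.

```python
# Quick architecture check for the one-layer model
model = convolutional_model()
model.summary()
```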
Finally, let's call the function to create the model, and then let's train it and evaluate it. | # build the model
model = convolutional_model()
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: {} \n Error: {}".format(scores[1], 100-scores[1]*100)) | WARNING:tensorflow:From /home/jupyterlab/conda/envs/python/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 - 43s - loss: 0.2902 - acc: 0.9203 - val_loss: 0.1027 - val_acc: 0.9695
Epoch 2/10
60000/60000 - 43s - loss: 0.0866 - acc: 0.9751 - val_loss: 0.0647 - val_acc: 0.9785
Epoch 3/10
60000/60000 - 43s - loss: 0.0591 - acc: 0.9827 - val_loss: 0.0489 - val_acc: 0.9847
Epoch 4/10
60000/60000 - 43s - loss: 0.0458 - acc: 0.9862 - val_loss: 0.0415 - val_acc: 0.9867
Epoch 5/10
60000/60000 - 43s - loss: 0.0355 - acc: 0.9892 - val_loss: 0.0371 - val_acc: 0.9876
Epoch 6/10
60000/60000 - 44s - loss: 0.0295 - acc: 0.9911 - val_loss: 0.0378 - val_acc: 0.9870
Epoch 7/10
60000/60000 - 43s - loss: 0.0235 - acc: 0.9926 - val_loss: 0.0358 - val_acc: 0.9877
Epoch 8/10
60000/60000 - 43s - loss: 0.0195 - acc: 0.9942 - val_loss: 0.0363 - val_acc: 0.9882
Epoch 9/10
60000/60000 - 43s - loss: 0.0163 - acc: 0.9953 - val_loss: 0.0353 - val_acc: 0.9880
Epoch 10/10
60000/60000 - 43s - loss: 0.0133 - acc: 0.9962 - val_loss: 0.0331 - val_acc: 0.9888
Accuracy: 0.9887999892234802
Error: 1.1200010776519775
| MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
* * * Convolutional Layer with two sets of convolutional and pooling layers Let's redefine our convolutional model so that it has two convolutional and pooling layers instead of just one layer of each. | def convolutional_model():
# create model
model = Sequential()
model.add(Conv2D(16, (5, 5), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(8, (2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model | _____no_output_____ | MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
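Again optionally (my addition), compare the summary with the one-layer model: the extra 2×2 convolution with 8 filters turns 12×12×16 into 11×11×8, pooling reduces it to 5×5×8, so the flattened vector has only 5·5·8 = 200 features instead of 2304, which shrinks the dense layer considerably.

```python
# Quick architecture check for the two-layer model
model = convolutional_model()
model.summary()
```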
Now, let's call the function to create our new convolutional neural network, and then let's train it and evaluate it. | # build the model
model = convolutional_model()
# fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: {} \n Error: {}".format(scores[1], 100-scores[1]*100)) | Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 - 47s - loss: 0.4901 - acc: 0.8633 - val_loss: 0.1385 - val_acc: 0.9570
Epoch 2/10
60000/60000 - 47s - loss: 0.1185 - acc: 0.9642 - val_loss: 0.0848 - val_acc: 0.9728
Epoch 3/10
60000/60000 - 47s - loss: 0.0831 - acc: 0.9740 - val_loss: 0.0633 - val_acc: 0.9813
Epoch 4/10
60000/60000 - 47s - loss: 0.0657 - acc: 0.9795 - val_loss: 0.0661 - val_acc: 0.9783
Epoch 5/10
60000/60000 - 47s - loss: 0.0566 - acc: 0.9830 - val_loss: 0.0514 - val_acc: 0.9843
Epoch 6/10
60000/60000 - 47s - loss: 0.0496 - acc: 0.9845 - val_loss: 0.0476 - val_acc: 0.9868
Epoch 7/10
60000/60000 - 47s - loss: 0.0432 - acc: 0.9869 - val_loss: 0.0478 - val_acc: 0.9857
Epoch 8/10
60000/60000 - 47s - loss: 0.0400 - acc: 0.9873 - val_loss: 0.0497 - val_acc: 0.9848
Epoch 9/10
60000/60000 - 47s - loss: 0.0364 - acc: 0.9887 - val_loss: 0.0406 - val_acc: 0.9873
Epoch 10/10
60000/60000 - 47s - loss: 0.0325 - acc: 0.9899 - val_loss: 0.0373 - val_acc: 0.9883
Accuracy: 0.9883000254631042
Error: 1.1699974536895752
| MIT | 2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb | aqafridi/AI-Engineering-Specialization |
Multivariable Differential Calculus and its Applications
A function with only one independent variable is called a function of one variable. Many practical problems involve several factors at once; mathematically, this means one variable depends on several variables. This leads to multivariable functions and to the problems of differentiating and integrating them. In this chapter we extend the differential calculus of one variable to multivariable functions and discuss its applications. We mainly work with functions of two variables, because new issues already appear in going from one variable to two, while going from two variables to more than two is a matter of analogy. This chapter covers:
1. Basic concepts of multivariable functions
2. Partial derivatives
3. The total differential
4. Differentiation of composite multivariable functions
5. Differentiation of implicit functions
6. Geometric applications of multivariable differential calculus
7. Directional derivatives and the gradient
8. Extrema of multivariable functions and how to find them
9. Taylor's formula for functions of two variables
10. The method of least squares

**1. Basic concepts of multivariable functions**

**1.1 Planar point sets; n-dimensional space**

For functions of one variable, many concepts, theorems and methods rest on point sets in $\mathbb{R}^1$: distance between two points, intervals, neighborhoods, and so on. To extend one-variable calculus to several variables we first extend these notions and add a few new ones. We begin with planar point sets, carrying the relevant concepts from $\mathbb{R}^1$ over to $\mathbb{R}^2$, and then introduce $n$-dimensional space so that everything generalizes to $\mathbb{R}^n$.

**Planar point sets**: From plane analytic geometry we know that once a rectangular coordinate system is introduced, points $P$ of the plane correspond one-to-one to ordered pairs of real numbers $(x,y)$, so we usually identify the pair $(x,y)$ with the point $P$. A plane equipped with a coordinate system is called a coordinate plane; the set of all ordered pairs, $\mathbb{R}^2=\mathbb{R}\times\mathbb{R}=\{(x,y)\mid x,y\in\mathbb{R}\}$, represents the coordinate plane. A set of points of the coordinate plane having some property $P$ is called a planar point set, written $E=\{(x,y)\mid (x,y)\ \text{has property}\ P\}$.

We now introduce neighborhoods in $\mathbb{R}^2$. Let $P_0(x_0,y_0)$ be a point of the $xOy$ plane and $\delta$ a positive number. The set of all points $P(x,y)$ whose distance from $P_0(x_0,y_0)$ is less than $\delta$ is called the $\delta$-neighborhood of $P_0$, written $U(P_0,\delta)$:
$$ U(P_0,\delta)=\{P \mid |PP_0|<\delta\} $$
that is,
$$ U(P_0,\delta)=\{(x,y)\mid\sqrt{(x-x_0)^2+(y-y_0)^2}<\delta\} $$
The deleted $\delta$-neighborhood of $P_0$ is written $\mathring{U}(P_0, \delta)$:
$$\mathring{U}(P_0, \delta)=\{P\mid 0<|PP_0|<\delta\}$$
Geometrically, $U(P_0,\delta)$ is the set of points $P(x,y)$ inside the circle of the $xOy$ plane with center $P_0(x_0,y_0)$ and radius $\delta>0$. When the radius need not be emphasized we write $U(P_0)$ for some neighborhood of $P_0$ and $\mathring{U}(P_0)$ for a deleted neighborhood.

Neighborhoods describe the relation between a point and a point set. Any point $P \in \mathbb{R}^2$ and any point set $E \subset \mathbb{R}^2$ stand in exactly one of the following three relations:
1. **Interior point**: if there is a neighborhood $U(P)$ with $U(P) \subset E$, then $P$ is an interior point of $E$.
2. **Exterior point**: if there is a neighborhood $U(P)$ with $U(P) \cap E = \emptyset$, then $P$ is an exterior point of $E$.
3. **Boundary point**: if every neighborhood of $P$ contains points of $E$ as well as points not in $E$, then $P$ is a boundary point of $E$. The set of all boundary points of $E$ is called the **boundary** of $E$, written $\partial E$.

An interior point of $E$ necessarily belongs to $E$; an exterior point of $E$ necessarily does not; a boundary point may or may not belong to $E$.

Using these notions we define some important planar point sets:
1. **Open set**: if every point of $E$ is an interior point of $E$, then $E$ is an open set.
2. **Closed set**: if $\partial E \subset E$, then $E$ is a closed set.
3. **Connected set**: if any two points of $E$ can be joined by a polygonal line lying entirely in $E$, then $E$ is connected.
4. **Region (open region)**: a connected open set is called a region, or open region.
5. **Closed region**: an open region together with its boundary is called a closed region.
6. **Bounded set**: if there is a positive number $r$ such that $E \subset U(O,r)$, where $O$ is the origin, then $E$ is bounded.
7. **Unbounded set**: a set that is not bounded is called unbounded.

**n-dimensional space**: Let $n$ be a fixed positive integer. $\mathbb{R}^n$ denotes the set of all ordered $n$-tuples of real numbers $(x_1,x_2,\ldots,x_n)$:
$$ \mathbb{R}^n = \mathbb{R} \times \mathbb{R} \times \cdots \times \mathbb{R} = \{(x_1, x_2,\ldots, x_n)\mid x_i \in \mathbb{R}, i=1,2,\ldots,n\} $$
An element $(x_1,x_2,\ldots,x_n)$ of $\mathbb{R}^n$ is sometimes written as a single letter $x$, i.e. $x=(x_1,x_2,\ldots,x_n)$. When all $x_i\ (i=1,2,\ldots,n)$ are zero, the element is called the zero element of $\mathbb{R}^n$, written $0$ or $O$. In analytic geometry, via rectangular coordinates, elements of $\mathbb{R}^2$ (or $\mathbb{R}^3$) correspond to points or vectors of the plane (or of space); likewise an element $x=(x_1,x_2,\ldots,x_n)$ of $\mathbb{R}^n$ is called a point of $\mathbb{R}^n$ or an $n$-dimensional vector, and $x_i$ is its $i$-th coordinate or $i$-th component. The zero element $0$ is the origin of $\mathbb{R}^n$ or the $n$-dimensional zero vector.

To relate elements of $\mathbb{R}^n$ to one another, linear operations are defined on $\mathbb{R}^n$: for $x=(x_1,x_2,\ldots,x_n),\ y=(y_1,y_2,\ldots,y_n) \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$ we set
$$\begin{split}& x+y=(x_1+y_1, x_2+y_2,\ldots,x_n+y_n) \\& \lambda x = (\lambda x_1, \lambda x_2, \ldots, \lambda x_n)\end{split}$$
The set $\mathbb{R}^n$ with these linear operations is called $n$-dimensional space.

The distance between points $x=(x_1,x_2,\ldots,x_n)$ and $y=(y_1,y_2,\ldots,y_n)$ of $\mathbb{R}^n$, written $\rho(x,y)$, is defined by
$$ \rho(x,y) = \sqrt{(x_1-y_1)^2+(x_2-y_2)^2+\cdots+(x_n-y_n)^2} $$
For $n=1,2,3$ this agrees with the usual distance on the line, in the coordinate plane and in space. The distance from $x$ to the zero element $0$ is written $||x||$ (in $\mathbb{R}^1, \mathbb{R}^2, \mathbb{R}^3$ usually $|x|$):
$$ ||x|| = \sqrt{x_1^2+x_2^2+\cdots+x_n^2} $$
With this notation and the linear operations,
$$ ||x-y|| = \sqrt{(x_1-y_1)^2+(x_2-y_2)^2+\cdots+(x_n-y_n)^2} = \rho(x,y) $$
Having a distance on $\mathbb{R}^n$, we can define the limit of a variable point: let $x=(x_1,x_2,\ldots,x_n),\ a=(a_1,a_2,\ldots,a_n) \in \mathbb{R}^n$. If
$$ ||x-a|| \rightarrow 0 $$
we say the variable $x$ tends to the fixed element $a$ in $\mathbb{R}^n$, written $x \rightarrow a$. Clearly
$$ x \rightarrow a \Leftrightarrow x_1 \rightarrow a_1, x_2 \rightarrow a_2, \cdots, x_n \rightarrow a_n $$
With linear operations and a distance on $\mathbb{R}^n$, the earlier notions for planar point sets carry over to $n\ (n \geq 3)$ dimensions. For example, for $a=(a_1,a_2,\ldots,a_n) \in \mathbb{R}^n$ and $\delta>0$, the point set
$$ U(a, \delta) = \{x\mid x \in \mathbb{R}^n, \rho(x,a) < \delta \} $$
is the neighborhood of the point $a$ in $\mathbb{R}^n$. Starting from neighborhoods one defines interior points, exterior points, boundary points, open sets, closed sets, regions and so on, which we do not repeat here.

**1.2 The concept of a multivariable function**

**Definition 1. Let $D$ be a non-empty subset of $\mathbb{R}^2$. A mapping $f:D \rightarrow \mathbb{R}$ is called a function of two variables defined on $D$, usually written
$$ z=f(x,y), (x,y) \in D $$
or
$$ z=f(P), P \in D $$
where the point set $D$ is the domain, $x,y$ are the independent variables and $z$ is the dependent variable.**

The value of $z$ corresponding to the pair $(x,y)$ is called the value of $f$ at $(x,y)$, written $f(x,y)$, so $z=f(x,y)$. The set of all values of $f$ is the range, written $f(D)$:
$$ f(D)=\{z\mid z=f(x,y),(x,y) \in D\} $$
As for functions of one variable, the symbols $f$ and $f(x,y)$ mean different things, but it is customary to write the function on $D$ as $f(x,y),(x,y) \in D$ or $z=f(x,y),(x,y) \in D$; other letters may be used, e.g. $z=\phi(x,y)$, $z=z(x,y)$.

Functions of three variables $u=f(x,y,z),(x,y,z) \in D$ and of more than three variables are defined similarly. In general, replacing the planar point set $D$ in Definition 1 by a point set $D$ in $\mathbb{R}^n$, a mapping $f:D \rightarrow \mathbb{R}$ is an **$n$-variable function**, usually written
$$ u=f(x_1,x_2,\cdots,x_n),(x_1,x_2,\cdots,x_n) \in D $$
or briefly $u=f(x),x=(x_1,x_2,\cdots,x_n) \in D$, or $u=f(P),P(x_1,x_2,\cdots,x_n) \in D$. For $n=2$ or $n=3$ one customarily writes $(x,y)$ and $(x,y,z)$; writing the points as $P(x,y)$ or $M(x,y,z)$, the corresponding functions are abbreviated $z=f(P)$ and $u=f(M)$. For $n=1$ this is a function of one variable; for $n \geq 2$ these are collectively called **multivariable functions**.

For the domain we adopt the same convention as for one variable: when a multivariable function $u=f(x)$ is given by a formula, its domain is the **natural domain**, i.e. the set of points $x$ for which the formula makes sense, and is then not written out explicitly.

Let the domain of $z=f(x,y)$ be $D$. For any $P(x,y) \in D$ the value $z=f(x,y)$ determines a point $M(x,y,z)$ of space with $x$ as abscissa, $y$ as ordinate and $z=f(x,y)$ as the third coordinate. As $(x,y)$ ranges over all of $D$ we obtain the point set
$$ \{(x,y,z)\mid z=f(x,y), (x,y) \in D\} $$
called the **graph of the function $z=f(x,y)$**; we usually say the graph of a function of two variables is a surface.

**1.3 Limits of multivariable functions**

Consider first the limit of $z=f(x,y)$ as $(x,y) \rightarrow (x_0,y_0)$, i.e. as $P(x,y) \rightarrow P_0(x_0,y_0)$. Here $P \rightarrow P_0$ means the point $P$ approaches $P_0$ in an arbitrary manner, i.e. the distance between them tends to zero:
$$ |PP_0| = \sqrt{(x-x_0)^2 + (y-y_0)^2} \rightarrow 0 $$
As in the one-variable case, if during $P(x,y) \rightarrow P_0(x_0, y_0)$ the values $f(x,y)$ approach a fixed constant $A$ without bound, we say $A$ is the limit of $f(x,y)$ as $(x,y) \rightarrow (x_0,y_0)$. In $\epsilon-\delta$ language:

**Definition 2. Let $f(P)=f(x,y)$ be defined on $D$ and let $P_0(x_0,y_0)$ be an accumulation point of $D$. If there is a constant $A$ such that for every $\epsilon>0$ there is $\delta>0$ such that for all $P(x,y) \in D \cap \mathring{U}(P_0,\delta)$,
$$ |f(P)-A| = |f(x,y)-A| < \epsilon $$
then $A$ is called the limit of $f(x,y)$ as $(x,y) \rightarrow (x_0,y_0)$, written
$$ \lim_{(x,y) \rightarrow (x_0, y_0)}f(x,y)=A \text{ or } f(x,y) \rightarrow A((x,y) \rightarrow (x_0,y_0)) $$
also written
$$ \lim_{P \rightarrow P_0}f(P)=A \text{ or } f(P) \rightarrow A(P \rightarrow P_0) $$**

To distinguish it from the one-variable case, the limit of a function of two variables is called a **double limit**. Note that the double limit exists only if $f(x,y)$ approaches $A$ no matter how $P(x,y)$ approaches $P_0(x_0,y_0)$. Hence if $P(x,y)$ approaches $P_0(x_0,y_0)$ in some particular way, e.g. along a fixed line or curve, and $f(x,y)$ approaches a definite value, we still cannot conclude the limit exists. Conversely, if $f(x,y)$ tends to different values when $P(x,y)$ tends to $P_0(x_0,y_0)$ along different paths, we can conclude the limit does not exist.

The notion of limit extends to $n$-variable functions $u=f(P)$, and the limit laws are analogous to the one-variable case.

**1.4 Continuity of multivariable functions**

**Definition 3. Let $f(P)=f(x,y)$ be defined on $D$, and let $P_0(x_0,y_0)$ be an accumulation point of $D$ with $P_0 \in D$. If
$$ \lim_{(x,y) \rightarrow (x_0, y_0)}f(x,y) = f(x_0,y_0) $$
then $f(x,y)$ is said to be continuous at $P_0(x_0,y_0)$. If $f(x,y)$ is defined on $D$, every point of $D$ is an accumulation point of the domain, and $f(x,y)$ is continuous at every point of $D$, then $f(x,y)$ is continuous on $D$, i.e. a continuous function on $D$.**

The notion of continuity extends similarly to $n$-variable functions $f(P)$.

**Definition 4. Let the domain of $f(x,y)$ be $D$ and let $P_0(x_0,y_0)$ be an accumulation point of $D$. If $f(x,y)$ is not continuous at $P_0(x_0,y_0)$, then $P_0(x_0,y_0)$ is called a point of discontinuity of $f(x,y)$.**

As noted above, the limit laws for one-variable functions carry over to multivariable functions. Consequently sums, differences and products of continuous multivariable functions are continuous; quotients of continuous functions are continuous where the denominator does not vanish; and composites of continuous multivariable functions are continuous. Analogously to elementary functions of one variable, a multivariable elementary function is one expressible by a single formula built from constants and basic elementary functions of the separate variables by finitely many arithmetic operations and compositions. Every multivariable elementary function is continuous on its region of definition, meaning a region or closed region contained in its domain.

By continuity of multivariable elementary functions, to find the limit at a point $P_0$ lying in the region of definition we simply evaluate the function there:
$$ \lim_{P \rightarrow P_0}f(P) = f(P_0) $$
Analogously to continuous one-variable functions on a closed interval, multivariable functions continuous on a bounded closed region have the following properties:

**Property 1 (boundedness and extreme values)**: a multivariable function continuous on a bounded closed region $D$ is bounded on $D$ and attains its maximum and minimum there.

**Property 2 (intermediate value theorem)**: a multivariable function continuous on a bounded closed region $D$ takes every value between its maximum and minimum.

**Property 3 (uniform continuity)**: a multivariable function continuous on a bounded closed region $D$ is uniformly continuous on $D$.

**2. Partial derivatives**

**2.1 Definition and computation**

In studying one-variable functions we introduced the derivative from the rate of change of the function. For multivariable functions we likewise need to discuss rates of change, but there are several independent variables and the dependence of the function on them is more complicated. In this section we first consider the rate of change with respect to one of the variables. Take a function $f(x,y)$: if only $x$ varies while $y$ is held fixed (treated as a constant), it becomes a one-variable function of $x$, and the derivative of that function with respect to $x$ is called the **partial derivative** of $z=f(x,y)$ with respect to $x$:

**Definition. Let $z=f(x,y)$ be defined in some neighborhood of $(x_0,y_0)$. Fix $y$ at $y_0$ and give $x$ an increment $\Delta x$ at $x_0$; the function then has the increment
$$ f(x_0+\Delta x,y_0) - f(x_0,y_0) $$
If
$$ \lim_{\Delta x \rightarrow 0} \frac{f(x_0+\Delta x,y_0) - f(x_0,y_0)}{\Delta x} $$
exists, this limit is called the partial derivative of $z=f(x,y)$ with respect to $x$ at $(x_0,y_0)$, written
$$ \frac{\partial z}{\partial x}\Big|_{\substack{x=x_0\\y=y_0}},\ \frac{\partial f}{\partial x}\Big|_{\substack{x=x_0\\y=y_0}},\ z_x\Big|_{\substack{x=x_0\\y=y_0}}\ \text{or}\ f_x(x_0,y_0) $$**

If $z=f(x,y)$ has a partial derivative with respect to $x$ at every point $(x,y)$ of a region $D$, this partial derivative is itself a function of $x,y$, called the partial derivative function of $z=f(x,y)$ with respect to $x$, written
$$ \frac{\partial z}{\partial x},\frac{\partial f}{\partial x},z_x\ \text{or}\ f_x(x,y) $$
The partial derivative function of $z=f(x,y)$ with respect to $y$ is defined analogously and written
$$ \frac{\partial z}{\partial y},\frac{\partial f}{\partial y},z_y\ \text{or}\ f_y(x,y) $$
As with the derivative function of one variable, when no confusion arises the partial derivative functions are simply called partial derivatives.

Computing the partial derivatives of $z=f(x,y)$ requires no new methods: only one variable varies while the other is held fixed, so it is still one-variable differentiation. The notion of partial derivative extends to functions of more than two variables.

The partial derivatives of $z=f(x,y)$ at $(x_0,y_0)$ have the following geometric meaning. Let $M_0(x_0,y_0,f(x_0,y_0))$ be a point on the surface $z=f(x,y)$. The plane $y=y_0$ through $M_0$ cuts the surface in a curve whose equation in that plane is $z=f(x, y_0)$; the derivative $\frac{d}{dx}f(x,y_0)|_{x=x_0}$, i.e. the partial derivative $f_x(x_0,y_0)$, is the slope with respect to the $x$-axis of the tangent to this curve at $M_0$. Similarly, $f_y(x_0,y_0)$ is the slope with respect to the $y$-axis of the tangent at $M_0$ to the curve cut from the surface by the plane $x=x_0$.

We already know that a one-variable function that has a derivative at a point must be continuous there. For multivariable functions, however, existence of all partial derivatives at a point does not guarantee continuity there. This is because the partial derivatives only guarantee that $f(P)$ tends to $f(P_0)$ when $P$ approaches $P_0$ along directions parallel to the coordinate axes, not along arbitrary paths.

**2.2 Higher-order partial derivatives**

Suppose $z=f(x,y)$ has partial derivatives
$$ \frac{\partial z}{\partial x}=f_x(x,y), \frac{\partial z}{\partial y}=f_y(x,y) $$
in a region $D$; then $f_x(x,y),f_y(x,y)$ are again functions of $x,y$ on $D$. If their partial derivatives exist, they are called the second-order partial derivatives of $z=f(x,y)$. According to the order of differentiation there are four of them:
$$\begin{split}\frac{\partial}{\partial x}\left(\frac{\partial z}{\partial x}\right)=\frac{\partial^2 z}{\partial x^2}=f_{xx}(x,y) \\\frac{\partial}{\partial y}\left(\frac{\partial z}{\partial x}\right)=\frac{\partial^2 z}{\partial x \partial y}=f_{xy}(x,y) \\\frac{\partial}{\partial x}\left(\frac{\partial z}{\partial y}\right)=\frac{\partial^2 z}{\partial y \partial x}=f_{yx}(x,y) \\\frac{\partial}{\partial y}\left(\frac{\partial z}{\partial y}\right)=\frac{\partial^2 z}{\partial y^2}=f_{yy}(x,y) \\\end{split}$$
The second and third of these are called **mixed partial derivatives**. Third, fourth, ..., $n$-th order partial derivatives are obtained in the same way; partial derivatives of order two and higher are collectively called **higher-order partial derivatives**.

**Theorem. If the two mixed second-order partial derivatives $\frac{\partial^2 z}{\partial y \partial x}$ and $\frac{\partial^2 z}{\partial x \partial y}$ of $z=f(x,y)$ are continuous in a region $D$, then they are equal throughout $D$.**

In other words, under continuity the mixed second-order partial derivatives do not depend on the order of differentiation. For functions of more than two variables, higher-order partial derivatives are defined similarly, and mixed higher-order partials are likewise independent of the order of differentiation when they are continuous.

**3. The total differential**

**3.1 Definition of the total differential**

By the definition of partial derivatives, the partial derivative of a function of two variables with respect to one variable is the rate of change of the dependent variable with respect to that variable while the other is fixed. From the relation between increments and differentials for one-variable functions,
$$\begin{split}f(x+\Delta x, y) - f(x,y) \approx f_x(x,y)\Delta x \\f(x, y+\Delta y) - f(x,y) \approx f_y(x,y)\Delta y\end{split}$$
The left-hand sides are called the **partial increments** of the function with respect to $x$ and to $y$, and the right-hand sides the corresponding **partial differentials**.

In practice we often need the increment of the dependent variable when all the independent variables change, the so-called total increment. For two variables: let $z=f(x,y)$ be defined in some neighborhood of $P(x,y)$ and let $P'(x+\Delta x,y+\Delta y)$ be a point of that neighborhood; the difference $f(x+\Delta x, y+\Delta y)-f(x,y)$ is called the **total increment** of the function at $P$ corresponding to the increments $\Delta x, \Delta y$, written $\Delta z$:
$$ \Delta z = f(x+\Delta x, y+\Delta y)-f(x,y) $$
In general, computing $\Delta z$ is complicated. As for one variable, we wish to approximate $\Delta z$ by a linear function of the increments $\Delta x, \Delta y$, which leads to the following definition.

**Definition. Let $z=f(x,y)$ be defined in some neighborhood of $(x,y)$. If the total increment
$$ \Delta z = f(x+\Delta x, y+\Delta y) - f(x,y) $$
can be written as
$$ \Delta z = A\Delta x + B\Delta y + o(\rho) $$
where $A, B$ do not depend on $\Delta x, \Delta y$ but only on $x,y$, and $\rho=\sqrt{(\Delta x)^2+(\Delta y)^2}$, then $z=f(x,y)$ is said to be differentiable at $(x,y)$, and $A\Delta x + B\Delta y$ is called the total differential of $z=f(x,y)$ at $(x,y)$, written $dz$:
$$ dz=A\Delta x + B\Delta y $$**

If the function is differentiable at every point of a region $D$, it is said to be **differentiable in $D$**.

We pointed out in Section 2 that existence of the partial derivatives at a point does not guarantee continuity there. But by the definition above, if $z=f(x,y)$ is differentiable at $(x,y)$ then it is continuous there. Indeed,
$$ \lim_{\rho \rightarrow 0}\Delta z=0 $$
hence
$$ \lim_{(\Delta x, \Delta y) \rightarrow (0,0)}f(x+\Delta x, y+\Delta y) = \lim_{\rho \rightarrow 0}[f(x,y)+\Delta z] = f(x,y) $$
so $z=f(x,y)$ is continuous at $(x,y)$.

We now discuss conditions for $z=f(x,y)$ to be differentiable at $(x,y)$.

**Theorem 1 (necessary condition). If $z=f(x,y)$ is differentiable at $(x,y)$, then the partial derivatives $\frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}$ exist at $(x,y)$, and the total differential of $z=f(x,y)$ at $(x,y)$ is
$$ dz=\frac{\partial z}{\partial x}\Delta x + \frac{\partial z}{\partial y}\Delta y $$**

For one-variable functions, existence of the derivative at a point is necessary and sufficient for the differential to exist. For multivariable functions the situation is different: when the partial derivatives exist one can formally write $\frac{\partial z}{\partial x}\Delta x + \frac{\partial z}{\partial y}\Delta y$, but its difference from $\Delta z$ is not necessarily of higher order than $\rho$, so it is not necessarily the total differential. In other words, existence of the partial derivatives is necessary but not sufficient for differentiability.

**Theorem 2 (sufficient condition). If the partial derivatives $\frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}$ of $z=f(x,y)$ are continuous at $(x,y)$, then the function is differentiable at that point.**

The definition of the total differential and the necessary and sufficient conditions for differentiability extend in the obvious way to functions of three or more variables. It is customary to write the increments $\Delta x, \Delta y$ as $dx, dy$, called the differentials of the independent variables $x,y$; the total differential of $z=f(x,y)$ is then written
$$ dz=\frac{\partial z}{\partial x}dx + \frac{\partial z}{\partial y}dy $$
The fact that the total differential of a function of two variables equals the sum of its two partial differentials is referred to as the **superposition principle** for the differential of a function of two variables; it also applies to functions of more than two variables.

**3.2 The total differential in approximate computation**

From the definition of the total differential and the sufficient condition for its existence: when the two partial derivatives $f_x(x,y), f_y(x,y)$ of $z=f(x,y)$ are continuous at $P(x,y)$ and $|\Delta x|, |\Delta y|$ are both small, we have the approximation
$$ \Delta z \approx dz = f_x(x,y)\Delta x + f_y(x,y)\Delta y $$
which can also be written
$$ f(x+\Delta x, y+\Delta y) \approx f(x,y) + f_x(x,y)\Delta x + f_y(x,y)\Delta y $$
As for one-variable functions, these two formulas can be used for approximate computation and error estimation with functions of two variables.

**3.3 An example with a function of two variables**

Consider the function
$$f(x,y) = \left\{\begin{aligned}& \frac{xy}{x^2+y^2}, & x^2+y^2 \neq 0 \\& 0, & x^2+y^2 = 0\end{aligned}\right.$$
Its surface plot is shown below: | import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
@np.vectorize
def f(x, y):
return x * y / (x ** 2 + y ** 2)
step = 0.05
x_min, x_max = -1, 1
y_min, y_max = -1, 1
x_range, y_range = np.arange(x_min, x_max + step, step), np.arange(y_min, y_max + step, step)
x_mat, y_mat = np.meshgrid(x_range, y_range)
z = f(x_mat.reshape(-1), y_mat.reshape(-1)).reshape(x_mat.shape)
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(1, 2, 1, projection='3d', elev=50, azim=-50)
ax1.plot_surface(x_mat, y_mat, z, cmap=cm.jet, rstride=1, cstride=1, edgecolor='none',alpha=.8)
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
ax1.set_zlabel('$z$')
plt.show() | _____no_output_____ | MIT | Multivariable Differential Calculus and its Application.ipynb | reata/Calculus |
Let us look closely at the point $(0, 0)$:

**Limit.** Clearly, when the point $P(x, y)$ approaches $(0,0)$ along the $x$-axis,
$$ \lim_{\substack{(x,y)\rightarrow (0,0) \\ y=0}}f(x,y) = \lim_{x \rightarrow 0}f(x,0) =\lim_{x \rightarrow 0}0 = 0$$
and when the point $P(x, y)$ approaches $(0,0)$ along the $y$-axis,
$$ \lim_{\substack{(x,y)\rightarrow (0,0) \\ x=0}}f(x,y) = \lim_{y \rightarrow 0}f(0,y) =\lim_{y \rightarrow 0}0 = 0$$
Although the limits exist and agree when $P(x,y)$ approaches the origin in these two special ways (along the $x$-axis or the $y$-axis), **the limit $\lim_{(x,y) \rightarrow (0,0)}f(x,y)$ does not exist**. Indeed, when $P(x,y)$ approaches $(0,0)$ along the line $y=kx$,
$$ \lim_{\substack{(x,y)\rightarrow (0,0) \\ y=kx}}\frac{xy}{x^2+y^2} = \lim_{x \rightarrow 0}\frac{kx^2}{x^2+k^2x^2} = \frac{k}{1+k^2}$$
which clearly changes with the value of $k$.

**Continuity.** Since the limit does not exist, the point $(0,0)$ is a point of discontinuity of this function, so **the function is not continuous at $(0, 0)$**.

**Partial derivatives.**
$$\begin{split}f_x(0,0) = \lim_{\Delta x \rightarrow 0} \frac{f(0+\Delta x,0) - f(0,0)}{\Delta x} = \lim_{\Delta x \rightarrow 0}0 = 0 \\f_y(0,0) = \lim_{\Delta y \rightarrow 0} \frac{f(0,0+\Delta y) - f(0,0)}{\Delta y} = \lim_{\Delta y \rightarrow 0}0 = 0 \\\end{split}$$
For a function of one variable, differentiability implies continuity (while continuity does not imply differentiability). Here we see that **for a multivariable function, existence of the partial derivatives does not imply continuity**.

To discuss the total differential, modify the function to
$$f(x,y) = \left\{\begin{aligned}& \frac{xy}{\sqrt{x^2+y^2}}, & x^2+y^2 \neq 0 \\& 0, & x^2+y^2 = 0\end{aligned}\right.$$
Its surface plot is shown below: | @np.vectorize
def f(x, y):
return x * y / np.sqrt(x ** 2 + y ** 2)
step = 0.05
x_min, x_max = -1, 1
y_min, y_max = -1, 1
x_range, y_range = np.arange(x_min, x_max + step, step), np.arange(y_min, y_max + step, step)
x_mat, y_mat = np.meshgrid(x_range, y_range)
z = f(x_mat.reshape(-1), y_mat.reshape(-1)).reshape(x_mat.shape)
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(1, 2, 1, projection='3d', elev=50, azim=-50)
ax1.plot_surface(x_mat, y_mat, z, cmap=cm.jet, rstride=1, cstride=1, edgecolor='none',alpha=.8)
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
ax1.set_zlabel('$z$')
plt.show() | _____no_output_____ | MIT | Multivariable Differential Calculus and its Application.ipynb | reata/Calculus |
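A small symbolic check of the claims made above about the behaviour at the origin (my addition; it uses sympy, which is not imported elsewhere in this notebook):

```python
import sympy as sp

x, y, k, h = sp.symbols('x y k h', real=True)
f = x*y/(x**2 + y**2)
# Along y = k*x the limit as x -> 0 depends on k, so the double limit does not exist:
print(sp.simplify(sp.limit(f.subs(y, k*x), x, 0)))   # k/(k**2 + 1)
# Both partial derivatives of f at the origin are 0 by the limit definition:
print(sp.limit((f.subs({x: h, y: 0}) - 0)/h, h, 0))  # f_x(0,0) = 0
print(sp.limit((f.subs({x: 0, y: h}) - 0)/h, h, 0))  # f_y(0,0) = 0
```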
Data Import and Check | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from scipy import stats
import statsmodels.api as sm
from scipy.stats import mannwhitneyu
import matplotlib.gridspec as gridspec | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
* I import data and drop duplicates
* I had tried to set the user id as index. Expectedly, it did not work as a user can have multiple trips. However the user - trip combination did not work either, which revealed the entire rows duplicated
* Once the duplicates are removed, the count of user - trip combinations reveals they constitute a unique key | hoppi = pd.read_csv('C:/Users/gurkaali/Documents/Info/Ben/Hop/WatchesTable.csv', sep=",")
hoppi.drop_duplicates(inplace = True)
hoppi.groupby(['user_id', 'trip_id'])['user_id']\
.count() \
.reset_index(name='count')\
.sort_values(['count'], ascending = False)\
.head(5) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Now that I am sure, I can set the index: | hoppi.set_index(['user_id', 'trip_id'], inplace = True) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Pandas has great features for date calculations. I set the related field types as datetime in case I need those features | hoppi['departure_date'] = pd.to_datetime(hoppi['departure_date'], format = '%m/%d/%y')
hoppi['return_date'] = pd.to_datetime(hoppi['return_date'], format = '%m/%d/%y')
hoppi['first_search_dt'] = pd.to_datetime(hoppi['first_search_dt'], format = '%m/%d/%y %H:%M')
hoppi['watch_added_dt'] = pd.to_datetime(hoppi['watch_added_dt'], format = '%m/%d/%y %H:%M')
hoppi['latest_status_change_dt'] = pd.to_datetime(hoppi['latest_status_change_dt'], format = '%m/%d/%y %H:%M')
hoppi['first_buy_dt'] = pd.to_datetime(hoppi['first_buy_dt'], format = '%m/%d/%y %H:%M')
hoppi['last_notif_dt'] = pd.to_datetime(hoppi['last_notif_dt'], format = '%m/%d/%y %H:%M')
hoppi['forecast_last_warning_date'] = pd.to_datetime(hoppi['forecast_last_warning_date'], format = '%m/%d/%y')
hoppi['forecast_last_danger_date'] = pd.to_datetime(hoppi['forecast_last_danger_date'], format = '%m/%d/%y') | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
The explanations in the assignment do not cover all fields but field names and the content enable further data verification* Stay should be the difference between departure and return dates. Based on that assumption, the query below should return no records i.e. the 1st item in the tuple returned by shape should be 0: | hoppi['stay2'] = pd.to_timedelta(hoppi['stay'], unit = 'D')
hoppi['stay_check'] = hoppi['return_date'] - hoppi['departure_date']
hoppi.loc[(hoppi['stay_check'] != hoppi['stay2']) & (hoppi['return_date'].isnull() == False), \
['stay2', 'stay_check', 'return_date', 'departure_date']].shape | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
The following date fields must not be before the first search date. Therefore the queries below should reveal no records* watch_added_dt* latest_status_change_dt* first_buy_dt* last_notif_dt* forecast_last_warning_date* forecast_last_danger_date | hoppi.loc[(hoppi['watch_added_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'watch_added_dt']].shape
hoppi.loc[(hoppi['latest_status_change_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'latest_status_change_dt']].shape | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
33 records have a first buy suggestion datetime earlier than the user's first search. | hoppi.loc[(hoppi['first_buy_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'first_buy_dt']].shape | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
While the difference is just minutes in most cases, I don't have an explanation to justify it. Given the limited number of cases, I prefer removing them | hoppi.loc[(hoppi['first_buy_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'first_buy_dt']].head()
hoppi = hoppi.loc[~(hoppi['first_buy_dt'] < hoppi['first_search_dt'])] | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
There are also 2 records where the last notification is done before the user's first search. I remove those as well | hoppi.loc[(hoppi['last_notif_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'last_notif_dt']]
hoppi = hoppi.loc[~(hoppi['last_notif_dt'] < hoppi['first_search_dt'])] | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Same checks on last warning and last danger dates show 362K + and 98K + suspicious records. As the quantitiy is large and descriptions sent with the assignment do not contain details on these 2 fields, I prefer to keep them while taking a note here in case something provides with additional argument to delete them during analyses. | hoppi.loc[(hoppi['forecast_last_warning_date'] < hoppi['first_search_dt']), \
['first_search_dt', 'forecast_last_warning_date']].shape
hoppi.loc[(hoppi['forecast_last_danger_date'] < hoppi['first_search_dt']), \
['first_search_dt', 'forecast_last_danger_date']].shape | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Check outliers I reshape the columns in a way that will make working with seaborn easier: | hoppi_box_components = [hoppi[['first_advance']].assign(measurement_type = 'first_advance').reset_index(). \
rename(columns = {'first_advance': 'measurement'}),
hoppi[['watch_advance']].assign(measurement_type = 'watch_advance').reset_index(). \
rename(columns = {'watch_advance': 'measurement'}),
hoppi[['current_advance']].assign(measurement_type = 'current_advance').reset_index(). \
rename(columns = {'current_advance': 'measurement'})]
hoppi_box = pd.concat(hoppi_box_components)
sns.set(font = 'DejaVu Sans', style = 'white')
ax = sns.boxplot(x="measurement_type", y="measurement",
data=hoppi_box, palette=["#FA6866", "#01AAE4", "#505050"], #Hopper colors
linewidth = 0.5) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
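As a side note, the same long-format reshape could arguably be done more compactly with pandas melt; a minimal sketch, assuming the same hoppi DataFrame and column names as above:

# hedged alternative to the concat-based reshape above
hoppi_box_alt = hoppi[['first_advance', 'watch_advance', 'current_advance']] \
    .melt(var_name='measurement_type', value_name='measurement')
# hoppi_box_alt has the same 'measurement_type' / 'measurement' layout that sns.boxplot expects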
While several observations look like outliers on the boxplots, the histograms below show that the data is highly skewed. Therefore I do not consider them as outliers | f, axes = plt.subplots(1, 3, figsize=(15, 5), sharex=True)
sns.distplot(hoppi['first_advance'], kde=False, color="#FA6866", ax=axes[0])
sns.distplot(hoppi.loc[hoppi['watch_advance'].isnull() == False, 'watch_advance'], kde=False, color="#01AAE4", ax=axes[1])
sns.distplot(hoppi.loc[hoppi['current_advance'].isnull() == False, 'current_advance'], kde=False, color="#505050", ax=axes[2]) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
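To back the visual skewness argument with a number, the sample skewness can be computed explicitly; a small hedged sketch, assuming scipy is available in this environment:

from scipy.stats import skew
for col in ['first_advance', 'watch_advance', 'current_advance']:
    vals = hoppi[col].dropna()
    print(col, round(skew(vals), 2))  # values well above 0 indicate a strongly right-skewed distribution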
Question 1
Given the business model of Hopper, we should understand who is more likely to eventually buy a ticket. Logistic Regression constitutes a convenient way of conducting such an analysis. It runs faster than an SVM and is easier to interpret, making it ideal for a task like this one. I prepare categorical variables for trip types: | one_hot_trip_type = pd.get_dummies(hoppi['trip_type'])
hoppi2 = hoppi.join(one_hot_trip_type) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
I believe the city / airport distinction in the origin and destination fields refers to the fact that some airports are more central, such as the difference between Toronto's Billy Bishop and Pearson airports. I also checked some airport codes; they do correspond to cities with multiple airports, one or more of which are city airports | origin_cols = hoppi2['origin'].str.split("/", n = 1, expand = True)
hoppi2['origin_code'] = origin_cols[1]
hoppi2['origin_type'] = origin_cols[0]
destination_cols = hoppi2['destination'].str.split("/", n = 1, expand = True)
hoppi2['destination_code'] = destination_cols[1]
hoppi2['destination_type'] = destination_cols[0]
one_hot_destination_type = pd.get_dummies(hoppi2['destination_type'])
hoppi3 = hoppi2.join(one_hot_destination_type)
hoppi3.rename(columns={"airport": "destination_airport", "city": "destination_city"}, inplace = True)
one_hot_origin_type = pd.get_dummies(hoppi3['origin_type'])
hoppi4 = hoppi3.join(one_hot_origin_type)
hoppi4.rename(columns={"airport": "origin_airport", "city": "origin_city"}, inplace = True) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
I prepare categorical variables for whether a watch is placed or not: | hoppi4.loc[hoppi3['watch_added_dt'].isnull() == True, 'watch_bin'] = 0
hoppi4.loc[hoppi3['watch_added_dt'].isnull() == False, 'watch_bin'] = 1 | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Since the user - trip combination is unique across the data file, we do not have information on the changes for a user who has updated his trip status. As the data appears to cover the last status of a trip, I prefer to focus the analyses on concluded queries, i.e. trips either expired or booked. I exclude:
* actives: because their result is yet to be seen; the user can still end up booking before departure
* shopped: because a user can make several searches on the same itinerary with alternative options, each ending up as a new record in the database. I consider a search only once the user starts following the trip price
* inactive: because some have a departure in the future, so their result cannot be concluded. I also exclude those with a departure in the past, as they fall in the same category as the shopped trips: the user stopped following the trip.

I assign a new column for the records I take into account in my analyses further below: | hoppi4.loc[hoppi3['status_latest'] == 'expired', 'result'] = 0
hoppi4.loc[hoppi3['status_latest'] == 'booked', 'result'] = 1 | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
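As a quick, hedged sanity check on the exclusion logic above (it simply counts the statuses and the resulting labels; the unlabelled rows are the ones left out of the model):

print(hoppi4['status_latest'].value_counts(dropna=False))  # expired / booked / active / shopped / inactive counts
print(hoppi4['result'].value_counts(dropna=False))         # NaN rows are the excluded trips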
A person might be prompted to buy once the price falls because it makes sense, or maybe he buys as soon as it starts increasing to avoid a further increase. Whatever the case, it makes sense to compare the price at different time points with respect to the original price at the first search. For that, I create columns to measure the difference between the very first price and, respectively, the last price, the price when a buy was first recommended, and the lowest price: | hoppi4['dif_last_first'] = hoppi4['last_total'] - hoppi4['first_total']
hoppi4['dif_buy_first'] = hoppi4['first_buy_total'] - hoppi4['first_total']
hoppi4['dif_lowest_first'] = hoppi4['lowest_total'] - hoppi4['first_total'] | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
I create a categorical variable for the last recommendation as well, to check whether a buy recommendation makes the user book: | one_hot_last_rec = pd.get_dummies(hoppi4['last_rec']) # this creates 2 columns: buy and wait
hoppi5 = hoppi4.join(one_hot_last_rec)
hoppi5.loc[hoppi5['last_rec'].isnull(), 'buy'] = np.nan # originally null values are given 0. I undo that manipulation here | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
I make a table with rows containing certain results that I want to focus on i.e. expired and booked | hoppi6 = hoppi5.loc[hoppi5['result'].isnull() == False,
['round_trip',
'destination_city', 'origin_city',
'weekend',
'filter_no_lcc', 'filter_non_stop', 'filter_short_layover', 'status_updates',
'watch_bin', 'total_notifs', 'total_buy_notifs', 'buy',
'dif_last_first', 'dif_buy_first', 'dif_lowest_first', 'first_advance', 'result']]
hoppi6.info() | <class 'pandas.core.frame.DataFrame'>
MultiIndex: 45237 entries, (e42e7c15cde08c19905ee12200fad7cb5af36d1fe3a3310b5f94f95c47ae51cd, 05d59806e67fa9a5b2747bc1b24842189bba0c45e49d3714549fc5df9838ed20) to (d414b1c72a16512dbd7b3859c9c9f574633578acef74d120490625d9010103c7, 3a363a2456b6b7605347e06d2879162b3008004370f73a68f52523330ccd38a6)
Data columns (total 17 columns):
round_trip 45237 non-null uint8
destination_city 45237 non-null uint8
origin_city 45237 non-null uint8
weekend 45237 non-null int64
filter_no_lcc 45237 non-null int64
filter_non_stop 45237 non-null int64
filter_short_layover 45237 non-null int64
status_updates 45237 non-null int64
watch_bin 45237 non-null float64
total_notifs 44800 non-null float64
total_buy_notifs 44800 non-null float64
buy 44800 non-null float64
dif_last_first 44800 non-null float64
dif_buy_first 44133 non-null float64
dif_lowest_first 44800 non-null float64
first_advance 45237 non-null int64
result 45237 non-null float64
dtypes: float64(8), int64(6), uint8(3)
memory usage: 12.8+ MB
| MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Some rows have null values, such as the price difference between the buy moment and the first price, since some users may not have got the buy recommendation yet. To keep these features, I retain only the non-null rows: | df = hoppi6.dropna()
df.info()
X = df[['round_trip',
'destination_city', 'origin_city',
'weekend',
'filter_non_stop', 'filter_short_layover', 'status_updates', 'filter_no_lcc',
'watch_bin', 'total_notifs', 'buy', 'total_buy_notifs',
'dif_lowest_first',
'dif_last_first',
'dif_buy_first',
'first_advance']]
y = df['result']
print(X.shape, y.shape)
X_train, X_test , y_train, y_test = train_test_split(X, y, test_size=0.8, random_state=1)
logit_model=sm.Logit(y_train, X_train)
result=logit_model.fit(maxiter = 1000)
print(result.summary2())
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
accuracy_score(y_test, y_pred)
sum(y_train)/len(y_train)
print(classification_report(y_test, y_pred)) | precision recall f1-score support
0.0 0.99 1.00 0.99 32564
1.0 0.97 0.87 0.92 2743
micro avg 0.99 0.99 0.99 35307
macro avg 0.98 0.93 0.96 35307
weighted avg 0.99 0.99 0.99 35307
| MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
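Because bookings are a small minority class (see the class share computed above), it is worth confirming that the model is not simply riding on the majority class; a minimal, hedged sketch of a class-weighted refit for comparison — not part of the original analysis:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, precision_score

lr_bal = LogisticRegression(class_weight='balanced').fit(X_train, y_train)
y_pred_bal = lr_bal.predict(X_test)
print('recall   :', round(recall_score(y_test, y_pred_bal), 3))
print('precision:', round(precision_score(y_test, y_pred_bal), 3))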
Data driven insights:
The model shows a good level of accuracy. However, given the imbalance of the data (only 8% of observations correspond to an actual booking), it is crucial to check recall, which also shows a high value, i.e. false negatives are limited. Now that we know the model looks robust, we can make the following data-driven insights:
1. City travelers, regardless of their origin and destination, are not necessarily more likely to end up booking. These are the people likely to be business travelers. When we look at weekend travelers, which I use as a proxy for pleasure travelers, people are significantly more likely to end up booking. It looks like people are more sensitive to buy recommendations when it is a personal trip.
2. Among the filters, only those who filter for a short layover are more likely to book, although the significance is weaker (at a p = 0.10 level).
3. Buy recommendations significantly impact booking behavior, which indicates that the algorithm makes sense to the customer.
4. Price fluctuations have a significant impact on users' booking behavior:
    * The lowest-vs-first price difference is significant, with a positive relationship with booking, showing that people are more likely to buy when there is a price drop after their first query.
    * When the last price or the price at the buy recommendation is higher than the first price, users are less likely to book.
    * The above 2 points show that the algorithm leads to the expected user behavior.
5. Those who sign up for a price watch are more likely to book. This feature might be an indicator that the user is seriously making a plan. To be concrete, someone who is looking at options for a dream vacation in case he wins the lottery would not set the watch, whereas someone who took days off at work next month would do so.

Question 2
Most "watched" itineraries
I would like to see the watched cases w.r.t. the itinerary, i.e. NY to MTL would be considered the same as MTL to NY | hoppi5.loc[(hoppi5['watch_bin'] == 1.0) & (hoppi5['result'] == 0)].info()
pareto_watch_0 = hoppi5.loc[(hoppi5['watch_bin'] == 1.0) & (hoppi5['result'] == 0.0), ['origin_code', 'destination_code']]
pareto_watch_0.loc[pareto_watch_0['origin_code'] < pareto_watch_0['destination_code'], \
'itinerary'] = \
pareto_watch_0['origin_code'] + pareto_watch_0['destination_code']
pareto_watch_0.loc[pareto_watch_0['origin_code'] > pareto_watch_0['destination_code'], \
'itinerary'] = \
pareto_watch_0['destination_code'] + pareto_watch_0['origin_code']
pareto_watch_0.info()
pareto_watch = pareto_watch_0 \
.groupby(['itinerary']) \
.size().reset_index() \
.rename(columns = {0: 'count'}) \
.sort_values(['count'], ascending = False)
pareto_watch.set_index('itinerary', inplace = True)
pareto_watch['cumulative_sum'] = pareto_watch['count'].cumsum()
pareto_watch['cumulative_perc'] = 100 * pareto_watch['cumulative_sum'] / pareto_watch['count'].sum()
pareto_watch.loc[pareto_watch['cumulative_perc'] <= 80].shape[0]
pareto_watch.shape[0]
print('All observations where the user watched the price but did not book, cover ',
pareto_watch.shape[0],
'itineraries. Out of these, ',
pareto_watch.loc[pareto_watch['cumulative_perc'] <= 80].shape[0],
' constitute 80% of the whole observation set. That is around ',
round(100 * pareto_watch.loc[pareto_watch['cumulative_perc'] <= 80].shape[0] / pareto_watch.shape[0], 1),
'% of the whole set.') | All observations where the user watched the price but did not book, cover 11697 itineraries. Out of these, 4236 constitute 80% of the whole observation set. That is around 36.2 % of the whole set.
| MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
The list gives the biggest airports. This result reassures that it is additionally critical to make reliable estimations for these itineraries. The top 10 itineraries consist only of US destinations showing the importance of the US market. As we have seen in the previous question, a user setting the watch on is a good estimator of an actual booking. Therefore accuracy of price estimations is extra important for the US market. This information could be handy for the data scientists developing algorithms e.g. they can give extra weight to the accuracy of US flights Watch vs the Moment the First Search is Done * Here I am looking whether there is significant difference between users with a watch and without in terms of the following two: * the first price found * the number of days left to departure as of first search | dfw = hoppi5.loc[hoppi5['result'].isnull() == False, ['first_advance', 'first_total', 'watch_bin', 'result']]
dfw = dfw.dropna()
dfw.groupby('watch_bin').agg({'first_advance': np.mean, 'first_total': np.mean}) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
* As the data is skewed, using non-parametric tests makes more sense. I use the Mann-Whitney test for that purpose.
* The test reveals a significant difference between watched and non-watched itineraries at the 0.1 level in terms of the number of days between the departure and the first search. Those who place a watch have a week less time left to their departure compared to the rest. Users may be using Hopper as an assistant when they feel like they missed the time window in which they could shop for different offers. For those users, more frequent notifications can be planned | stat, p = mannwhitneyu(dfw.loc[dfw['watch_bin'] == 1, 'first_advance'],
dfw.loc[dfw['watch_bin'] == 0, 'first_advance'])
print('Statistics=%.3f, p=%.3f' % (stat, p)) | Statistics=41274130.000, p=0.074
| MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
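Since the Mann-Whitney test compares distributions rather than means, reporting the group medians alongside the means shown earlier is arguably more consistent; a small hedged sketch:

print(dfw.groupby('watch_bin')[['first_advance', 'first_total']].median())  # medians are less affected by the skew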
* The test on the same user groups (those watching vs those who don't) shows that they also differ in terms of the price they get at their first search. The difference is highly significant given the p-value.
* Those who watch have a trip cost of USD 125 more on average.
* There might be a growth opportunity in budget passengers. When the user makes a first search which reveals a relatively cheap price, Hopper can suggest watching the same trip with additional services such as business class. If that suggestion can be supported with a statement like "business flights for this trip can get as close as $X to the economy fares, why don't you watch?", the user can be convinced to shop for more. | stat, p = mannwhitneyu(dfw.loc[dfw['watch_bin'] == 1, 'first_total'],
dfw.loc[dfw['watch_bin'] == 0, 'first_total'])
print('Statistics=%.3f, p=%.3f' % (stat, p)) | Statistics=29810391.000, p=0.000
| MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Question 3
Chart 1: What is the situation as of now compared to PY?
* Note that from the current advance field in the data, I see that we are on April 10th, 2018
* Expired: Watch is on + Current Date > Departure Date
* Inactive: Watch is off + Current Date can be before or after the Departure Date
* Active: Watch is on + Current Date <= Departure Date
* Shopped: Watch is on or off + Current Date can be before or after the Departure Date; the later of the first search and the watch-added date is equal to the latest_status_change
* Booked:

Chart 1: Number of Incoming / Outgoing / Converted Searches Through Time
On a daily basis, I'd like to see the number of
* new searches of trips (incoming),
* end-of-validity trips, i.e. trips whose departure date is passing by,
* converted searches, i.e. booked trips.

Ideally I would like to see these counts for a given day / time window as well as for the same period of the prior year (more on this in Q4). However, the data covers first searches only over the period from the start of 2018 to April 10th. For illustrative purposes I show the counts of these KPIs throughout that period. It is good practice to create a date range and join the data onto it, as the data source may not have data for every day: | date_range = pd.date_range(start='1/1/2018', end='04/10/2018', freq='D')
df_date = pd.DataFrame(date_range, columns = ['date_range'])
df_date.set_index('date_range', inplace = True) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
incoming traffic counts the number of first time searches each day: | hoppi5['first_search_dt_dateonly'] = hoppi5['first_search_dt'].dt.date
incoming_traffic = hoppi5.groupby(['first_search_dt_dateonly']) \
.size().reset_index() \
.rename(columns = {0: 'count'})
incoming_traffic.set_index('first_search_dt_dateonly', inplace = True) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
outgoing traffic counts the number of trips with departure within the same day, each day. Until a trip is considered 'outgoing' there is a chance that it can be converted to booking: | outgoing_traffic = hoppi5.groupby(['departure_date']) \
.size().reset_index() \
.rename(columns = {0: 'count'})
outgoing_traffic.set_index('departure_date', inplace = True) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
converted traffic is the number of bookings that took place each day, i.e. conversions: | hoppi5['latest_status_change_dt_dateonly'] = hoppi5['latest_status_change_dt'].dt.date # use the booking (status change) date here, not the first search date
converted_traffic = hoppi5.loc[hoppi5['status_latest'] == 'booked'].groupby(['latest_status_change_dt_dateonly']) \
.size().reset_index() \
.rename(columns = {0: 'count'})
converted_traffic.set_index('latest_status_change_dt_dateonly', inplace = True) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
I join counts on the date range index created above: | df_chart1 = pd.merge(df_date, incoming_traffic, left_index = True, right_index = True, how='left')
df_chart1.rename(columns = {'count': 'incoming_count'}, inplace = True)
df_chart2 = pd.merge(df_chart1, outgoing_traffic, left_index = True, right_index = True, how='left')
df_chart2.rename(columns = {'count': 'outgoing_count'}, inplace = True)
df_chart3 = pd.merge(df_chart2, converted_traffic, left_index = True, right_index = True, how='left')
df_chart3.rename(columns = {'count': 'converted_count'}, inplace = True)
df_chart3['day'] = df_chart3.index.dayofyear
df_chart3_components = [df_chart3[['incoming_count', 'day']].assign(count_type = 'incoming').reset_index(). \
rename(columns = {'incoming_count': 'count'}),
df_chart3[['outgoing_count', 'day']].assign(count_type = 'outgoing').reset_index(). \
rename(columns = {'outgoing_count': 'count'}),
df_chart3[['converted_count', 'day']].assign(count_type = 'converted').reset_index(). \
rename(columns = {'converted_count': 'count'})]
df_chart4 = pd.concat(df_chart3_components) | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
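As an aside, the three merges above could arguably be replaced by reindexing each daily count onto the full date range, which also makes the 'no activity that day' case explicit; a minimal hedged sketch using the same variables:

# hedged alternative to the merge chain above
df_chart_alt = pd.DataFrame(index=date_range)
for name, df_cnt in [('incoming_count', incoming_traffic),
                     ('outgoing_count', outgoing_traffic),
                     ('converted_count', converted_traffic)]:
    s = df_cnt['count'].copy()
    s.index = pd.to_datetime(s.index)        # align the date-only group keys with the DatetimeIndex
    df_chart_alt[name] = s.reindex(date_range)
df_chart_alt = df_chart_alt.fillna(0)         # days with no activity become explicit zeros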
I plot the chart below. Note that the data collection seems to have started at the beginning of 2018; therefore the outgoing count does not reflect reality in the early periods of the chart. The number of trips whose departure is still in the future at a given time could also be shown; that would represent the pool of trips that could still be converted at that time (a sketch of such a count is added after the chart below). | sns.set_style('dark')
fig, ax1 = plt.subplots(figsize=(15,10))
ax2 = ax1.twinx()
sns.lineplot(x=df_chart3['day'],
y=df_chart3['incoming_count'],
color='#6FC28B',
marker = "X",
ax=ax1)
sns.lineplot(x=df_chart3['day'],
y=df_chart3['outgoing_count'],
color='#FA6866',
marker="v",
ax=ax1)
sns.lineplot(x=df_chart3['day'],
y=df_chart3['converted_count'],
color='#F0A02A',
marker="o",
ax=ax2)
fig.legend(['Incoming #', 'Expiring #', 'Converted #'])
ax1.set(xlabel='Day of Year', ylabel='Incoming and Expiring Search Count')
ax2.set(ylabel='Converted Search Count')
plt.title('Number of Incoming / Expiring / Converted Searches by Day', fontsize = 14)
plt.show() | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
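The 'pool of trips that could still be converted' mentioned above is not plotted; a hedged sketch of how it could be counted per day (trips first searched on or before a given day whose departure has not yet passed):

first_search = pd.to_datetime(hoppi5['first_search_dt_dateonly'])
open_pool = [((first_search <= day) & (hoppi5['departure_date'] >= day)).sum() for day in date_range]
df_chart3['open_pool_count'] = open_pool   # could be drawn as an extra line on the chart above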
Chart 2: KPIs Affecting Conversion - Categorical KPIs
Categorical variables that turned out to have an impact on conversion are worth following daily. As I suggested for the 1st chart, it makes more sense to compare these with the prior year's same-period figures. In this chart we follow the % of people who
* look for a round trip
* look for a weekend trip
* look for a short layover
* have an ongoing watch
* have received a buy suggestion

Note that these are all categories that help estimate conversion | df_chart_perc1 = hoppi5.loc[hoppi5['departure_date'] >= '04-10-2018'].describe() # describe() gives the mean per category.
# As they were binary, it gives the %
df_chart_perc2 = df_chart_perc1.loc[['mean'], ['round_trip', 'weekend', 'filter_short_layover', 'watch_bin', 'buy']]
df_chart_perc2 = df_chart_perc2.transpose().reset_index() # transpose to make it convenient for seaborn notation
df_chart_perc2['mean'] = df_chart_perc2['mean'] * 100 # convert the proportions from describe() into percentages
df_chart_perc2.rename(columns = {'mean':'percentage'}, inplace=True)
sns.set_style('white')
fig, ax = plt.subplots(figsize=(15,8))
sns.barplot(x="index",
y="percentage",
palette=["#FA6866", "#01AAE4", "#505050", "#AAAAAA", "#F67096"],
data=df_chart_perc2,
ax=ax)
ax.set(xlabel='Trip Categories', ylabel='% of Qualified Trips')
plt.title("Percentage of Trips",fontsize=14)
plt.show() | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Chart 3: KPIs Affecting Conversion - Ordinal KPIs
In a similar vein to Chart 2, I look at KPIs having an impact on conversion here as well; this time I check ordinal (numeric) variables. Again, it would make more sense to compare with the prior year's same-period figures. In this chart we follow the
* average difference between the lowest price found and the first price
* average difference between the last price and the first price
* average difference between the price when a buy recommendation was made and the first price
* average number of days between the first search and the departure date

Note that these are all KPIs that help estimate conversion as well. | df_chart_abs1 = hoppi5.loc[hoppi5['departure_date'] >= '04-10-2018'].describe()
df_chart_abs2 = df_chart_abs1.loc[['mean'], ['dif_lowest_first',
'dif_last_first', 'dif_buy_first',
'first_advance']]
df_chart_abs3 = df_chart_abs2.transpose().reset_index()
df_chart_abs3.rename(columns = {'mean':'average'}, inplace=True)
fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]}, figsize=(15,8))
sns.barplot(x=df_chart_abs3.loc[df_chart_abs3['index'] != 'first_advance']['index'],
y=df_chart_abs3.loc[df_chart_abs3['index'] != 'first_advance']['average'],
palette=["#FA6866", "#01AAE4", "#505050"],
ax=ax1)
sns.barplot(x=df_chart_abs3.loc[df_chart_abs3['index'] == 'first_advance']['index'],
y=df_chart_abs3.loc[df_chart_abs3['index'] == 'first_advance']['average'],
color='#AAAAAA',
ax=ax2)
ax1.set(xlabel='KPIs', ylabel='Average Difference in Prices ($)')
ax2.set(ylabel='Average Number of Days')
ax1.set_title('Trips by Absolute Numbers', fontsize = 14)
plt.show() | _____no_output_____ | MIT | Watch Bookings/Watches Table Analytics Exercise.ipynb | nediyonbe/Data-Challenge |
Background
This project deals with an artificial advertising data set, indicating whether or not a particular internet user clicked on an advertisement. This dataset can be explored to train a model that predicts whether or not new users will click on an ad based on their various low-level features. This data set contains the following features:
* 'Daily Time Spent on Site': consumer time on site in minutes
* 'Age': customer age in years
* 'Area Income': Avg. income of the consumer's geographical area
* 'Daily Internet Usage': Avg. minutes a day the consumer is on the internet
* 'Ad Topic Line': Headline of the advertisement
* 'City': City of consumer
* 'Male': Whether or not the consumer was male
* 'Country': Country of consumer
* 'Timestamp': Time at which the consumer clicked on the Ad or closed the window
* 'Clicked on Ad': 0 or 1, indicating whether the Ad was clicked

Dataset overview | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style('white')
df_ad = pd.read_csv('Data/advertising.csv')
df_ad.head(3)
df_ad.info()
df_ad.isnull().any()
df_ad.describe() | _____no_output_____ | MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
EDA
Age distribution of the dataset | sns.set_context('notebook',font_scale=1.5)
sns.distplot(df_ad.Age,bins=30,kde=False,color='red')
plt.show() | _____no_output_____ | MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
pairplot of dataset defined by `Clicked on Ad` | import warnings
warnings.filterwarnings('ignore') #### since the target variable is numeric, the joint plot by the target variable generates the warning.
sns.pairplot(df_ad,hue='Clicked on Ad') | _____no_output_____ | MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
Model training: Basic Logistic Regression | from sklearn.model_selection import train_test_split
X = df_ad[['Daily Time Spent on Site', 'Age', 'Area Income',
'Daily Internet Usage', 'Male']]
y = df_ad['Clicked on Ad']
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=100) | _____no_output_____ | MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
training | from sklearn.linear_model import LogisticRegression
lr = LogisticRegression().fit(X_train,y_train) | _____no_output_____ | MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
Predictions and Evaluations | from sklearn.metrics import classification_report,confusion_matrix
y_predict = lr.predict(X_test)
pd.DataFrame(confusion_matrix(y_test,y_predict),index=['True 0','True 1'],
columns=['Predicted 0','Predicted 1'])
print(classification_report(y_test,y_predict)) | precision recall f1-score support
0 0.86 0.92 0.89 119
1 0.93 0.86 0.89 131
micro avg 0.89 0.89 0.89 250
macro avg 0.89 0.89 0.89 250
weighted avg 0.89 0.89 0.89 250
| MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
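The accuracy and the report above depend on the default 0.5 decision threshold; a threshold-free metric such as ROC AUC can complement them — a small hedged sketch:

from sklearn.metrics import roc_auc_score
y_score = lr.predict_proba(X_test)[:, 1]           # probability of the positive class (clicked on ad)
print('ROC AUC:', round(roc_auc_score(y_test, y_score), 3))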
Model training: Optimized Logistic Regression | from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
x_test_scaled = scaler.transform(X_test) | _____no_output_____ | MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
3-fold CV grid search | grid_param = {'C':[0.01,0.03,0.1,0.3,1,3,10]}
grid_lr = GridSearchCV(LogisticRegression(),grid_param,cv=3).fit(X_train_scaled,y_train)
print('best regularization parameter: {}'.format(grid_lr.best_params_))
print('best CV score: {}'.format(grid_lr.best_score_.round(3))) | best regularization parameter: {'C': 0.3}
best CV score: 0.971
| MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
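An equivalent and slightly safer pattern is to wrap the scaler and the model in a Pipeline, so the scaling is re-fit inside each cross-validation fold rather than once on the full training set; a hedged sketch of what that could look like here:

from sklearn.pipeline import Pipeline
pipe = Pipeline([('scaler', StandardScaler()), ('lr', LogisticRegression())])
pipe_grid = GridSearchCV(pipe, {'lr__C': [0.01, 0.03, 0.1, 0.3, 1, 3, 10]}, cv=3)
pipe_grid.fit(X_train, y_train)                     # raw (unscaled) X_train; the pipeline scales within each fold
print(pipe_grid.best_params_, round(pipe_grid.best_score_, 3))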
Predictions and Evaluations | y_predict_2 = grid_lr.predict(x_test_scaled)
pd.DataFrame(confusion_matrix(y_test,y_predict_2),index=['True 0','True 1'],
columns=['Predicted 0','Predicted 1'])
print(classification_report(y_test,y_predict_2)) | precision recall f1-score support
0 0.94 1.00 0.97 119
1 1.00 0.94 0.97 131
micro avg 0.97 0.97 0.97 250
macro avg 0.97 0.97 0.97 250
weighted avg 0.97 0.97 0.97 250
| MIT | Mini capstone projects/Ad click prediction_Logistic Regression.ipynb | sungsujaing/DataScience_MachineLearning_Portfolio |
Demo Notebook: The Continuous-Function Estimator
Tophat and Spline bases on a periodic box
Hello! In this notebook we'll show you how to use the continuous-function estimator to estimate the 2-point correlation function (2pcf) with a method that produces, well, continuous correlation functions.
Load in data
We'll demonstrate with a low-density lognormal simulation box, which we've included with the code. We'll show here the box with 3e-4 ($h^{-1}$Mpc)$^{-3}$, but if you're only running with a single thread, you will want to run this notebook with the 1e-4 ($h^{-1}$Mpc)$^{-3}$ box for speed. (The code is extremely parallel, so when you're running for real, you'll definitely want to bump up the number of threads.) | x, y, z = read_lognormal_catalog(n='3e-4')
boxsize = 750.0
nd = len(x)
print("Number of data points:",nd) | Number of data points: 125342
| MIT | example_theory.ipynb | abbyw24/Corrfunc |
We'll also want a random catalog that's a bit bigger than our data: | nr = 3*nd
x_rand = np.random.uniform(0, boxsize, nr)
y_rand = np.random.uniform(0, boxsize, nr)
z_rand = np.random.uniform(0, boxsize, nr)
print("Number of random points:",nr)
print(x)
print(x_rand) | [1.13136184e+00 4.30035293e-01 2.08324015e-01 ... 7.49666077e+02
7.49922791e+02 7.49938477e+02]
[567.62600303 166.85340522 461.79238824 ... 577.65066275 9.85155819
581.1525008 ]
| MIT | example_theory.ipynb | abbyw24/Corrfunc |
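A small aside: the random catalog above is drawn without a fixed seed, so the exact numbers printed will differ between runs. A minimal hedged sketch of a reproducible alternative, assuming a reasonably recent NumPy (the seed value 42 is purely illustrative):

rng = np.random.default_rng(42)
x_rand = rng.uniform(0, boxsize, nr)
y_rand = rng.uniform(0, boxsize, nr)
z_rand = rng.uniform(0, boxsize, nr)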